Observation of inertial energy cascade in interplanetary space plasma

We show in this article direct evidence for the presence of an inertial energy cascade, the most characteristic signature of magnetohydrodynamic (MHD) turbulence, in the solar wind as observed by the Ulysses spacecraft. After a brief rederivation of the equivalent of Yaglom's law for MHD turbulence, we show that a linear relation is indeed observed for the scaling of mixed third-order structure functions involving Elsässer variables. This experimental result, confirming the prescription stemming from a theorem for MHD turbulence, firmly establishes the turbulent character of low-frequency velocity and magnetic field fluctuations in the solar wind plasma.

Space missions have shown that the interplanetary medium is permeated by a supersonic, highly turbulent plasma flowing out from the solar corona, the so-called solar wind [1,2]. The turbulent character of the flow, at frequencies below the ion gyrofrequency f_ci ≃ 1 Hz, has been invoked since the first Mariner mission [3]. In fact, velocity and magnetic fluctuation power spectra are close to Kolmogorov's −5/3 law [2,6]. However, even though field fluctuations are usually considered within the framework of magnetohydrodynamic (MHD) turbulence [2], a firmly established proof of the existence of an energy cascade, the main characteristic of turbulence, has so far remained a conjecture [4]. This gap could be filled by demonstrating the only exact and nontrivial result of turbulence [6], that is, a relation between the third-order moment of the longitudinal increments of the fields and the separation [5]. Such an observation would firmly place low-frequency solar wind fluctuations within the framework of MHD turbulence. The importance of this question extends beyond the understanding of the basic physics of solar wind turbulence. For example, it is well known that turbulence is one of the main obstacles to the confinement of plasmas in fusion devices [7,8]. The understanding of interplanetary turbulence and its effects on energetic particle transport is also of great importance for Space Weather research [9], which is a relevant issue for spacecraft and communication-satellite operations and for the safety of human beings. Finally, more theoretical problems are involved, such as the puzzle of solar coronal heating due to the turbulent flux toward small scales [10].

The incompressible MHD equations are more complicated than the standard equations of neutral fluid mechanics because the velocity of the charged fluid is coupled with the magnetic field generated by the motion of the fluid itself. However, written in terms of the Elsässer variables, defined as z± = v ± (4πρ)^{-1/2} b (v and b being the velocity and magnetic field respectively, and ρ the mass density), they have the same structure as the Navier-Stokes equations [4]:

∂_t z± + (z∓ · ∇) z± = −∇P + [(ν + κ)/2] ∇²z± + [(ν − κ)/2] ∇²z∓ ,   (1)

where P is the total hydromagnetic pressure, ν is the viscosity and κ the magnetic diffusivity. In particular, the nonlinear term appears as z∓ · ∇z±, suggesting the form of a transport process in which Alfvénic MHD fluctuations z± propagating along the background magnetic field are transported by fluctuations z∓ propagating in the opposite direction. This transport is active, as z± and z∓ are clearly not independent.
Following the same procedure as in [11,12], and assuming local homogeneity, a relation similar to the Yaglom equation for the transport of a passive quantity [13] can be obtained in the stationary state:

∇_r · ⟨ |Δz±|² Δz∓ ⟩ = −4ε± + 2ν ∇²_r ⟨ |Δz±|² ⟩ − (2/ρ) ⟨ Δz∓ · ∇ΔP ⟩ + inhomogeneous terms .   (2)

Here, Δz± ≡ z±(x′) − z±(x) are the (vector) increments of the fluctuations between two points x and x′ ≡ x + r, ∇ and ∇′ are the gradients at the corresponding two points, ∂ is the longitudinal derivative along the separation r, Y±(r) = ⟨ |Δz±|² Δz∓ ⟩ are the mixed third-order structure functions, and ε± ≡ ν ⟨ |∇z±|² ⟩_hom = 3ν ⟨ |∂ z±|² ⟩ are the pseudo-energy average dissipation rates, namely the dissipation rates of ⟨|z±|²⟩/2 respectively. Finally, ΔP represents the increment of the total pressure fluctuations, and the kinematic viscosity ν is here assumed to be equal to the magnetic diffusivity κ (this last assumption is in fact not necessary if we concentrate on the inertial range, as we will do from now on). The last term on the r.h.s. of equation (2) is related to large-scale inhomogeneities and disappears if the flow is globally homogeneous. Also, assuming local isotropy, the term containing the pressure correlation vanishes, so that after longitudinal integration of (2), and in the inertial range of MHD turbulence (i.e. when ν → 0), a linear scaling law is obtained:

Y±(r) ≡ ⟨ |Δz±|² Δz∓_r ⟩ = −(4/3) ε± r ,   (3)

characterizing a turbulent cascade with a well-defined finite energy flux ε±. An alternative derivation of this result using correlators instead of structure functions was obtained in [14], and the relation has been observed in numerical simulations [15]. When neutral-fluid turbulence is considered, equation (3) becomes [11] ⟨ |Δv|² Δv_r ⟩ = −(4/3) ε r (ε being the average kinetic energy dissipation rate), from which Kolmogorov's −4/5 law for the longitudinal third-order structure function, ⟨ (Δv_r)³ ⟩ = −(4/5) ε r, can be recovered under full isotropy.

In this work we show that relation (3) is indeed satisfied during some periods within solar wind turbulence. In order to avoid variations of the solar activity and ecliptic disturbances (such as slow wind sources, Coronal Mass Ejections, the ecliptic current sheet, and so on), we use high-speed polar wind data measured by the Ulysses spacecraft [16,17]. In particular, we analyse here the first seven months of 1996, when the heliocentric distance slowly increased from 3 AU to 4 AU, while the heliolatitude decreased from about 55° to 30°. The field components are given in the RTN reference frame, where R (radial) indicates the Sun-spacecraft direction, centered on the spacecraft and pointing away from the Sun, N (normal) lies in the plane containing the radial direction and the Sun's rotation axis, and T completes the right-handed reference frame. Note that, since the wind speed in the spacecraft frame is much larger than the typical velocity fluctuations, and is nearly aligned with the radial direction R, time fluctuations are in fact spatial fluctuations, with time and space scales (τ and r respectively) related through the Taylor hypothesis, so that r = −⟨v_R⟩ τ (note the reversed sign). From the 8-minute averaged time series z±(t), we compute the time increments Δz±(τ; t) = z±(t + τ) − z±(t), and obtain the mixed third-order structure functions Y±(−⟨v_R⟩ τ) = ⟨ |Δz±(τ; t)|² Δz∓_R(τ; t) ⟩_t, using moving averages ⟨•⟩_t over the time t on periods spanning around 10 days, during which the fields can be considered stationary.
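The computation just described is straightforward to express in code. The following Python sketch is our illustration, not the authors' analysis code: the array names, the Gaussian-units convention for b, and the assumption that the R component sits in column 0 are all ours.

```python
import numpy as np

def elsasser(v, b, rho):
    """z+/- = v +/- b / sqrt(4*pi*rho); v, b are (N, 3) arrays, rho is (N,).
    With b in Gaussian units, b/sqrt(4*pi*rho) is in velocity (Alfven) units."""
    va = b / np.sqrt(4.0 * np.pi * rho)[:, None]
    return v + va, v - va

def mixed_third_order(z_p, z_m, lag):
    """Y^+(tau) = < |dz+|^2 dz-_R >_t for a lag given in samples.
    The R (radial) component is assumed to be column 0."""
    dz_p = z_p[lag:] - z_p[:-lag]            # vector increments of z+
    dz_m_R = z_m[lag:, 0] - z_m[:-lag, 0]    # longitudinal (R) increment of z-
    return np.mean(np.sum(dz_p**2, axis=1) * dz_m_R)

# Example: lags from one sample (8 minutes) up to a few hundred samples,
# evaluated over one ~10-day stationary interval.
dt = 8 * 60.0                                        # sampling time [s]
lags = np.unique(np.logspace(0, 2.5, 30).astype(int))
# z_p, z_m = elsasser(v, b, rho)
# y_plus = [mixed_third_order(z_p, z_m, k) for k in lags]   # Y^+(tau), tau = k*dt
```

Exchanging the roles of z_p and z_m in the structure-function call gives Y^-(tau) in the same way.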
A linear scaling Y±(τ) = (4/3) ε± ⟨v_R⟩ τ is indeed observed in a significant fraction of the periods we examined, with an inertial range spanning as much as two decades, indicating the existence of a well-defined inertial energy cascade range in plasma turbulence. In fact, solar wind inertial ranges can even be larger than those reported for laboratory fluid flows [12], showing the robustness of this result. This is the first experimental validation of the MHD turbulence theorem discussed above. Figure 1 shows an example of the scaling and of the extension of the inertial range, for both Y±(τ). The linear scaling law generally extends from a few minutes to one day or more. This happens in about 20 periods of a few days each within the 7 months considered. Several other periods are found in which the scaling range is considerably reduced. In particular, the sign of Y±(τ) is observed to be either positive or negative. Since the pseudo-energy dissipation rates are positive definite, a positive sign for Y±(τ) (negative for Y±(r)) indicates a (standard) forward cascade, with pseudo-energies flowing toward the small scales to be dissipated. Conversely, a negative Y±(τ) is the signature of an inverse cascade, in which the energy flux is on average transferred toward larger scales. Figure 2 shows the location of the most evident scaling intervals, together with the values of the flux rates ε± estimated through a fit of the scaling law (3), typically of the order of a few hundred J kg⁻¹ s⁻¹. For comparison, values found for ordinary turbulent fluids are 1-50 J kg⁻¹ s⁻¹ [18].
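For completeness, here is a minimal sketch of the corresponding flux-rate estimate. It assumes arrays `taus` (lags in seconds) and `y` (the measured Y±(τ)), already restricted to the inertial range; the through-origin least-squares form follows from the scaling law quoted above.

```python
import numpy as np

def fit_eps(taus, y, v_r_mean):
    """Estimate eps from Y(tau) = (4/3) * eps * <v_R> * tau.

    taus: lags [s]; y: Y(tau) [m^3 s^-3]; v_r_mean: mean radial speed [m/s].
    The fit is forced through the origin, since the scaling law has no offset."""
    slope = np.sum(taus * y) / np.sum(taus**2)   # [m^3 s^-4]
    return 3.0 * slope / (4.0 * v_r_mean)        # eps in J kg^-1 s^-1

# A positive fitted eps (Y(tau) > 0, i.e. Y(r) < 0 since r = -<v_R> tau)
# corresponds to a standard forward cascade; a negative value signals an
# inverse cascade, as discussed in the text.
```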
It is worth noting that, in a large fraction of cases, both Y±(τ) switch from positive to negative linear scaling (or vice versa) within the same time period when going from small to large scales (see Figure 3). The occurrence of both kinds of cascade within the same flow is not so uncommon in hydrodynamic turbulence. This phenomenon has been attributed to some large-scale instability, as observed for example in geophysical flows or when the flow is affected by strong rotation [19]. In the case of the solar wind plasma, a possible explanation for the inverse cascade could be the enhanced intensity of the background magnetic field. This would make the turbulence mainly two-dimensional, allowing for an inverse cascade as observed in numerical simulations [20]. It should also be noticed that in most cases the time scale at which the cascade reverses its sign is of the order of 1 day. This scale roughly indicates where the small-scale Alfvénic correlations between velocity and magnetic field are lost [21,22]. This could mean that the nature of the fluctuations changes across the break. However, these particular aspects still deserve to be adequately considered within the solar wind context.

At this point, the question is why the scaling is not observed all the time within the solar wind. As already stated, equation (2) reduces to the linear law (3) only when the local homogeneity, incompressibility and isotropy conditions are satisfied. In general, solar wind inhomogeneities play a major role at large scales, so that local homogeneity is generally fulfilled within the range of interest. Regarding incompressibility, it has been shown that compressive phenomena mainly affect shocked regions and dynamical interaction regions such as stream-stream interfaces [1,2]. However, the time interval we analyze, because of Ulysses' high-latitude location, is not affected by these compressive phenomena [23]. On the other hand, it has also been shown [2] that magnetic field compressibility increases mainly at very small scales within the fast wind regime. It follows that the incompressibility assumption can be considered valid to a large extent for the analyzed interval and at intermediate scales. The large-scale anisotropy, mainly due to the average magnetic field, is only partially lost at smaller scales, and a residual anisotropy is always present [24,25], generally breaking the local isotropy assumption. Thus, while inhomogeneity, compressibility and anisotropy could all be responsible for the loss of linear scaling, anisotropy is probably the main candidate within high-latitude regions of the solar wind. It is important to note that the presence of a Yaglom-like law involving the third-order mixed moment is more general than the phenomenology usually built on the second-order moment. Indeed, while the Yaglom MHD relation (2) involves only differences along the parallel direction, which are in fact the only quantities accessible from single-spacecraft measurements, phenomenological arguments involve the full spatial dependence of the vector fields, which cannot yet be measured directly. This means that our result is compatible with both Kolmogorov- and Iroshnikov-Kraichnan-type cascades [4,2], and does not help in discriminating between these phenomenologies [26,27].

In conclusion, we have observed, for the first time in the solar wind, the only natural plasma directly accessible, evidence of the Yaglom MHD scaling law, indicating the existence of a local energy cascade in hydromagnetic turbulence. The scaling holds in a number of relatively long periods of about 10 days, and also provides the first estimate of the pseudo-energy dissipation rates. Although our data might not fully satisfy the requirements of homogeneity, incompressibility and isotropy everywhere, the observed linear scaling extends over a wide range of scales and appears very robust. The unexpected existence of the scaling law in anisotropic, weakly compressible and inhomogeneous turbulence still needs to be fully understood. Our result establishes a firm point within solar wind phenomenology and, more generally, provides better knowledge of plasma turbulence, with a wide range of practical implications for both laboratory fusion plasmas and space physics.
Abstract: Preliminary Experience with High-Resolution 3D Lymphangiovenulography: The First Success in Video Recording of the Lymphatic Pumping Using Photoacoustic Imaging in Man

INTRODUCTION: Complications associated with the venous microcirculation remain prevalent in autologous reconstruction with the free deep inferior epigastric artery perforator (DIEP) flap. It has previously been demonstrated that retaining the dermal component in DIEP flap harvest has a significant role in overall flap perfusion. Studies of venous perforator anatomy and assessment of perforator vascular territories (venosomes) have not received the same attention as arterial studies. This study evaluates the venous microcirculation of the anterior abdominal wall integument of the DIEP and superficial inferior epigastric venous systems, and the impact of the dermis on venous microvascular perfusion.

METHODS: Fourteen hemi-abdominal flaps from the midline to the mid-axillary line were harvested in fresh, non-frozen human cadavers. Following perforator mapping, the venae comitantes of the largest DIEP perforator and the superficial inferior epigastric vein (SIEV) were cannulated using a 27-gauge butterfly catheter. Flaps were evaluated with high-resolution computed tomography (CT) following an injection of iodinated contrast (Omnipaque®) into the DIEP venae comitantes and the SIEV. The contrast agent was flushed out of the flap between injections. The dermis was subsequently removed with cautery at the subdermal dissection plane, and flaps were re-imaged with CT angiography following injection into the DIEV perforator. Three-dimensional CT angiography of the venous territories allowed detailed assessment at each stage, including perfusion areas, volume and pattern of perfusion.

RESULTS: The average territory of the largest DIEV perforator was 180 cm² and extended to 47% of the hemi-abdominal integument. Patterns of venous territory distribution from individual venous perforators were assessed in hemi-abdominal flaps of the anterior abdomen. The perfusion territory without the dermis was significantly reduced, by a mean of 142 cm² (P = 0.01). The mean volume of perfusion was significantly reduced, on average by 10 cm³ (P = 0.01). A direct communication of the DIEV perforator with the superficial system was seen in 10/14 flaps (71%).

CONCLUSION: The venous microcirculation plays a critical role in the success of DIEP flap harvest. This study details the patterns of venosomes of DIEV perforators and their comparison with the superficial system, and highlights the critical role of preserving dermal integrity to maintain and optimize venous drainage of the flap by avoiding aggressive de-epithelialization.
Photoacoustic imaging (PAI), which is based on photoacoustic (PA) technology, is an optical imaging modality that can image the distribution of light-absorbing tissue components, such as hemoglobin or melanin, or of optical-absorption contrast agents, such as indocyanine green (ICG), with high spatial resolution. 1 The visualization of the lymphatic system in mice with the PAI technique has been demonstrated in a previous report, 2 but human lymphatic vasculature had not been visualized because the penetration depth was limited. Recently, our group has reported PAI analysis of tumor-associated vasculature in human breast cancer 3 and of palmar blood vessels. 4 In this report, we introduce a new imaging technique, PA lymphangiovenulography, to visualize human lymphatic vessels in three dimensions in detail.

MATERIALS AND METHODS: We used the PAI-05 system with a semi-spherical detector array, made by Canon Inc. (Japan), Hitachi, Ltd. (Japan) and Japan Probe Co., Ltd. (Japan). To image the lymphatic structures of the limbs in 4 healthy subjects, ICG was administered subcutaneously into the dorsal aspect of each foot or hand (some web spaces). A PA image was acquired by irradiating the tissue with a laser at wavelengths in the near-infrared region. The voxel size was 0.125 mm.

RESULTS: In the still images, lymphatic vessels with diameters down to 0.2 millimeters could be observed three-dimensionally, together with the blood vessels around them. In the videos, it was observed that lymphatic fluid containing ICG was transported by spontaneous contraction of the collecting lymph vessels. The flow was observed intermittently, at various intervals. The velocity of the flow also varied from subject to subject. Lymph flow tended to be faster in the upper limbs than in the lower limbs.

CONCLUSION: In this study, three-dimensional PA images with high spatial and temporal resolution were obtained using the PAI-05 system, allowing the visualization of fine lymphatic vasculature and its pumping movement. The system is a promising tool for more precise quantitative assessment of the pumping frequency and of the flow velocity in the collecting lymphatic vessels of lymphedema patients or subclinical subjects.

ACKNOWLEDGEMENT: This work was funded by the ImPACT Program of the Council for Science, Technology and Innovation (Cabinet Office, Government of Japan).

Co-Authors: Susumu Saito, MD, PhD; Shigehiko Suzuki, MD, PhD. Affiliation: Kyoto University, Kyoto

BACKGROUND: Thin free flaps are challenging procedures. In particular, the failure of thin anterolateral thigh (ALT) flaps is reported to be associated with the distal branching pattern of the perforator vessels. In a previous study, we demonstrated the feasibility of using photoacoustic tomography (PAT) to identify ALT perforators and their branching patterns in the subcutaneous layer, especially those in oblique or horizontal orientations. 1 In this paper, we present the protocol and preliminary results of a clinical trial of three-dimensional vascular mapping of the distal ALT perforators using PAT imaging.

METHODS: Patients for whom reconstructive surgery using an ALT flap was planned were recruited. Four days before the operation, the bilateral anterolateral aspects of the mid-thigh were examined by PAT. The perforator orientation, as determined by ultrasound, was marked with red ink. The body surface was marked every 4 cm with purple ink. The acquired data were processed three-dimensionally using a laboratory-made imaging software program.
The depths of the visualized perforator vessels and the distal networks were distinguished by color gradation. The body surface markings were preserved until the day of the operation using a film sheet. Two days before the operation, the three-dimensional vascular data were converted into a two-dimensional vascular map using a projective image-reconstruction technique. A semi-automated normal-vector detection method and curvature approximation were applied to maintain accuracy. The depth was indicated by color gradation on a sterilized transparent sheet. The mapping sheet was attached to the patient's thigh before the operation. The skin incision was performed by cutting through the transparent mapping sheet. The stem portion of the perforator vessels was evaluated at the level of the fascia lata.

RESULTS: The first clinical trial involved a 32-year-old male patient with a malignant chest-wall tumor. Each PAT scan took approximately 5 minutes per thigh. The perforator vessels were visualized at the points expected from ultrasonography. A two-dimensional vascular mapping sheet was prepared by drawing the courses of the subcutaneous micro-vessels using projection mapping. Our computing
Bulk Scalar Stabilization of the Radion without Metric Back-Reaction in the Randall-Sundrum Model

Generalizations of the Randall-Sundrum model containing a bulk scalar field $\Phi$ interacting with the curvature $R$ through the general coupling $R f(\Phi)$ are considered. We derive the general form of the effective 4D potential for the spin-zero fields and show that in the mass matrix the radion mixes with the Kaluza-Klein modes of the bulk scalar fluctuations. We demonstrate that it is possible to choose a non-trivial background form $\Phi_0(y)$ (where $y$ is the extra-dimension coordinate) for the bulk scalar field such that the exact Randall-Sundrum metric is preserved (i.e. such that there is no back-reaction). We compute the mass matrix for the radion and the KK modes of the excitations of the bulk scalar relative to the background configuration $\Phi_0(y)$, and find that the resulting mass matrix implies a non-zero value for the mass of the radion (identified as the state with the lowest eigenvalue of the scalar mass matrix). We find that this mass is suppressed relative to the Planck scale by the standard warp factor needed to explain the hierarchy puzzle, implying that a mass $\sim 1~\mathrm{TeV}$ is a natural order of magnitude for the radion mass. The general considerations are illustrated in the case of a model containing an $R\Phi^2$ interaction term.

Introduction

Although the Standard Model (SM) of electroweak interactions successfully describes almost all existing experimental data, the model suffers from many theoretical drawbacks. One of these is the hierarchy problem: namely, the SM cannot consistently accommodate the weak energy scale O(1 TeV) and a much higher scale such as the Planck mass scale O(10^18 GeV). Therefore, it is widely accepted that the SM is only an effective low-energy theory embedded in some more fundamental high-scale theory that presumably could contain gravitational interactions. Recently, many models that incorporate gravity have been proposed in the context of higher-dimensional space-time. These models have received tremendous attention since they might provide a solution to the hierarchy problem. One of the most attractive attempts has been formulated by Randall and Sundrum [1], who postulated a 5D universe with two 4D surfaces ("3-branes") and the action

S = ∫ d⁴x ∫ dy √|g| [ −R/(2κ²) − Λ ] − Σ_{i=1,2} ∫ d⁴x √|g_i| Λ_i ,   (1)

where R is the Ricci scalar, κ² = 8πG_N^(5) with G_N^(5) the Newton constant in 5D, and Λ, Λ_1 ≡ Λ_hid and Λ_2 ≡ Λ_vis are the cosmological constants in the bulk, on the hidden brane and on the visible brane, respectively. In the above, g_ij (i, j = 0, 1, 2, 3, 4) is the bulk metric, and (g_1)_μν ≡ (g_hid)_μν(x) ≡ g_μν(x, y = y_1 ≡ 0) and (g_2)_μν ≡ (g_vis)_μν(x) ≡ g_μν(x, y = y_2 ≡ 1/2) (μ, ν = 0, 1, 2, 3) are the induced metrics on the branes. It turns out that if the bulk and brane cosmological constants are related by

Λ = −6 m_0²/κ² ,   Λ_hid = −Λ_vis = 6 m_0/κ² ,   (2)

and if periodic boundary conditions (y → y + 1) with identification of (x, y) and (x, −y) are imposed, then the following metric is a solution of the 5D Einstein equations:

ds² = e^{−2σ(y)} η_μν dx^μ dx^ν − b_0² dy² ,   (3)

where σ(y) = m_0 b_0 [y(2θ(y) − 1) − 2(y − 1/2)θ(y − 1/2)], and b_0 is a constant parameter that is not determined by the equations of motion. Within the Randall-Sundrum (RS) model, all the SM particles, as well as the non-gravitational forces, are assumed to be present on one of the 3-branes, the "visible brane". Gravity lives on the visible brane, on the second brane (the "hidden brane") and in the bulk. All mass scales in the 5D theory are of the order of the Planck mass.
By placing the SM fields on the visible brane, the initial 5D electroweak mass scale O(M_Pl) is rescaled by an exponential suppression factor (the "warp factor") γ ≡ e^{−m_0 b_0 /2}, down to the weak scale O(1 TeV) without any severe fine-tuning. To achieve the necessary suppression, one needs m_0 b_0 /2 ∼ 37. This is a great improvement compared to the original problem of accommodating both the weak and the Planck scale within a single theory.
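As a quick numerical illustration of the suppression just described (our arithmetic, not part of the paper), one can check that m_0 b_0 /2 ≈ 37 indeed maps the reduced Planck mass to the weak scale:

```python
import math

# Warp-factor check: a Planck-scale mass redshifted by gamma = e^{-m0*b0/2}.
M_Pl = 2.4e18            # reduced Planck mass [GeV]
exponent = 37.0          # m0*b0/2
gamma = math.exp(-exponent)
print(f"warp factor gamma = {gamma:.3e}")            # ~ 8.5e-17
print(f"gamma * M_Pl      = {gamma * M_Pl:.1f} GeV") # ~ 2e2 GeV, i.e. the weak/TeV scale
```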
The radion effective potential

A drawback of the RS model is the presence of a massless degree of freedom called the radion. There have been several attempts in the literature (see Refs. [2,3,4]) to generate the radion mass by introducing a bulk scalar field Φ that would induce an appropriate radion potential. Here we will derive the general form of the potential within a class of models containing the bulk scalar interacting with gravity through the action

S = ∫ d⁴x ∫ dy √|g| [ −R/(2κ²) − Λ + (1/2) g^{ij} ∂_iΦ ∂_jΦ − V(Φ) + α R f(Φ) ] − Σ_{i=1,2} ∫ d⁴x √|g_i| [ Λ_i + V_i(Φ) ] ,   (4)

where we have introduced the bulk potential V(Φ) and the brane potentials V_1(Φ) ≡ V_hid(Φ) and V_2(Φ) ≡ V_vis(Φ). In addition to the standard scalar kinetic-energy term, we have allowed for a general coupling of the bulk scalar to gravity through the αRf(Φ) interaction term. Since we would like to preserve the explanation of the hierarchy proposed by Randall and Sundrum, we will also require that the RS metric (3) remain an exact solution of the Einstein equations even in the presence of the bulk scalar Φ. Therefore, it is useful to separate out in the action (4) both the bulk (Λ) and brane (Λ_hid, Λ_vis) cosmological constants that satisfy the RS conditions, Eq. (2). In order to identify the radion, it is sufficient to consider scalar excitations of the metric around the background RS solution. Hereafter, we will adopt the parameterization of the metric fluctuations of Refs. [5,6], Eq. (5), in which h_μν(x, y) and b(x) are related to the graviton and the radion, respectively. Expanding −√|g| R/(2κ²) in the action (4) then yields the kinetic term for b(x), from which the canonically normalized radion is defined [Eq. (6)]. It is easy to verify that if the interactions of the scalar field Φ are switched off, then there is no potential for b(x), and consequently the radion would be massless. The bulk scalar has been introduced here in order to generate a non-trivial potential for the radion. However, in general the presence of the scalar leads to a non-trivial interaction potential between the radion and the scalar, in addition to the appearance of a radion potential. Therefore, the strategy that we will follow here is to determine the background scalar configuration Φ(y) such that the RS background metric is preserved, and then to expand the action (4) around it.

First, one has to solve the Einstein equations together with the equation of motion for Φ. Let us start with the Einstein equations, keeping in mind that we would like to preserve the RS metric as a vacuum solution. We write them as G_ij = κ² [T_ij^RS + (δT)_ij(Φ)] (Eq. (7)), where G_ij is the Einstein tensor and T_ij^RS and (δT)_ij(Φ) are the pure-RS and the scalar contributions to the energy-momentum tensor, respectively. (In the following we will not discuss interactions with gravitons, as they will not influence the potential for the scalar degrees of freedom; the flat metric η_μν will be assumed whenever repeated indices are summed.) It is easy to show the explicit form of (δT)_ij(Φ) [Eq. (8)]; for the RS background metric one then obtains explicit expressions in which here, and in what follows, the prime denotes differentiation with respect to the 5th coordinate, y.

Since we demand that the RS metric be preserved even when the scalar is present (no back-reaction from the scalar), we have to require that the extra contributions to the energy-momentum tensor (calculated using the RS metric) vanish, (δT)_ij(Φ) = 0 [Eq. (13)]. Since we want a background solution for Φ that respects 4D Lorentz invariance, we will assume that the solution is a function of the extra-dimension coordinate y only. The (μ, ν) and (5,5) components of Eq. (13) then give Eqs. (14) and (15), respectively. Note that, since Φ(y) should be a continuous function, these equations imply Φ′(y) = 0 and V(Φ) = V_vis(Φ) = V_hid(Φ) = 0 for α = 0. Therefore, the introduction of the extra coupling αRf(Φ) is essential in order to obtain a no-back-reaction solution, (δT)_ij(Φ) = 0. In addition, Φ must satisfy its equation of motion, Eq. (16), in which the RS metric is used.

Once the vacuum solution Φ_0 is determined, we expand the action, Eq. (4), adopting the parameterization of the scalar fluctuations of the metric given in Eq. (5) and defining the quantum fluctuation of Φ through Φ(x, y) = Φ_0(y) + φ(x, y). Then, in order to determine the effective 4D potential for the scalar degrees of freedom, we collect all non-derivative contributions to the d⁴x integrand in the action of Eq. (4) containing b(x) and φ(x, y); in other words, we expand the theory defined by the action (4) around this background. First, let us calculate the Ricci scalar for the metric (5) and collect all the terms containing derivatives with respect to the extra coordinate y [Eq. (18)]; the ellipses there contain only (x, y)-derivatives of the graviton and x-derivatives of the radion. Since we are going to calculate the potential, derivatives of b(x) will be dropped hereafter. As has already been mentioned, we will not consider fluctuations of the η_μν part of the metric; therefore, we will also neglect all terms containing h_μν(x, y). Using the contributions to the Ricci scalar displayed in Eq. (18), one gets from the action (4) the effective 4D potential V_eff(b, φ) [Eq. (19)], where Φ_0 = Φ_0(y) denotes the vacuum solution (the one that preserves the RS metric) for the scalar Φ. Note that Φ_0 is determined as a solution of the equations of motion for the RS background metric; as a result, it does not contain any dependence on the radion field b(x). It is easy to verify that the contributions from the pure RS model to V_eff(b, φ) vanish when the relations (2) are satisfied: a non-trivial potential requires an extension of the minimal RS model.

Next, it is easy to show from Eq. (8) that if Φ is independent of x, then the identity (20) holds. Multiplying this equation by exp{−4σ}, integrating by parts and using the RS relations, Eq. (2), one obtains a simple relation, Eq. (21), between (δT)^μ_μ(Φ_0) and the minimum of the effective potential of Eq. (19). Note that in Eq. (21) we employ the trace from Eq. (20) calculated for the background solution Φ_0. Since the no-back-reaction requirement, Eq. (13), implies (δT)^μ_μ(Φ_0) = 0, the relation (21) shows that the effective potential must vanish at the minimum [Eq. (22)]. It is straightforward to verify that the terms linear in b(x) and φ(x, y) disappear by virtue of Eq. (13) and Eq. (16), respectively. In order to determine the scalar masses, one has to expand the action (4) up to terms quadratic in b and φ. First, let us define the Kaluza-Klein (KK) modes of the scalar fluctuations, φ(x, y) = Σ_n φ_n(x) J_n(y), with orthonormal functions J_n(y). The resulting mass terms, in which r is the canonically normalized radion [see Eq. (6)], are given in Eq. (25).
Inserting the equation of motion (16), the elements of the mass matrix can be written explicitly. Before we can estimate the size of the elements of the mass matrix, we must first discuss the constraint imposed on the model by the requirement of maintaining the standard strength of classical 4D gravity. Adopting the metric defined by Eq. (5), one can calculate the coefficient of the 4D Ricci scalar obtained for g_μν = η_μν + h_μν. In order to reproduce the standard result, the coefficient should be M_Pl²/2. The resulting constraint is Eq. (30), where M_Pl ∼ 2 × 10^18 GeV is the reduced Planck mass and γ = exp(−m_0 b_0 /2). In order to solve the hierarchy problem, one needs m_0 b_0 /2 ∼ 37; therefore, terms of order γ² can be safely neglected in Eq. (30). It is clear that the most natural scenario emerges when all the mass parameters of the 5D theory are of the order of M_Pl. In this case, the elements of the scalar mass matrix defined by Eq. (25) are of the order of magnitude given in Eq. (31), where a and b are calculable coefficients of order 1. It is clear that for m_0 b_0 /2 ∼ 37 the lowest scalar mass is of order γ M_Pl. There are two essential conclusions. First, we see that the lowest scalar mass is suppressed relative to the Planck scale by the same warp factor that explains the hierarchy, so that ∼1 TeV is its natural order of magnitude.

The RΦ² interaction

In this section, we will illustrate the general discussion of Section 2 by choosing a specific form of the interaction between the bulk scalar and gravity, f(Φ) ∝ Φ² [Eq. (33)]. The function f(Φ) has been normalized such that α = −1 corresponds to a 5D conformally invariant interaction. This coupling has been discussed in various contexts by many authors in the past; see e.g. Ref. [7]. Insertion of the solution Φ_0(y) into, for example, Eq. (35) fixes the form of the bulk potential [Eq. (36)]. In addition, Φ must satisfy its equation of motion, as obtained from Eq. (16) for the form Eq. (33). It is easily verified that the bulk form for Φ_0(y), Eq. (37), satisfies these requirements, and the conditions (38) and (42) can be solved for the parameter c in terms of V_hid(0) and V′_hid(0). Two solutions are possible for c, written in terms of functions f_i and of the quantity R_hid defined in Eq. (45). The functions f_i denote the two possible solutions of the quadratic equation for c, with i = 1, 2 corresponding to the + and − signs in front of the square root, respectively [Eq. (46)]; the quantities A, B, C appearing there are given in terms of the model parameters. Once c is determined, we can compute cγ in terms of V′_vis(1/2) and V_vis(1/2) using Eqs. (39) and (43). One finds the relation (47) for cγ in terms of f_j(β, R_vis), and the appropriate branch, j = 1 or j = 2, is determined by the need to obtain a very small value for the warp factor γ_i, i.e. f_j(β, R_vis) ∼ 0. The latter is most straightforwardly achieved by requiring C_vis ≃ 0 and choosing the j = 1, + (j = 2, −) solution for f_j(β, R_vis) [see Eq. (46)] for B_vis > 0 (B_vis < 0), respectively. From Eq. (47), the requirement that C_vis ≃ 0 is equivalent to the value of R_vis given in Eq. (49). For this R_vis, the positivity of B_vis² − 4 A_vis C_vis is automatic so long as B_vis is not extremely tiny. For R_vis as given in Eq. (49), one finds B_vis ≃ 3β/2 − 1, so that we must use the j = 1, + solution for β > 2/3 and the j = 2, − solution for β < 2/3; for β = 2/3, the choice becomes ambiguous. From Eq. (47), the warp factor γ_i (and hence the distance between the branes) for a given solution c_i is given by Eq. (50), with j = 1 for β > 2/3 and j = 2 for β < 2/3, as discussed earlier. In practice, we will require that γ_i = γ ≡ e^{−37} for either choice of i. Further, one can use (for example) Eq. (42) to determine d_i.
Once d_i, c_i and γ_i have been fixed as specified above, Eqs. (38) and (39) imply a consistency constraint on the model parameters, Eq. (52), where γ_i ∼ 0 has been used to obtain the last approximate form. Using γ_i ∼ 0, Eqs. (39) and (43) also simplify [Eq. (53)]. Finally, it is important to note that the definition of R_hid, Eq. (45), yields a constraint on the relative signs of R_hid and V_hid(0), Eq. (54). Using Eqs. (53) and (52), the condition Eq. (54) can be converted to a requirement expressed entirely in terms of R_hid and β for a given solution branch i.

The conformal limit of α = −1 (β = −2/3) requires special treatment, since for this choice A_hid = 0. In this case, g(x) = 2(1 − x)^{−4} and h(x) = (8/3)(1 − x)^{−5/2}. Eqs. (38) and (42) then yield Eq. (56) and a companion relation, respectively, from which we conclude that V_hid(0) < 0 and V′_hid(0) < 0 are required, which also implies [see Eqs. (45) and (54)] that R_hid < 0. By combining Eq. (56) and Eq. (45) we find the relation (57) between c and R_hid. Note that R_hid < −2 is required for 0 < c < 1, but that c is negative for −2 < R_hid < 0. There is nothing obvious to forbid this latter choice, since (1 − c e^{−σ(y)}) will automatically be positive for all y if c < 0. In an analogous spirit, utilizing Eqs. (39) and (43), one can derive the corresponding visible-brane relation, Eq. (58). Combining Eqs. (57) and (58), one obtains the result (59) for the warp factor. In order to have a phenomenologically acceptable small value for the warp factor γ, either R_vis ∼ 2 or R_hid ∼ 0 is required. The remaining constraint in this case [the analog of Eq. (52)], combined with the earlier-noted constraint that V_hid(0) < 0, results in a requirement on V_vis(1/2). From Eq. (59) and the requirement that γ > 0, the only allowed choices are (0 < R_vis < 2 and −2 < R_hid < 0) or (R_vis > 2 and R_hid < −2) [Eq. (61)].

(In addition to the general solutions discussed in the main text for this case, there exists a special background solution, Φ_0(y) ∝ e^{(3/2)σ(y)}, for which there are no matching conditions, since the solution satisfies all the necessary equations everywhere, including the boundaries. For this particular, conformally symmetric case, substituting the form of Φ_0 given above into Eq. (35) leads to a vanishing bulk potential, V(Φ_0) = 0. Similarly, Eq. (36) gives V_{1,2}(Φ) = 0. Because all the potentials are zero, one finds M_r² = 0. We are only interested in cases for which a non-zero mass is generated for the radion. Note also that Eq. (49) does not apply for the conformal choice of α.)

For R_vis ∼ 2, as generally needed for small γ, one finds a correspondingly simplified solution. In Fig. 1, we display Φ_0(y)/d as a function of y for three cases. In all cases, we have chosen input parameters such that m_0 b_0 /2 ≃ 37, as required for the warp factor γ = e^{−m_0 b_0 /2} ∼ 1 TeV/M_Pl. In the first case, we have taken β ∼ +2/3, equivalent to R_vis = 4 from Eq. (49), and R_hid = +1; for this choice, c ∼ 0.7835. In the second (third) case, we make the conformal choice of β = −2/3, take R_vis ∼ 2 (for small γ) and employ R_hid = −4 (R_hid = −1), for which c = 1/2 (c = −1). In the β = 2/3 case (which is representative of cases with β > 0), we see that Φ_0(y) is repelled from the hidden brane located at y = 0. The second (third) case is representative of a β = −2/3 case for which Φ_0(y) is strongly peaked on (strongly repelled from) the hidden brane. A useful cross-check is to adopt the explicit form of the solution (37) to verify that indeed V(0, 0) vanishes, as predicted by Eq. (22). In order to calculate the radion potential at the minimum, we use Eq. (35) to eliminate V(Φ) in the general formula (19).
The result is the simplified expression (63). Then, inserting the solution Eq. (37) into Eq. (63), one obtains the radion mass-squared M_r² in terms of a function K(R_hid). (We re-emphasize that m_0 b_0 /2 is calculable in terms of the input parameters using Eq. (50) or Eq. (59).) In Fig. 2, we plot K(R_hid) as a function of R_hid. We observe that M_r² > 0 everywhere except at R_hid = −2 (a point where c = 0 is required by Eq. (57) and Φ_0(y) becomes trivial). Our two earlier β = −2/3 plots of the wave function thus correspond to choices for which M_r² > 0. For fixed β = −2/3 and γ ≪ 1, M_r² is approximately a function of R_hid only, where R_hid is to be restricted to those values for which a given solution c_1 or c_2 satisfies the other consistency constraints. We explore the behavior of M_r² as follows. First, recall that V_vis(1/2) is fixed [through Eq. (53)] and depends non-trivially on R_hid, the other parameters being held fixed. Remarkably, one finds (M_r²)_i > 0 (as expected) so long as: (a) R_hid is such that 1 + (2 + 3β)/R_hid > 0 (so that c_{1,2} are real); (b) 1 − c_i > 0 when 1/β is not an integer; and (c) the positivity condition, Eq. (54), which we abbreviate as R_hid V_hid > 0, is satisfied. There are many different cases; here we simply describe a couple of illustrative possibilities. Consider first two choices of β such that 2 + 3β > 0. In the R_hid ≤ −4 region, 1 − c_2 and (R_hid V_hid)_2 are both positive only for −4.5 ≲ R_hid ≤ −4, and (M_r²)_2 varies rapidly, as illustrated in Fig. 3. This case illustrates the extreme sensitivity that M_r² can have to R_hid, and shows that very large and very small values of M_r² are quite possible. Overall, it is clear that there is a large range of possible models that satisfy all the constraints necessary for an exact Randall-Sundrum metric with positive radion mass-squared. (Some particular choices are somewhat more fine-tuned than others.) We have not understood how to choose between the various models; possibly, the conformal models with β = −2/3 should be regarded as the more attractive.

Conclusions

We have considered a class of generalizations of the Randall-Sundrum model containing a bulk scalar field Φ. We demonstrated that the absence of back-reaction from the scalar on the Randall-Sundrum metric solution requires the existence of an extra interaction between gravity and the scalar; here, we considered the coupling R f(Φ). A general form of the potential for the fluctuation of the compactification volume (the radion) and the Kaluza-Klein modes of the excitation of the bulk scalar was derived, and the mass matrix was determined. In order to obtain the values of the scalar masses, one has to take into account the mixing between the radion and the Kaluza-Klein modes of the fluctuation of the bulk scalar. We demonstrated that a non-zero mass for the lowest eigenstate (which we identify with the physical radion) can be generated using a choice of the background bulk field, Φ_0(y), that preserves the RS metric (no back-reaction). We found that the radion mass receives the same suppression from the warp factor that is necessary to explain the hierarchy puzzle. Thus, ∼1 TeV is a natural order of magnitude for the radion mass. Finally, we illustrated the general scenario for the case of f(Φ) ∝ Φ², for which the scalar background solution that preserves the Randall-Sundrum metric was explicitly found.
Since the mass-squared matrix for the radion and the KK bulk scalar excitations is non-diagonal, it is clear that the introduction of Higgs-radion mixing on the visible brane, through a term in the Lagrangian of the form ξ √|g_vis| R(g_vis) H†H as considered for example in [8], would result in a complicated situation in which the Higgs field, the radion and the KK excitations of the bulk scalar would all mix. A phenomenological study of the magnitudes of the various mixings, as a function of the available parameters, would be required to understand the extent to which the phenomenology of the physical Higgs boson eigenstate can be modified. Such a study is beyond the scope of this paper.

Energy Physics. This work was also supported by a joint Warsaw-Davis collaboration grant from the National Science Foundation.
An Overview of Label-free Electrochemical Protein Sensors

Electrochemical protein sensors offer sensitivity, selectivity and reliability at a low cost, making them very attractive tools for protein detection. Although these sensors use a broad range of different chemistries, they all depend on the solid electrode surface and on its interactions with the target protein and the molecular recognition layer. Traditionally, redox enzymes have provided the molecular recognition elements with which target proteins interact. This requires that the redox-active enzymes couple with electrode surfaces, and it usually requires the participation of added diffusional components or the assembly of the enzymes in functional chemical matrices. These complications, among many others, have driven a trend towards non-enzymatic electrochemical protein sensors. Several electrochemical detection approaches have been exploited; basically, these fall into two categories: labeled and label-free detection systems. The former rely on a redox-active signal from a reporter molecule or label, which changes upon interaction with the target protein. In this review, we discuss the label-free electrochemical detection of proteins, with particular emphasis on approaches that exploit intrinsic redox-active amino acids.

Introduction

Genetic information, imprinted on nucleic acids, has always enchanted researchers and intrigued them into unraveling its secrets. Proteins, the molecular expression of this genetic information, are at the very core of biological function. They are at the centre of most pathological conditions, and most disease biomarkers are proteins. Besides DNA, they are perhaps the subject of the most intense research. Biosensor technology has emerged as one of the most promising platforms for studying proteins. Biosensors are devices that combine a biological component (a recognition layer) and a physicochemical detector component (a transducer). The transduction unit can be electrochemical, optical, piezoelectric, magnetic or calorimetric. The recognition layer can be constructed using enzymes, antibodies, cells, tissues, nucleic acids, peptide nucleic acids, and aptamers [1][2][3][4]. In this review, we discuss electrochemical sensors for protein analysis. Electrochemical biosensors have superior properties over other existing measurement systems because they can provide rapid, simple and low-cost on-field detection. Electrochemical measurement protocols are also suitable for the mass fabrication of miniaturized devices. In fact, electrochemical biosensors have played a major role in the move towards simplified testing for point-of-care usage. Indeed, self-testing glucose strips based on screen-printed enzyme electrodes, coupled to pocket-size amperometric meters for diabetes, have dominated the market over the past two decades [5]. There are basically five different pathways for electrochemical detection of proteins: one can monitor a change in the electrochemical signal of (i) a label that selectively binds the target protein, (ii) the electro-active amino acids of the antibody or target protein, (iii) a secondary antibody-tagged probe, (iv) aptamers, or (v) an enzyme-tagged probe [4,6,7]. In this review, we focus on the label-free electrochemical detection of proteins, with particular emphasis on approaches that exploit intrinsic redox-active amino acids. We present recent work carried out by our group as well as work by other groups.
Intrinsic redox-active amino acid-based sensors: direct application

Since the middle of the 20th century, the electrochemical analysis of proteins has been gaining prominence [8]. From the early 1970s until today, many electrochemists have focused on a relatively small group of proteins containing a metal center with reversible redox activity (metalloproteins) [9]. Nowadays, the fact that most proteins not containing a metal center can show electrochemical activity, depending on their amino acid composition, has attracted much attention from researchers. Since polarography was already a well-established method, the first label-free electrochemistry of proteins came from mercury electrodes. Peptides and proteins containing cysteine/cystine (Cys) showed specific electrochemical signals on mercury electrodes through Hg-S bond formation [10,11], reduction of disulfide groups [12], and the catalytic evolution of hydrogen in cobalt-containing solutions (the Brdicka reaction) [13,14]. Hydrogen evolution was also catalyzed at highly negative potentials in the absence of transition metal ions using mercury electrodes, with proteins that contained or lacked sulfur amino acids. In combination with chronopotentiometric stripping analysis, presodium catalysis resulted in a well-defined signal that enabled the detection of several important proteins [15]. In the early 1980s, Reynaud et al. [16] and Brabec et al. [17,18] showed that tyrosine (Tyr) and tryptophan (Trp) residues in proteins are electro-oxidizable at carbon electrodes. The electro-oxidation of Tyr residues involves the transfer of two electrons and two protons, with an electrode process similar to the oxidation of simple p-substituted phenols [16][17][18][19]. Well-developed oxidation peaks of Tyr and Trp at nM concentrations of peptides were obtained by applying voltammetric methods in combination with a sophisticated baseline correction [20]. The reader is referred to a recent review by Herzog and Arrigan for a comprehensive discussion of electrochemical strategies for the label-free detection of amino acids and peptides [21]. Herein, our primary focus is on the exploitation of redox-active amino acids for protein sensing.

Our group has successfully exploited the oxidation of Tyr for the detection of several biomolecules. Vestergaard et al. presented the first electrochemical detection and aggregation study of Alzheimer's amyloid beta peptides (Aβ-40 and Aβ-42), using three different voltammetric techniques at a glassy carbon electrode (GCE). The method was based on detecting changes in the oxidation signal of Tyr at various time points during amyloid beta incubation at 37 °C in Tris buffer, pH 7.4. We hypothesised that as the conformation of the peptides changed during aggregation, we should see an accompanying change in the oxidation signal of Tyr. A clear difference in the rate of aggregation was observed between the two peptides. During the study, we observed a decrease in the Tyr oxidation signal with increasing incubation period. The degree of aggregation was confirmed using a thioflavin T label analysed by fluorescence spectroscopy, and by imaging with atomic force microscopy (AFM) [22]. The results are depicted in Fig. 1.
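To make the signal-tracking procedure concrete, here is a minimal Python sketch of a baseline-corrected peak-height measurement of the kind described above. The function, the polynomial baseline, and the peak window (~0.6 V vs Ag/AgCl) are our illustrative assumptions, not the published analysis code.

```python
import numpy as np

def peak_height(potential, current, window=(0.50, 0.75), deg=2):
    """Baseline-corrected oxidation-peak height from one voltammogram.

    potential, current: 1-D arrays from a single DPV scan.
    window: potential range [V] assumed to contain the Tyr oxidation peak."""
    in_peak = (potential >= window[0]) & (potential <= window[1])
    # Fit a low-order polynomial baseline to the data *outside* the peak window,
    # then subtract it everywhere.
    coeffs = np.polyfit(potential[~in_peak], current[~in_peak], deg)
    corrected = current - np.polyval(coeffs, potential)
    return corrected[in_peak].max()

# Tracking peak_height(...) across voltammograms recorded at successive
# incubation times would reproduce the signal-decay analysis described above.
```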
We also studied the label-free electrochemical detection of phosphorylation based on the electro-oxidation of Tyr, in connection with differential pulse voltammetry (DPV), using a screen-printed carbon electrode (SPCE). First, we monitored the electrochemical current responses of Tyr and o-phospho-L-tyrosine. We observed that phosphorylation caused a significant suppression of the electro-oxidation of Tyr. We also monitored the electrochemical responses of sarcoma (Src) in both the non-phosphorylated and phosphorylated forms. The procedure was very simple, and we propose that label-free electrochemical in vitro detection of Tyr phosphorylation can be performed in a rapid and cost-effective format [23]. Using this principle, we detected the inhibition of Tyr phosphorylation by a small molecule. Using DPV in conjunction with multi-walled carbon nanotube-modified SPCEs, we determined the activity of the c-Src non-receptor protein tyrosine kinase, p60^c-Src, in combination with its highly specific substrate peptide, Raytide. Tyr kinase reactions were also performed in the presence of an inhibitor, 4-amino-5-(4-chlorophenyl)-7-(tert-butyl)pyrazolo[3,4-d]pyrimidine.

Schematic illustration for the label-free detection of tyrosine kinase-catalysed peptide phosphorylation: the peptides, conjugated to magnetic beads (MB), contain a single phosphorylation site, such as tyrosine (Tyr). Since Tyr has intrinsic electro-activity, the current response from its voltammetric oxidation is monitored. Under optimized conditions, the Tyr residue is phosphorylated in the presence of a tyrosine kinase and ATP. During phosphorylation, the phosphate group at the γ-position of ATP is transferred to the hydroxyl group of Tyr. The intrinsic electro-activity of Tyr is lost upon phosphorylation, and the current response decays with increasing concentration of the tyrosine kinase.

The aggregation of α-synuclein has been detected based on its redox-active Tyr and Cys residues. The authors used constant-current chronopotentiometric stripping analysis (CPSA) to measure hydrogen evolution (peak H) catalyzed by α-synuclein at hanging mercury drop electrodes (HMDE), and square-wave stripping voltammetry (SWSV) to measure Tyr oxidation at carbon paste electrodes (CPE). Aggregation-induced changes in peak H at the HMDE were relatively large in strongly aggregated samples, suggesting that this electrochemical signal may find use in the analysis of the early stages of α-synuclein aggregation. Native α-synuclein could be detected down to subnanomolar concentrations by CPSA [25]. The same group successfully detected a metallothionein from rabbit liver by CPSA in conjunction with the HMDE [26], and, using a phytochelatin-modified electrode, they were successful in detecting cadmium and zinc ions [27]. This highlights the versatility of proteins as recognition elements, serving not only for other macromolecules but also for small molecules such as heavy metals. Directly capturing the possible configuration of biomolecules, and/or their interactions with other molecules, without a molecular recognition element is truly remarkable progress. Although such approaches enable quick and simple initial investigation into whether direct label-free detection is possible, they have a profound limitation: they cannot be used successfully in complex sample matrices, where various protein molecules are present. Label-free protein detection is, therefore, commonly achieved by employing biomolecules with high affinity for the target protein. This ensures much-improved specificity, especially when dealing with a more complex sample matrix such as urine, cerebrospinal fluid (CSF), or serum, which contains high levels of serum albumin and immunoglobulins.
In this review, we will discuss antibody-based and aptamer-based electrochemical protein sensors that utilise label-free strategies.

Antibody-based protein detection

Immunosensors exploit the interaction between an antibody (Ab), synthesised in response to the target molecule, and an antigen (Ag). Antibodies can be raised when the antigen is attached to an immunogenic carrier such as serum albumin. There are two types of Abs: polyclonal and monoclonal. Polyclonal antibodies (pAb) have an affinity for the target antigen but are directed to different binding sites, with different binding affinities. Monoclonal antibodies (mAb), on the other hand, are identical, because they are produced from one type of immune cell. They have higher sensitivity and selectivity than pAb and are therefore preferred. Antibody binding sites are located at the ends of the two arms (Fab units) of the Y-shaped protein. The tail end of the Y (the Fc unit) contains a species-specific structure, commonly used as an antigen for the production of species-specific Abs. The antibody is used as the recognition layer in biosensor development. There exist a handful of general immunosensor formats (Figure 3) [28]. Antibody-based biosensors started to emerge in the 1970s, following the work of Giaever, and of Kronick and Little [29,30]. Since then, there has been an immunosensor boom, which is not surprising given the specificity and sensitivity of Ab:Ag interactions.

Our group has developed label-free electrochemical immunosensors, mostly targeting pathologically important biomarkers. Following the successful detection of Aβ-peptide aggregation based on the Tyr oxidation signal [22], Kerman and colleagues developed an immunosensor for the model hormone human chorionic gonadotropin (hCG). Following pretreatment, a carbon paste electrode (CPE) was dipped in phosphate buffer containing EDC and NHS. After incubation at room temperature for 1 h, the lysine residues of protein A were coupled onto the modified CPE. Protein A has a high binding affinity for the Fc region of the Ab. Monoclonal antibody for hCG (β-hCG-mAb) was immobilised on the protein A-linked CPE and incubated for a specified time period. Unreacted covalent-active surface groups were subsequently passivated using ethanolamine to prevent non-specific adsorption. The immunosensor was then ready for the introduction of synthetic hCG and, later, of human urine samples from pregnant and non-pregnant women as well as men. Urine samples were collected in accordance with the ethical standard of the Helsinki Declaration of 1975, as revised in 1996. The analysis was carried out using square-wave voltammetry (SWV). Peak currents for both β-hCG-mAb and hCG were observed at ~0.6 V (vs Ag/AgCl) (Fig. 4). Using this sensor, the limit of detection was 15 pM for synthetic hCG and 20 pM in human urine [31]. Employing the same underlying principle, we developed an immunosensor for the detection of human telomerase reverse transcriptase (hTERT) in urine, using differential pulse stripping voltammetry (DPSV) in conjunction with a pencil graphite electrode [32].

Schematic illustration for the label-free voltammetric immunosensors: the primary antibody is coated on the electrode surface, and the electro-active residues in the antibody structure give a specific current response. Upon the binding of the target antigen to the antibody, the specific current response of the antibody layer changes. This change can be either an increase or a decrease, depending on the structure of the antigen and/or the conformational changes that occur upon formation of the antigen-antibody complex. The current responses are then further altered by the binding of a secondary antibody, which also contributes to the current response through its intrinsic electro-activity.
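As an aside on how detection limits such as those quoted above are commonly estimated, the following sketch applies the widely used 3σ/slope convention to a linear calibration curve. The numbers are placeholders of our own, not data from the cited study.

```python
import numpy as np

# Hypothetical calibration data: sensor response vs analyte concentration.
conc = np.array([0.0, 10.0, 25.0, 50.0, 100.0])     # concentration [pM]
signal = np.array([0.02, 0.15, 0.33, 0.61, 1.20])   # peak current [uA]

slope, intercept = np.polyfit(conc, signal, 1)       # linear calibration fit
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                        # regression standard error
lod = 3.0 * sigma / slope                            # 3*sigma/slope convention
print(f"LOD ~ {lod:.1f} pM")
```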
Schematic illustration for the label-free voltammetric immunosensors: the primary antibody is coated on the electrode surface, and the electro-active residues in the antibody structure give a specific current response. Upon the binding of the target antigen to the antibody, the specific current response of the antibody layer changes. This change can be either an increase or a decrease, depending on the structure of the antigen and/or the conformational changes that occur upon formation of the antigen-antibody complex. The current responses would be further altered by the binding of a secondary antibody, which would also contribute to the current response with its intrinsic electro-activity.
For label-free protein detection, the use of carbon nanotubes (CNTs) has provided improved sensitivity. CNTs were first discovered in 1991 [33]. They are cylindrical graphite sheets with properties that make them potentially useful in a wide variety of applications in materials science. They exhibit extraordinary strength and unique electrical properties, and are used in electrochemistry for promoting electron transfer reactions with electro-active species. There are two groups of CNTs: single-walled (SWCNTs) and multi-walled (MWCNTs) [34]. SWCNTs are comprised of a cylindrical graphite sheet of nanoscale diameter (~1 nm) capped by hemispherical ends. MWCNTs comprise several to tens of concentric cylinders of these graphite shells with a layer spacing of 3-4 Å, and have diameters between 2 and 100 nm [35]. Since their first application to the study of dopamine [36], CNTs have increasingly shown potential in bioelectrochemistry, and over the past few years their preparation and purification have received much attention [37,38]. An immunosensor for IgE based on a SWCNT-modified field-effect transistor (FET) was successfully developed [39]. Another label-free electrochemical immunosensor was fabricated using microelectrode arrays modified with single-walled carbon nanotubes (SWCNTs) as transducer surfaces for the detection of a cancer marker, total prostate-specific antigen (T-PSA), using DPV. The current signals, derived from the oxidation of Tyr and Trp residues, increased upon the binding of T-PSA to T-PSA-mAb covalently immobilised on the SWCNTs. The selectivity of the biosensor was challenged by substituting bovine serum albumin for the target protein. The detection limit for T-PSA was determined as 0.25 ng/mL. Since the cut-off limit of T-PSA between prostate hyperplasia and cancer is 4 ng/mL, the performance of this immunosensor is promising for further clinical applications (Fig. 5) [40].
Figure 5. Schematic illustration of the label-free immunosensors using single-walled carbon nanotubes (SWCNTs). SWCNTs were grown directly on the platinum electrode surface. Upon the covalent attachment of the antibodies on the SWCNTs, a current response is recorded from the intrinsic electro-activity of the antibodies. As the antigens bind to the antibodies, the electrochemical responses change. The current height at the peak oxidation potential (~0.5 V vs. Ag/AgCl) increases or decreases depending on the concentration of the bound antigens.
Label-free electrochemical impedance spectroscopy (EIS) has been explored widely, owing to its high sensitivity, for the detection of small proteins such as interferon-γ (IFN-γ) [41]. EIS provides accurate mechanistic and kinetic information from repeatable adsorption and desorption measurements on the sensor transduction surface. The surface, in the form of an electrode, displays both resistive and capacitive properties when a small-amplitude sinusoidal excitation perturbs the system at equilibrium. As a result, antibody-based biosensors in EIS are attracting increasing interest.
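The resistive and capacitive behaviour described above is usually modelled with a Randles-type equivalent circuit. The sketch below is a simplified illustration with arbitrary component values (the Warburg diffusion element is omitted for brevity); it shows how antigen binding that raises the charge-transfer resistance R_ct widens the semicircle in a Nyquist plot:

```python
import numpy as np

def randles_impedance(freq_hz, r_s, r_ct, c_dl):
    """Impedance of a simplified Randles cell: solution resistance R_s in
    series with (charge-transfer resistance R_ct in parallel with the
    double-layer capacitance C_dl)."""
    omega = 2 * np.pi * freq_hz
    z_c = 1.0 / (1j * omega * c_dl)         # capacitor impedance
    z_par = (r_ct * z_c) / (r_ct + z_c)     # parallel combination
    return r_s + z_par

freqs = np.logspace(5, -1, 120)  # 100 kHz down to 0.1 Hz

# Illustrative values: antigen binding blocks the redox probe and
# raises R_ct (here from 2 kOhm to 8 kOhm).
z_before = randles_impedance(freqs, r_s=100.0, r_ct=2e3, c_dl=1e-6)
z_after  = randles_impedance(freqs, r_s=100.0, r_ct=8e3, c_dl=1e-6)

# The Nyquist semicircle diameter ~ R_ct, read off at low frequency.
print(f"semicircle diameter before binding ~ {z_before[-1].real - 100:.0f} Ohm")
print(f"semicircle diameter after binding  ~ {z_after[-1].real - 100:.0f} Ohm")
```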
Such sensors allow for direct and label-free electrochemical immunosensing, potentially speeding up the detection and analysis of biomarkers without losing sensitivity. An immunosensor was developed by immobilising anti-IFN-γ antibodies on a self-assembled monolayer (SAM) of acetylcysteine on polycrystalline Au [42]. EIS and cyclic voltammetry (CV) were employed for the analysis of carcinoembryonic antigen (CEA). Antibody to CEA (Ab-CEA) was covalently attached to a glutathione monolayer-modified Au nanoparticle (NP). The resulting conjugate was immobilised on an Au electrode by electro-copolymerisation with o-aminophenol, completing the sensor development. Introduction of CEA increased the electron-transfer resistance of the [Fe(CN)6]3-/4- redox couple [43]. Using EIS, a sensitive immunosensor was developed via covalent coupling of the antibody with functionalised AuNPs for apolipoprotein A-I (Apo A-I) detection. The hybrid AuNPs were prepared using SAM and sol-gel techniques for improved performance, ~6-17 times higher than that of a sensor fabricated by the normal SAM technique. The detection limit of the immunosensor was 50 pg mL-1 Apo A-I, two orders of magnitude lower than that of traditional methods [44]. A label-free electrochemical impedance immunosensor for the rapid detection of Escherichia coli O157:H7 was developed by immobilising anti-E. coli antibodies onto an indium-tin oxide interdigitated array (IDA) microelectrode. Similarly to Tang et al. [44], immobilisation of antibodies and the binding of E. coli cells to the IDA microelectrode surface resulted in an increase in the electron-transfer resistance, directly measured by EIS in the presence of [Fe(CN)6]3-/4- as a redox probe [45].
Aptamer-based protein detection
Aptamers are synthetic oligonucleotides that can be generated to bind selectively and with high affinity to low-molecular-weight organic and inorganic substrates as well as to macromolecules such as proteins and drugs [46][47][48][49][50][51]. A reusable label-free aptasensor for the detection of small molecules using EIS was recently reported [52]. The affinity constants of aptamers are comparable to the binding constants of antibodies to antigens, i.e., in the micromolar to nanomolar range [53]. Aptamer-protein interactions can be monitored using intrinsic DNA and protein oxidation signals on carbon electrodes. Recent reports on electrical/electrochemical aptasensors have highlighted the promise that aptamers hold for the detection of proteins and their interactions. Their selection by SELEX (systematic evolution of ligands by exponential enrichment) confers on them a high affinity for their substrate. They could even replace antibodies as primary candidates in biosensing. Their advantages over antibodies include: (i) no need for the complex Ag:Ab sandwich assay required by labelled techniques, as modification of the aptamer with a label is possible for direct analysis of the aptamer:substrate interaction; (ii) the synthesis of aptamers leads to highly reproducible structures of the binding ligands compared to antibodies; and (iii) chemical modification of aptamers with labels or with functional groups that enable them to tether to transducers is easier than with antibodies. Still, the sensitivity of DNA and RNA to nucleases makes them more susceptible to degradation, and their applications may require purer environments than antibodies [4,54].
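Affinity constants of the kind quoted above are commonly extracted by fitting sensor responses to a one-site Langmuir binding isotherm. The following minimal sketch illustrates the procedure on hypothetical impedance responses; it is not tied to any of the cited studies:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical aptasensor responses (change in charge-transfer
# resistance, Ohm) vs. target protein concentration (nM).
conc = np.array([0.5, 1, 2, 5, 10, 25, 50, 100], dtype=float)   # nM
response = np.array([95, 170, 300, 520, 730, 950, 1060, 1120])  # Ohm

def langmuir(c, r_max, kd):
    """One-site Langmuir isotherm: R = R_max * c / (Kd + c)."""
    return r_max * c / (kd + c)

(r_max, kd), _ = curve_fit(langmuir, conc, response, p0=[1200.0, 10.0])
print(f"R_max = {r_max:.0f} Ohm, apparent Kd = {kd:.1f} nM")
```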
We have already discussed the intrinsic redox-active amino acids in Section 2; here, we will briefly discuss the electrochemical oxidation of DNA bases. Guanine and adenine are the most electro-active DNA bases, because they are easily adsorbed and oxidised on carbon electrodes. Guanine and adenine oxidation signals on carbon electrodes can be observed at around 1.0 and 1.3 V, respectively, in 0.50 M acetate buffer solution at pH 4.80 [55]. Guanine is the most redox-active of the DNA bases, and the mechanism of its oxidation has been studied in detail [56][57][58]. The oxidation of guanine and adenine shows peak currents at ~0.9 V and ~1.2 V, respectively, depending on the pH and ionic strength of the electrolyte and on the electrode material. The process was shown to proceed in two steps, involving the total loss of four electrons and four protons. A review by Palecek et al. (2005) discusses the electrochemical mechanisms for the oxidation and reduction of DNA bases on carbon and mercury electrodes [59]. Monitoring the changes in these signals upon duplex formation enabled the detection of hybridisation [60,61]: the electrochemical signals obtained from free adenine and guanine bases decreased on binding to their complementary thymine and cytosine bases after hybridisation. Label-free interactions of DNA with drugs, metals and proteins have been monitored using the intrinsic oxidation signal of DNA by a large number of groups [62][63][64]. Electrocatalytic oxidation of guanine, guanosine and guanosine monophosphate has also been reported recently [65]. Rodriguez et al. [66] developed an EIS-based label-free biosensor for monitoring aptamer interactions: a biotinylated aptamer for lysozyme was linked to a streptavidin-functionalised electrode, acting as an aptasensor for the lysozyme protein. The same group developed a highly specific and sensitive aptasensor for detecting protein interactions using aptamer-coated magnetic beads in conjunction with chronopotentiometric stripping [67]. Using EIS, an aptamer-based sensor was developed, in an Au electrode array configuration, for the detection of IgE [68]. Thrombin was detected down to 0.1 nM: first, an aptamer was self-assembled on a microfabricated thin-film Au electrode; then, the protein was introduced onto the recognition surface and the binding event was monitored using EIS [69].
Nanotechnology and label-free protein sensors
In our recent review article, we discussed the current role and future prospects of nanosensor technology in the study of Alzheimer's disease biomarkers [70], without particular focus on electrochemical label-free protein sensors. The reader is also referred to a recent review by Pumera and colleagues for an in-depth discussion of the main techniques and methods which use nanoscale materials for the construction of electrochemical biosensors [71]. Here, we confine our review to the exploitation of nanotechnology for label-free protein sensing. Au and Ag NPs have been exploited in the fabrication of localised surface plasmon resonance chips for both labelled and label-free protein detection [72][73][74] because of their unique optical properties [75][76][77]. These metal NPs show specific changes in their absorbance responses in the visible region of the spectrum upon binding with various molecules such as nucleic acids or proteins (Fig. 6). In electrochemistry, Au NPs have also been employed, but as labels, for the detection of proteins and other target molecules based on monitoring the reduction current signal of Au in HCl [78][79][80][81][82][83]. As noted above, carbon nanotubes (CNTs) have also provided improved sensitivity for label-free protein detection.
The Fabregas group developed a multi-walled carbon nanotube (MWCNT)/polysulfone (PS) biocomposite membrane-modified thick-film screen-printed electrochemical biosensor. The fabricated CNT/PS strips were reported to be mechanically stable and to exhibit high electrochemical activity. Furthermore, the biocompatibility of the CNT/PS composite allowed easy incorporation of the biological functional moiety, horseradish peroxidase [84], providing simplicity and robustness. Exploiting the electrical properties of CNTs, aptamer-based devices are becoming among the most promising candidates for protein biosensors. In our group, a sensor based on an aptamer-modified carbon nanotube FET was developed for the label-free detection of immunoglobulin E (IgE). Briefly, 5'-amino-modified aptamers were immobilised on the CNT channels, and the electrical properties were monitored in real time. Upon introduction of IgE, a sharp decrease in the source-drain current was observed (Fig. 7). After optimisation, the limit of detection was determined as 250 pM IgE [39]. When the performance of this aptasensor was compared with that of the IgE immunosensor described earlier in the same article [39], under similar conditions, the aptasensor provided the better sensitivity. So et al. [65] also utilised real-time detection of protein using single-walled carbon nanotube (SWCNT)-FET-based aptasensors. Anti-thrombin aptamers, highly specific to the serine protease thrombin, were immobilised on the sidewall of a SWCNT-FET using carbodiimidazole-activated Tween 20 as the linking molecule. The binding of thrombin aptamers to the SWCNT-FETs caused a rightward shift of the threshold gate voltage, presumably due to the negatively charged backbone of the DNA aptamers, while the addition of thrombin solution caused an abrupt decrease in the conductance of the thrombin aptamer-immobilised SWCNT-FET [66].
Challenges for development of point-of-care biosensors
We will conclude by briefly touching on current and future challenges in the development and application of biosensor technology for point-of-care testing (POCT). There are already several commercially available biosensing tools for home use, such as the monitoring of glucose levels by diabetic patients [5]. Most commercially available POCTs are based on immunochromatographic membrane strips (immunostrips). Examples include home-use pregnancy testing kits, influenza test strips used at local practices, and strips for the cancer biomarker prostate-specific antigen (PSA) [85]. Although this technology is simple and works reasonably well, the main problem is sensitivity. Most immunostrips have nanomolar-level sensitivity, whereas electrochemical analysis of the equivalent tests goes down to picomolar concentration levels [31,34,84]. Gold NP-enhanced immunostrips and resin-based micropipette tips showed increased sensitivity for hCG and PSA [86,87]. Electrochemical analysis of the AuNP-enhanced strips gave an even higher sensitivity, showing unequivocally the superiority of electrochemical analysis over immunostrip-based assays [88]. The challenge remains that of appropriate miniaturisation and integration. Another current challenge is the utilisation of biosensors for clinical samples, in particular blood. Advances in sample pretreatment (extraction and purification) to counter matrix interference are needed.
The detection of small and low-abundance biomarkers in clinical samples is problematic, mainly due to the presence of highly abundant proteins such as albumin (HSA) and immunoglobulin G (IgG), which together account for at least 65% of plasma proteins [89]. Depletion of these proteins may not only facilitate detection but also biomarker discovery. Depletion should aim to remove high-abundance proteins selectively and to concentrate low-abundance components. Although commercially pre-packed columns for HSA and IgG removal are available, they lack the flexibility to accommodate issues pertaining to sample size, for example. In addition, some rely on the use of pumps and other equipment to operate smoothly, limiting their use in many health centres [90,91]. In our group, we developed a simple, rapid method for the removal of both IgG and HSA based on affinity chromatography principles. Depletion of these proteins from serum samples substantially lowered the detection limit of PSA [92]. We are presently in the early days of the emerging technology of using label-free electrochemistry of proteins in the development of biosensors. So far, antibody-based biosensors have been intensively developed, although many improvements are still required in reproducibility and sensitivity. Moreover, there is no doubt that a growing number of aptamer-based and, it is to be hoped, nanomaterial-based label-free biosensors will soon be used for the diagnosis and therapeutic follow-up of diseases.
2014-10-01T00:00:00.000Z
2007-12-01T00:00:00.000
{ "year": 2007, "sha1": "0f3697c57a46e178f1e8f5bb572489b0de8ff10f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/7/12/3442/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0f3697c57a46e178f1e8f5bb572489b0de8ff10f", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
18068661
pes2o/s2orc
v3-fos-license
The first fermi in a high energy nuclear collision
At very high energies, weak coupling, non-perturbative methods can be used to study classical gluon production in nuclear collisions. One observes in numerical simulations that after an initial "formation" time, the produced partons are on shell, and their subsequent evolution can be studied using transport theory. At the initial formation time, a simple non-perturbative relation exists between the energy and number densities of the produced partons and a scale determined by the saturated parton density in the nucleus.
An outstanding problem in high energy scattering is the problem of initial conditions for particle production.1 In perturbative QCD, for processes which involve a hard scale Q^2 >> Λ^2_QCD, the hard and soft contributions can be factorized. The soft contributions are lumped into non-perturbative, process-independent parton distribution functions, while the hard contributions are computed for each physical process of interest. For a fixed hard scale of interest Q^2, there is a center of mass energy √s beyond which this approach in particular, and the operator product expansion (OPE) in general, breaks down.2 However, since the parton densities in this regime are large, weak coupling classical methods may be applicable.3 Wilson renormalization group methods have been developed for this high parton density regime.4 At small x, classical parton distributions in a nucleus can be computed in a model with a dimensionful scale μ^2 proportional to the gluon density per unit transverse area. In this model, parton distributions saturate at a scale Q_s ∝ g^2 μ. For Au-Au collisions at RHIC, one can estimate Q_s ~ 1 GeV, and at the LHC, the saturation scale will be Q_s ~ 2-3 GeV. Most of the gluons produced therefore have transverse momenta k_t ~ Q_s, and since this scale for RHIC and LHC is at least marginally a weak coupling scale, these classical methods may be applied to study the production and initial evolution of partons at RHIC and LHC. These classical methods were first applied to nuclear collisions by Kovner, McLerran and Weigert.5 For an interesting alternative approach, see Ref. 6. Assuming boost invariance, and matching the equations of motion in the forward and backward light cone, they obtained initial conditions for the gauge fields in the A^τ = 0 gauge: at τ = 0, the transverse gauge field is the sum of the two pure-gauge fields, A^i = A^i_1 + A^i_2, while the rapidity component is determined by their commutator. Here A^i_{1,2}(ρ_±) (i = 1, 2) are the pure-gauge transverse fields corresponding to the small-x modes of the incoming nuclei (with light cone sources ρ_± δ(x^∓)) in the θ(±x^-)θ(∓x^+) regions of the light cone, respectively. The sum of two pure gauges in QCD is not a pure gauge; the initial conditions therefore give rise to classical gluon radiation in the forward light cone. For p_t >> α_S μ, the Yang-Mills equations may be solved perturbatively to quadratic order in α_S μ/p_t. After averaging over the Gaussian random sources of color charge ρ_± on the light cone, the perturbative energy and number distributions of physical gluons were computed by several authors.5,7 In the small-x limit, it was shown that the classical Yang-Mills result agreed with the quantum Bremsstrahlung result of Gunion and Bertsch.8 In Ref. 9, we suggested a lattice discretization of the classical EFT, suitable for a non-perturbative numerical solution.
Assuming boost invariance, we showed that in the A^τ = 0 gauge, the real time evolution of the small-x gauge fields A_⊥(x_t, τ), A_η(x_t, τ) is described by the Kogut-Susskind Hamiltonian in 2+1 dimensions coupled to an adjoint scalar field. The lattice equations of motion for the fields are then determined straightforwardly by computing the Poisson brackets. The initial conditions for the evolution are provided by the lattice analogue of the continuum relations discussed earlier in the text. We impose periodic boundary conditions on an N × N transverse lattice, where N denotes the number of sites. The physical linear size of the system is L = a N, where a is the lattice spacing. It was shown in Ref. 10 that numerical computations on a transverse lattice agreed with lattice perturbation theory at large transverse momentum. For details of the numerical procedure, and other details, we refer the reader to Ref. 10. In our numerical simulations, all the relevant physical information is compressed in g^2 μ and L, and in their dimensionless product g^2 μL.11 The strong coupling constant g depends on the hard scale of interest; μ ∝ A^{1/6} depends on the nuclear size, the center of mass energy, and the hard scale of interest; L^2 is the transverse area of the nucleus. Assuming g = 2 (or α_S = 1/π), μ = 0.5 GeV (1.0 GeV) for RHIC (LHC), and L = 11.6 fm for Au nuclei, we find g^2 μL ≈ 120 for RHIC and ≈ 240 for LHC. (The latter number would be smaller for a smaller value of g at the typical LHC momentum scale.) As will be discussed later, these values of g^2 μL correspond to a region in which one expects large non-perturbative contributions from a sum to all orders in ~6 α_S μ/p_t, even if α_S << 1. We should mention here that deviations from lattice perturbation theory, as a function of increasing g^2 μL, were observed in our earlier work.10 In Ref. 12, we computed the energy density ε as a function of the proper time τ. This computation on the lattice is straightforward. To obtain this result, we computed the Hamiltonian density on the lattice for each ρ_±, and then took the Gaussian average (with the weight μ^2) over between 40 ρ trajectories for the larger lattices and 160 ρ trajectories for the smallest ones. The dependence of ετ on τ was investigated in our numerical simulations. For larger values of g^2 μL, ετ increases rapidly, develops a transient peak at τ ~ 1/g^2 μ, and decays exponentially from there onwards, satisfying the relation ετ = α + β e^{-γτ}, towards the asymptotic value α (equal to the lattice dE/L^2/dη!). This behavior is satisfied for all g^2 μL ≥ 8.84, independently of N. One can interpret the decay time τ_D = 1/(γ g^2 μ) as the appropriate scale controlling the formation of gluons with a physically well defined energy. In other words, τ_D is the "formation time" in the sense used by Bjorken.13 The physical energy per unit area per unit rapidity of produced gluons can be defined in terms of a function f(g^2 μL) as

dE/L^2/dη = f(g^2 μL) (g^2 μ)^3 / g^2 .    (1)

The function f here is obtained by extrapolating our results at finite lattice spacings to the continuum limit. In the region of physical interest for heavy ion collisions, f varies very slowly: it changes by ~25% for nearly an order of magnitude change in g^2 μL. The saturation scale is Q_s ~ 6 α_S μ; one can therefore re-write our result for the energy density in terms of Q_s. Doing so, we confirmed that our results are consistent with an estimate by A. H. Mueller 14 for the number of produced gluons per unit area per unit rapidity.
He obtains dN/L^2/dη = c (N_c^2 - 1) Q_s^2 / (4π^2 α_S N_c), and argues that the number c is a non-perturbative constant of order unity. If most of the gluons have p_t ~ Q_s, then dE/L^2/dη = c' (N_c^2 - 1) Q_s^3 / (4π^2 α_S N_c), which is of the same form as our Eq. 1. In the g^2 μL region of interest, our function f ≈ 0.23-0.26, and we obtain c' = 4.3-4.9. Since one expects a distribution in momenta about Q_s, it is very likely that c' is at least a factor of 2 greater than c, thereby yielding a number of order unity for c, as estimated by Mueller. This coefficient can be determined more precisely when we compute the non-perturbative number and energy distributions.15 In Ref. 12, we estimated the initial energy per unit rapidity of produced gluons at RHIC and LHC energies. We did so by extrapolating from our SU(2) results to SU(3), assuming the N_c dependence to be (N_c^2 - 1)/N_c, as in Mueller's formula. At late times, the energy density is ε = (g^2 μ)^4 f(g^2 μL) γ(g^2 μL)/g^2, where the formation time is τ_D = 1/(γ(g^2 μL) g^2 μ), as discussed earlier. We find ε_RHIC ≈ 66.49 GeV/fm^3 and ε_LHC ≈ 1315.56 GeV/fm^3. Multiplying these numbers by the initial volumes at the formation time τ_D, we obtained the classical Yang-Mills estimate for the initial energies per unit rapidity E_T: E_T^RHIC ≈ 2703 GeV and E_T^LHC ≈ 24572 GeV, respectively. Compare these numbers to results presented recently by Kajantie 16 for the mini-jet energy (computed for p_t > p_sat, where p_sat is a saturation scale akin to Q_s) in the pQCD mini-jet approach.17 He obtains E_T^RHIC = 2500 GeV and E_T^LHC = 12000 GeV. The remarkable closeness between the results for RHIC is very likely a coincidence. Kajantie's result includes a K factor of 1.5; estimates range from 1.5 to 2.5.18 For the latest estimates from our Finnish colleagues, see the preprint of Eskola et al.20 If we pick a recent value of K ≈ 2,19 we obtain as our final estimate E_T^RHIC ≈ 5406 GeV and E_T^LHC ≈ 49144 GeV. We can also boldly estimate the number of produced gluons at central rapidities. As mentioned in the preceding text, the value of the constant c in the expression for the number distribution is currently being computed numerically. We obtain, for Au-Au collisions in one unit of rapidity, N_RHIC = 714 · c and N_LHC = 2855 · c. Given that the corresponding constant for the energy density was larger, we anticipate that it is more likely that c = 2-2.5. Taking the higher value, we obtain N_RHIC = 1785 and N_LHC = 7138. Again, these values are very close to those of Eskola et al.,20 although we note that they consider Pb-Pb collisions and that their results include a K factor of 2. The purpose of this simple exercise is primarily to confirm that our numbers are not wildly divergent from mini-jet calculations. Our results are typically a factor of two larger (more so at the LHC), but this is easily understood because our results include all momentum modes. The number density of these out-of-equilibrium gluons can be related to the equilibrium entropy: S_glue = 3.6 · N_glue. This is particularly so at the LHC, where, because Q_s^2 >> Λ^2_QCD, elastic scattering dominates. The equilibrium entropy of gluons is, to within a factor of two (which can be quantified in one's thermal+hydro model of choice), the entropy of pions.
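As a simple numerical cross-check of the estimates quoted above, the short sketch below recomputes the dimensionless product g^2 μL from the stated inputs (using ħc = 0.197 GeV fm) and infers the formation time τ_D implied by the quoted energy density and transverse energy. The fit parameter γ is not given in the text, so here τ_D is obtained from E_T = ε L^2 τ_D rather than from the exponential fit itself:

```python
# Cross-check of g^2 mu L and the implied formation time tau_D.
HBARC = 0.197327  # GeV fm, used to form the dimensionless product

g = 2.0   # so alpha_S = g^2 / (4 pi) = 1/pi
L = 11.6  # fm, transverse size of an Au nucleus

for name, mu, eps, e_t in [("RHIC", 0.5, 66.49, 2703.0),
                           ("LHC", 1.0, 1315.56, 24572.0)]:
    g2muL = g**2 * mu * L / HBARC        # dimensionless
    # Formation time implied by E_T = eps * L^2 * tau_D (one rapidity unit):
    tau_d = e_t / (eps * L**2)           # fm/c
    print(f"{name}: g^2 mu L ~ {g2muL:.0f}, implied tau_D ~ {tau_d:.2f} fm/c")
```

The script reproduces g^2 μL ≈ 120 (RHIC) and ≈ 240 (LHC), and yields formation times of roughly 0.3 and 0.14 fm/c, consistent with τ_D ∝ 1/(γ g^2 μ).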
2014-10-01T00:00:00.000Z
1999-08-09T00:00:00.000
{ "year": 1999, "sha1": "159b6ad1eb258960b129dce6a44cd28c1b627b97", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a173200eea09f5069de38a8c20eb9ea170dace20", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
18691976
pes2o/s2orc
v3-fos-license
Supported treadmill training to establish walking in non-ambulatory patients early after stroke

Background: It has been reported that only half of the non-ambulatory stroke patients admitted to inpatient rehabilitation in Australia learn to walk again [1]. Treadmill walking with partial weight support via an overhead harness is a relatively new intervention that is designed to train walking. The main objective of this randomised controlled trial is to determine whether treadmill walking with partial weight support via an overhead harness is effective at establishing independent walking (i) more often, (ii) earlier and (iii) with a better quality of walking than current physiotherapy intervention for non-ambulatory stroke patients.
Methods: A prospective, randomised controlled trial of inpatient intervention with a 6-month follow-up and blinded assessment will be conducted. 130 stroke patients who are unable to walk independently early after stroke will be recruited and randomly allocated to a control group or an experimental group. The control group will undertake 30 min of routine assisted overground walking per day, while the experimental group will undertake 30 min of treadmill walking with partial weight support via an overhead harness per day. The proportion of participants achieving independent walking, the quality of walking, and community participation will be measured. The study has obtained ethical approval from the Human Research Ethics Committees of each of the sites involved in the study.
Discussion: Given that the Australian population is ageing and people after stroke can expect to live for longer, attainment of safe, independent walking is more likely to be associated with long-term health and well-being. In its National Research Priorities, the Government has recognised that it will be important to promote healthy ageing and that this endeavour will be underpinned by research. The results of this study will clearly identify effective intervention to establish early quality walking, thereby promoting an increase in community participation in the longer term.
Trial Registration: The protocol for this study is registered with the US NIH clinical trials registry (NCT00167531).
Background
It has been reported that only half of the non-ambulatory stroke patients admitted to inpatient rehabilitation in Australia learn to walk again [1]. Treadmill walking with partial weight support via an overhead harness is a relatively new intervention that is designed to train walking. In order to optimise the outcome of walking, practice is critical, because skill in performance improves as a function of practice [2]. For non-ambulatory stroke patients, treadmill walking with partial weight support provides the opportunity to complete more walking practice than would be possible using assisted overground walking. There is evidence from systematic reviews that outcome following stroke is associated with the amount of practice undertaken [3,4]. However, Australian research has shown that little practice is completed in rehabilitation [5,6]. One of the barriers to the completion of more walking practice in non-ambulatory stroke patients is marked muscle weakness and poor coordination, which results in an inability to practise the whole task. Even with the assistance of a therapist, these patients may not be able to complete even a few steps of overground walking. Treadmill walking with partial weight support via an overhead harness provides the opportunity to complete larger amounts of walking practice; eg, even if patients only walk for 5 min at a slow speed of 0.2 m/s supported on a treadmill, they will 'walk' 60 m. The provision of weight support seems crucial, since Visintin et al [7], in a randomised controlled trial, found a small beneficial effect when treadmill training was combined with partial weight support compared with no weight support. Moreover, treadmill walking with partial weight support via an overhead harness means that the therapist can provide the opportunity for subjects to complete large amounts of walking practice without contravening occupational health and safety standards, in that the therapist is less likely to injure their back and the subject is less likely to fall. Furthermore, we have developed innovative equipment and procedures to enable one therapist to deliver the intervention safely [8]. For example, we have modified a chair to support the therapist while lifting the affected foot through, modified footwear to allow for easy lifting of the affected foot, and designed an attachment to support the affected hand. The efficacy of treadmill walking with partial body weight support in non-ambulatory patients after stroke is unclear. A Cochrane Collaboration systematic review [9] concludes that there is as yet no definitive answer as to whether this intervention helps more non-ambulatory patients learn to walk compared with assisted overground walking. There are randomised controlled trials examining whether treadmill walking with partial body weight support is effective in mixed groups of ambulatory and non-ambulatory patients [7,[10][11][12][13]. The only RCTs which examine non-ambulatory patients compare treadmill training with partial body weight support against alternative interventions, such as a mechanised gait trainer [14] or aggressive bracing [15], rather than against current practice.
This situation has led the Cochrane Collaboration systematic review [9] to recommend that separate, large, high-quality studies of non-ambulatory patients be undertaken to examine the efficacy of treadmill walking with partial body weight support after stroke. The main objective of this randomised controlled trial is to determine whether treadmill walking with partial weight support via an overhead harness is effective at establishing independent walking (i) more often, (ii) earlier and (iii) with a better quality of walking than current physiotherapy intervention for non-ambulatory stroke patients. The study targets stroke patients who are unable to walk independently on admission to rehabilitation. It will determine the efficacy of treadmill walking with partial weight support via an overhead harness on the establishment and quality of independent walking. Furthermore, by assessing community participation six months later, it will evaluate the longer-term effect of this intervention.
Design
A prospective, randomised controlled trial will be carried out. 130 subjects will be randomly allocated into either an experimental group (treadmill walking with partial weight support, with one therapist) or a control group (assisted overground walking with one therapist) by a recruiter blinded to the sequence of group allocation. All outcome measures and data analysis will be completed by a researcher who is blinded to participant group allocation. The study has obtained ethical approval from the Human Research Ethics Committees of each of the sites involved in the study.
Participants
Stroke patients will be screened and invited to participate if they:
• are within 3 weeks of their first stroke
• are aged between 50 and 85 years of age
• are diagnosed clinically with hemiparesis or hemiplegia of acute onset, and
• are non-ambulatory, defined as scoring 0 or 1 on the Motor Assessment Scale for stroke.
Participants will be excluded if they:
• have clinically evident brainstem signs
• have severe cognitive and/or language deficits which preclude them from following instructions in training sessions
• have unstable cardiac status which would preclude participation in a rehabilitation program, or
• have any pre-morbid history of orthopaedic conditions of the lower limbs which would preclude them from relearning to walk.
The presence of sensory loss, neglect and/or spasticity will not be exclusion criteria. However, their severity will be recorded using the Nottingham Sensory Assessment for sensory loss, the line bisection test for neglect, and the Ashworth Scale for spasticity. In addition, information about the site and size of the lesion will be collected.
Randomisation
Participants will be randomised into an experimental or a control group. We will stratify the randomisation. First, given the potential confounder of site, participants at each site will be randomised separately. Second, at each site, participants will be stratified according to initial level of motor disability, since it has been found to affect outcome [16]. Since all the participants will be unable to walk on admission to the study, sitting balance will be used to stratify the allocation of participants to groups, because it has been found to be a useful prognostic indicator of walking outcome [17][18][19].
Therefore, participants will be stratified according to Item 3 (Sitting Balance) of the Motor Assessment Scale for stroke [20], so that those with a score of 0-3 will be classified as severely disabled and those with a score of 4-6 will be classified as moderately disabled. Within each of the two strata (moderate versus severe level of disability), participants will be allocated randomly to one of two groups, the experimental group or the control group. Random permuted blocks will be used so that after every block (of 6-10 participants), the experimental and control groups will contain equal numbers. In summary, stratification will occur according to site (three sites) and level of disability (two levels). Therefore, there will be 6 strata, and participants will be randomised separately within each stratum. The random sequence of group allocation will be concealed from the person recruiting participants.
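The allocation scheme just described (six strata, random permuted blocks with equal numbers of experimental and control participants per block) can be illustrated with the minimal sketch below. The block size, number of blocks, labels and seeds are placeholders for illustration, not the trial's actual concealed sequence:

```python
import random

def permuted_block_sequence(n_blocks, block_size=6, seed=None):
    """Generate a permuted-block allocation list for one stratum: each
    block contains equal numbers of 'E' (experimental) and 'C' (control)."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ['E'] * (block_size // 2) + ['C'] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

# Six strata: 3 sites x 2 disability levels, each with its own sequence.
strata = [(site, severity) for site in ("site1", "site2", "site3")
          for severity in ("moderate", "severe")]
allocation = {s: permuted_block_sequence(n_blocks=4, seed=i)
              for i, s in enumerate(strata)}
print(allocation[("site1", "severe")])  # kept concealed from the recruiter
```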
Intervention
Both the experimental and the control group will undergo a maximum of 30 minutes per day of walking practice with assistance from one therapist, five days a week, until they walk or until discharge from rehabilitation. The total daily time of intervention will be 30 minutes from beginning (ie, from when the participant is in the gym) to end (ie, when the participant is back in the wheelchair). This time therefore includes transferring, putting on aids and setting up equipment; ie, training does not have to be continuous, so that rests may be taken. The amount of assistance during walking will be standardised to one therapist; however, additional help will be allowed during the setting up of walking (ie, getting the participant onto the treadmill for treadmill walking, or into standing for overground walking). The rationale for this protocol is based on clinical observation of how much time and how many therapists are currently used in trying to get a non-ambulatory person to walk. Other intervention involving lower limb function (ie, strengthening exercises, and practice loading the affected leg during activities such as sitting, standing up and standing) will be standardised to 60 min per day. No other part of the multidisciplinary rehabilitation program will be controlled. Randomisation should ensure that any effect of other interventions will be the same for both groups; therefore, other therapies will not be withheld.
Experimental group
Training for the experimental group will primarily involve walking on a treadmill supported in a body harness. Treadmill training with partial weight support via an overhead harness will be conducted using commercially available systems such as the Spacetrainer (TR Equipment, Tranas, Sweden) and the Lite-Gait (Mobility Research, USA). These systems have an access ramp, so that a wheelchair can be wheeled onto the treadmill, and an automatic lifter, so that the harness can be prefitted in sitting or lying and the patient lifted into standing. In addition, there is good access to the patient's legs, and the treadmills can run extremely slowly, allowing adequate time to assist the legs to swing through. There will be guidelines to determine the progression of training, both in terms of increasing treadmill speed and reducing weight support. At the start, support from the harness will be as little as possible, but up to a maximum of 40% of body weight, since Hesse et al [21,22] have found this to be the maximum support which does not dramatically alter the kinematic and kinetic features of walking. The actual weight relief will be determined by observing whether the knee can extend in midstance: if the knee remains flexed, then the affected lower limb muscles are too weak to support the body weight, indicating that more weight relief is required. At the start, the speed of the treadmill will be as fast as is comfortable while still maintaining a reasonable step length. If a participant is too disabled to walk on a treadmill moving at 0.1 m/s with the assistance of one therapist, they will walk on the spot, practising lifting their feet rhythmically. When participants attain a speed of 0.4 m/s, a reduction in weight support will occur if participants can (i) swing their affected leg through without help, (ii) maintain a straight knee during stance phase without hyperextension, and (iii) maintain an adequate step length (rather than a high cadence) without help. These guidelines have been tested for feasibility and published [8]. Information describing the specific features of each training session (such as treadmill speed, amount of weight support, distance walked, and assistance required) will be recorded to monitor adherence to the guidelines and to be able to describe the intervention accurately.
Control group
Training for the control group will involve current practice of assisted overground walking. If a participant is too disabled to walk with the help of one therapist, they will practise stepping forwards and backwards, or standing with a knee splint and practising shifting weight from leg to leg. Aids such as knee splints, ankle-foot orthoses, parallel bars, forearm support frames and walking sticks can be utilised as part of training. Training aims to produce independent walking; therefore, progression of training encompasses both increasing speed and reducing assistance, from both aids and the therapist. Information describing the specific features of each training session (such as use of aids, distance walked, and assistance required) will be recorded to monitor adherence to the guidelines and to be able to describe the intervention accurately. The end-point of the training phase of the study will be either the attainment of independent walking or discharge from the rehabilitation unit. The end-point of the follow-up phase of the study will be 6 months after admission to the study.
Measurement
The initial outcome measures will be:
Proportion of participants achieving independent walking
Independent walking will be operationally defined as 'being able to walk 15 m continuously, barefoot, across flat ground, without any aids'. Participants will be tested once a week in the morning (ie, before the training session). Participants will continue to be tested until they achieve independent walking or are discharged.
Quality of independent walking
Quality of walking will be measured by quantifying parameters such as speed, affected and intact step length, step width, and cadence. These parameters are the result of the timing and magnitude of the angular displacements during walking and are therefore global measures which reflect qualitative aspects of walking. When participants achieve independent walking, their overground walking will be measured by placing markers on the heels of both the unaffected and affected legs, so that the step length of both the affected and unaffected leg, walking speed, cadence, and step width can be determined.
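As an illustration of how these gait parameters follow from the heel-marker data, the sketch below computes step length, step width, speed and cadence from hypothetical heel-strike positions and times; all values are invented for the example:

```python
import numpy as np

# Hypothetical heel-marker positions at successive heel strikes, recorded
# while a participant walks along the x-axis (metres), with y across the
# walkway. Alternating entries: left heel strike, right heel strike, ...
heel_xy = np.array([[0.00, 0.10], [0.38, -0.08], [0.80, 0.11],
                    [1.19, -0.07], [1.62, 0.10], [2.00, -0.09]])
strike_times = np.array([0.0, 0.55, 1.10, 1.66, 2.20, 2.76])  # seconds

step_lengths = np.diff(heel_xy[:, 0])         # m, one per step
step_widths = np.abs(np.diff(heel_xy[:, 1]))  # m, lateral separation
step_times = np.diff(strike_times)            # s

speed = (heel_xy[-1, 0] - heel_xy[0, 0]) / (strike_times[-1] - strike_times[0])
cadence = 60.0 / step_times.mean()            # steps per minute

print(f"mean step length = {step_lengths.mean():.2f} m, "
      f"mean step width = {step_widths.mean():.2f} m")
print(f"speed = {speed:.2f} m/s, cadence = {cadence:.0f} steps/min")
```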
The 6-month outcome measures will be:
Proportion of participants achieving independent walking
Quality of independent walking
Community participation
Two aspects of community participation will be measured. First, mobility status will be assessed using the 6-min walk test, the number of falls since discharge, a self-efficacy questionnaire about walking capability, and the Adelaide Activities Index. Second, living arrangements will be assessed by recording the type of residence and the amount of support within the residence.
Sample size
130 participants will be recruited. The sample size has been calculated to reliably detect a treatment effect size of a 25% increase in the proportion of independent walkers, with 80% power at a two-tailed significance level of 0.05. For non-ambulatory patients, it takes an average of 3 months to achieve independent walking, with about 50% walking independently at six weeks [1]. We are interested in being able to detect a 25% increase, from 50% to 75%, in the proportion of non-ambulatory patients walking independently by six weeks. Only an effect such as this is clinically significant enough to warrant a change in the implementation of services, which would involve the re-education of physiotherapists and the expense of purchasing a treadmill and overhead harness system. The smallest number of participants needed to detect this difference between two proportions estimated from independent samples is 65 participants per group, ie, 130 participants in total [23]. However, since the analysis of the data is survival curve analysis, there is greater power than this calculation would suggest, because the effect of dropouts is minimal in such an analysis. For example, in a pre-post design trial of 6 weeks of intervention, if a subject dies or is lost to follow-up (ie, drops out) at 5 weeks, there is no measurement at the post-test on which to perform an analysis. In this trial, which ascertains once a week whether independent walking has been established, such a participant would be censored at five weeks, so that they no longer form part of the total sample; however, this participant's data would be available for the five weeks they participated in the trial, thereby minimising the effect of their dropping out. In addition to, but separately from, the difference in the proportion of participants walking between the groups, there may also be a difference in the quality of walking. On the assumption that 20% of participants may be lost to follow-up and that 80% of participants entering rehabilitation achieve basic independent walking [1], there are likely to be 84 participants with "quality of walking" data at 6 months. Goldie et al [24] have suggested that the minimum difference in walking speed worth detecting is 0.2 m/s. The walking speed of a population of stroke patients who had recently completed rehabilitation [25], on entry to a randomised controlled trial, was 0.56 (SD 0.27) m/s, using measurement procedures similar to those of the present proposal. 60 participants are needed to detect a treatment effect size of a 0.2 m/s difference in walking speed between the groups at 6 months, with 80% power at a two-tailed significance level of 0.05; therefore, 84 participants give over 90% power.
Statistical analysis
The proportion of independent walkers and the time to achieve independent walking will be compared between the two groups using the logrank test, in which those who do not achieve independent walking are censored at the time they are discharged.
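A minimal illustration of this censored comparison, using the logrank test from the lifelines Python package on invented data, is sketched below; event = 0 marks a participant censored at discharge:

```python
import numpy as np
from lifelines.statistics import logrank_test

# Hypothetical illustration: weeks until independent walking per group;
# event = 1 if independent walking was achieved, 0 if the participant
# was discharged first and is therefore censored at that week.
weeks_exp = np.array([3, 4, 4, 5, 5, 6, 6, 7, 8, 9])
event_exp = np.array([1, 1, 1, 1, 1, 1, 0, 1, 1, 0])
weeks_ctl = np.array([4, 6, 6, 7, 7, 8, 9, 9, 10, 10])
event_ctl = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 0])

result = logrank_test(weeks_exp, weeks_ctl,
                      event_observed_A=event_exp,
                      event_observed_B=event_ctl)
print(f"logrank chi2 = {result.test_statistic:.2f}, p = {result.p_value:.3f}")
```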
Survival analysis using Cox's regression will be used to compare the times between the two groups while allowing for possible confounding variables, such as other interventions received and baseline sitting balance. The five variables that reflect the quality of walking (speed, affected and intact step length, step width and cadence) will be compared between the two groups using Student's t-test, or Wilcoxon's rank-sum test for variables that are clearly not Normally distributed. The four variables that reflect mobility status (6-min walk distance, number of falls since discharge, the self-efficacy questionnaire about walking capability, and the Adelaide Activities Index) will be compared between the two groups using Student's t-test, or Wilcoxon's rank-sum test for variables that are clearly not Normally distributed. The two variables which reflect living arrangements (type of residence and the amount of support within the residence) will be analysed descriptively. Descriptive data about lesion, neglect, spasticity and sensation will be used in post-hoc multiple regression analyses to examine whether these factors affected walking outcome.
Discussion
Australia, like many other developed nations, is undergoing a major demographic shift involving significant growth in the aged population. The Australian Government has recognised that a revolution is underway at the end of the life cycle. In its National Research Priorities, the Government has recognised that it will be important to promote healthy ageing and that this endeavour will be underpinned by research. Only by establishing evidence-based interventions, such as the one outlined in this project, will the severity of many health problems be reduced. In economic terms, if non-ambulatory patients do not learn to walk after their stroke, it is likely that they will require assisted care, placing a high burden on the community. Furthermore, given that independence in walking is a major factor in the decision to discharge patients from inpatient care, earlier independent walking should result in a reduction in the length of hospital stay, which should result in cost savings. The Queensland Health report "Hospital benchmarking pricing model" has estimated the cost of a day in hospital for a neurological patient at $880. In summary, this study has the potential to reduce disability following stroke by improving the outcome of walking, and thereby to reduce the burden of this condition on the community. Increasing the number of people who can walk after stroke should reduce the demand for nursing home placement.
2017-06-21T19:11:15.371Z
2007-09-06T00:00:00.000
{ "year": 2007, "sha1": "a9f3dea9e87c90c4ee1347b2bd7b89cdd6dc1b22", "oa_license": "CCBY", "oa_url": "https://bmcneurol.biomedcentral.com/track/pdf/10.1186/1471-2377-7-29", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "be6ec9cb480a88779a02118ba58185b475c521be", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
53449394
pes2o/s2orc
v3-fos-license
Prospects for near-infrared characterisation of hot Jupiters with VSI

In this paper, we study the feasibility of obtaining near-infrared spectra of bright extrasolar planets with the 2nd generation VLTI Spectro-Imager instrument (VSI), which has the angular resolution required to resolve nearby hot Extrasolar Giant Planets (EGPs) from their host stars. Taking into account fundamental noises, we simulate closure phase measurements of several extrasolar systems using four 8-m telescopes at the VLT and a low spectral resolution (R = 100). Synthetic planetary spectra from T. Barman are used as an input. Standard chi2-fitting methods are then used to reconstruct planetary spectra from the simulated data. These simulations show that low-resolution spectra in the H and K bands can be retrieved with good fidelity for half a dozen targets in a reasonable observing time (about 10 hours, spread over a few nights). Such observations would strongly constrain the planetary temperature and albedo, the energy redistribution mechanisms, and the chemical composition of their atmospheres. Systematic errors, not included in our simulations, could be a serious limitation to these performance estimates. The use of integrated optics is, however, expected to provide the required instrumental stability (around 10^-4 on the closure phase) to enable the first thorough characterisation of extrasolar planetary emission spectra in the near-infrared.

INTRODUCTION
Since the discovery by Mayor & Queloz 1 of the first exoplanet around 51 Pegasi, the study of planetary systems has received increasing attention, with the continuous development of new techniques. Among the direct detection techniques, interferometry is one of the most promising for the near future: it already provides the required angular resolution, but the dynamic range needs to be improved. The detection and characterisation of extrasolar planets is one of the main science cases of the 2nd generation VLTI Spectro-Imager instrument (VSI).2 At an orbital distance of a ≈ 0.05 AU, most of the detected exoplanets, called hot Extrasolar Giant Planets (EGPs), receive from their parent star about 10^4 times the amount of radiation intercepted by Jupiter from our Sun. So close to the star, they are also believed to be tidally locked, such that half of the planet permanently faces the star while the other half stays in the dark. The fraction of incident light absorbed by the atmosphere, parametrised by the albedo A, heats the planet. This heat is redistributed to the night side by strong winds and re-radiated by the planet. Besides providing their heat source, the strong radiation illuminating hot EGPs also structures their atmospheres: it suppresses convection to depths well below the photosphere, leading to a fully radiative photosphere across most of the day side.3 Whether silicate clouds can persist in the photosphere then depends on the competition between the sedimentation and advective timescales. The sedimentation timescale in a radiative photosphere is short, and the winds are believed not to be strong enough to prevent dust from settling. Hence, the atmospheres of hot EGPs are pictured as being free of clouds. Finally, to reproduce hot-Jupiter spectra, their chemical composition should be taken into account. As the opacity of each species regulates the emergent flux as a function of wavelength, their emergent spectra strongly depart from an ideal black-body spectrum: they display large molecular bands and spectral features (see Fig. 1).
The transit technique revolutionised the field of exoplanetology: it was a breakthrough to peer into the structure of hot EGPs and already gave a glimpse of their composition. However, the error bars remain too large to discriminate between different models, and most of the planetary spectrum remains unknown. In particular, the 1-2.4 μm spectrum is very rich in spectral information and may provide unprecedented constraints on our understanding of planetary atmospheres. VSI will have the ability to observe hot EGPs in the J, H and K bands, with a low (R = 100) or medium (R = 1000) resolution. Model fitting of low-resolution spectra gives a measurement of their albedo and tests the cloud-free assumption. The phase dependence of the measured signal will constrain the heat redistribution (through the temperature distribution across the surface) and the weather conditions. At medium resolution, it will be possible to measure the abundance of CO and to test for the presence of CH4. Contrary to current characterisation techniques, VSI will not be restricted to transiting planets. The sample of favourable targets already counts 7 planets today and may count at least twice this number by the time VSI is in operation. Therefore, VSI will not only enhance our knowledge of a few transiting planets; it will also allow statistical studies to be carried out and literally enable comparative exoplanetology. The goal of this work is to study the feasibility of obtaining near-infrared emission spectra of bright extrasolar giant planets (EGPs) with VSI. In Sect. 2, we explain the method used for the simulations, from the choice of targets to the simulation procedure. Sect. 3 contains the results of the simulations, and Sect. 4 the discussion and the conclusions of the study.

METHOD
In the following simulations, we use the synthetic spectra developed by Burrows 5 in order to assess the feasibility of planetary spectrum characterisation with VSI. The latest models for the brightest and closest hot EGPs have been kindly provided by T. Barman and are illustrated in Fig. 2.

Choice of targets
To determine the feasibility of EGP spectroscopy with VSI, we started by simulating interferometric observations of several EGPs that have been discovered by radial velocity surveys. The suitable targets for an interferometric study are the hot extrasolar giant planets that orbit close to their parent stars, for which the star/planet contrast does not exceed a few 10^4 in the near-infrared. Another criterion is that the target must be close enough that VSI, with its angular resolution of a few milli-arcsec, can resolve the star-planet system. Typically, hot Jupiter systems further than 50 pc fall outside the resolving power of the VLTI. Finally, we have restricted the target list to declinations ranging between -84° and +36°, in order to be observable from Cerro Paranal. A list of six targets, based on these criteria, has been compiled in Table 1.

Table 1. Parameters of the host stars for the selected extrasolar planetary systems.7,8 The planetary radii are derived from transit measurements when available, while the values followed by an asterisk are estimates using a mean planet density 9 of 0.7 g cm^-3 and an upper limit of 1.5 R_J. The semi-major axis is given in AU and in mas, assuming the planet to be at maximum elongation. The estimated temperature and flux of the hot EGPs are computed using a grey-body assumption with a Bond albedo of 0.1. The stellar and planetary fluxes, as well as the planet/star contrast, are given at the centre of the K band.
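For orientation, grey-body estimates of the kind used in Table 1 can be reproduced from first principles. The sketch below computes the equilibrium temperature and K-band thermal contrast for an illustrative tau Boo-like system; the stellar and planetary parameters are assumptions chosen for the example, not the actual Table 1 entries:

```python
import numpy as np

# Physical constants (SI)
H, C, KB = 6.62607e-34, 2.998e8, 1.38065e-23
R_SUN, R_JUP, AU = 6.957e8, 7.1492e7, 1.496e11

def planck(wavelength_m, temp_k):
    """Planck spectral radiance B_lambda(T)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / np.expm1(x)

# Illustrative tau Boo-like values (not the actual Table 1 entries).
t_star, r_star = 6300.0, 1.4 * R_SUN
a, r_planet, albedo = 0.046 * AU, 1.2 * R_JUP, 0.1

# Grey-body equilibrium temperature with full heat redistribution.
t_eq = t_star * np.sqrt(r_star / (2 * a)) * (1 - albedo) ** 0.25

lam = 2.2e-6  # centre of the K band
contrast = planck(lam, t_eq) / planck(lam, t_star) * (r_planet / r_star) ** 2
print(f"T_eq ~ {t_eq:.0f} K, K-band thermal contrast ~ {contrast:.1e}")
```

With these assumed parameters the thermal contrast comes out at a few 10^-4, in line with the contrast range quoted for the selected targets.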
The properties of the planetary companions are listed in Table 2. These targets were expected to give the best results in terms of detectability. In order to carry out performance simulations, we have used the synthetic spectra computed by Barman. 5

The method: differential closure phases

Because hot EGPs are very close to their parent star, they are the planets with both the largest heat source and the largest amount of reflected starlight. In brief, they are the most luminous among known exoplanets. Nevertheless, they remain very difficult targets for direct observations. The typical contrast between hot EGPs and their parent star ranges from 10^−6 in the visible to 10^−3 beyond 10 µm, while typical angular separations are of the order of 1 mas. In the wavelength domain of VSI, common hot EGPs have a typical contrast of 10^−4 and the best targets reach a contrast of 10^−3 (see Fig. 2). Observations at such contrasts are challenging, and different strategies are currently being investigated.

The resolving power of interferometry provides the means to achieve observations of hot EGPs, and two approaches have been investigated: differential phase and differential closure phase. The first approach measures the photo-centre of the planet-star system (as a function of wavelength), whereas the second measures the fraction of light that is not point-symmetric (as a function of wavelength too). In both cases, it provides the differential planet-to-star contrast ratio as a function of wavelength.

When a star comes with a faint companion and both fall in the field of view of an interferometer, their fringe patterns sum together incoherently. The presence of a planet decreases the fringe contrast and changes the phase by tiny amounts (see Fig. 3). From the ground, the main problem is the Earth's atmosphere, whose transmission varies chromatically and on time scales shorter than the time required to perform the observations. Such wavelength-dependent phase shifts essentially prevent the use of the "differential phase" technique for high-contrast observations, because this technique requires an extremely good control and calibration of the atmospheric and instrumental stability. 10-12

However, with three or more telescopes, one can build an interferometric observable that is robust to phase shifts: the closure phase, which has the nice property of cancelling atmospheric systematics. As shown in Fig. 4, a differential optical path above one telescope (here the 2nd) introduces phase shifts on the fringes measured on two baselines (here baselines b_1-2 and b_2-3). Because the phase shifts have opposite signs, they cancel out when summed together. Since the reasoning holds for a phase delay introduced above any of the telescopes, the sum of the phases measured on baselines b_1-2, b_2-3 and b_1-3 cancels all telescope-based phase systematics. Actually, the closure phase technique not only cancels the phase shifts introduced by the atmosphere but also those introduced by the instrumental optics, up to the recombination. Therefore, differential closure phase appears to be one of the best methods to obtain medium-resolution near-infrared spectra of known hot EGPs, because it is very sensitive to faint companions while not corrupted by random atmospheric phase fluctuations.

Steps of the simulation

To perform the detection of exoplanets from closure phase data, we rely on model fitting.
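The quantity being modelled is the closure phase of a highly unequal binary. A minimal sketch of such a model follows; it is our own illustration (example baselines, wavelength and flux ratio), not VSI software.

```python
import numpy as np

def binary_visibility(u, v, r, dra, ddec):
    """Normalized complex visibility of a star plus a faint companion.
    u, v      : baseline coordinates in units of the wavelength
    r         : planet/star flux ratio
    dra, ddec : companion offset on the sky in radians
    """
    phase = -2.0 * np.pi * (u * dra + v * ddec)
    return (1.0 + r * np.exp(1j * phase)) / (1.0 + r)

def closure_phase(b1, b2, r, dra, ddec):
    """Closure phase (radians) on the triangle with baselines b1, b2, b1+b2."""
    v12 = binary_visibility(*b1, r, dra, ddec)
    v23 = binary_visibility(*b2, r, dra, ddec)
    b3 = (b1[0] + b2[0], b1[1] + b2[1])
    v13 = binary_visibility(*b3, r, dra, ddec)
    # bispectrum phase: phi_12 + phi_23 - phi_13 (telescope terms cancel)
    return np.angle(v12 * v23 * np.conj(v13))

mas = np.pi / 180.0 / 3.6e6   # one milli-arcsecond in radians
# 100 m and 60 m baselines at 2.2 microns, 1e-4 contrast, 3 mas separation
print(closure_phase((100 / 2.2e-6, 0.0), (0.0, 60 / 2.2e-6), 1e-4, 3 * mas, 0.0))
```

For a faint companion the signal is of order the flux ratio itself (here ~1e-4 rad), which is why the error-bar budget quoted below matters.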
Image reconstruction is not needed in our case, as the system of a hot Jupiter plus a star is just the case of a binary, though a highly contrasted one. The performance simulations therefore consist of the following steps:

1. Simulate the observation of a star-planet system under typical conditions: we assume that the system is observed on three consecutive nights with the four UTs and that four data points are acquired each night at a rate of one data point per hour. Each data point consists of a 10-min on-source integration, and is followed by calibration measurements (not simulated here).

2. Estimate the fundamental noises (shot noise and detector noise) for each individual measurement, using the lowest spectral resolution of VSI (R = 100). The error bars on individual data points typically range between 3 × 10^−5 and 10^−4 radians for stellar magnitudes between 3.5 and 6.5 in the K band. Using these estimated error bars, we draw random data points using a Gaussian distribution centred around the noiseless closure phase and with a standard deviation equal to the error bar. This results in a collection of data points with associated error bars (see Fig. 6), which are used as inputs for the fitting procedure.

3. Fit the simulated observations and their associated error bars with a binary model for the closure phase of the planetary system as a function of time. The time evolution of the closure phase simultaneously captures the motion of the star-planet system on the night sky and the orbit of the planet around its host star (typical period of 3 days for the systems considered here).

Figure 4. The closure phase equals the sum of the phases measured along baselines formed by at least three telescopes. 14

Determination of the orbital parameters

Various free parameters can be used to perform the fit of the simulated closure phase observations. Here, we select the three most important parameters that are not known from radial velocity measurements: the planet/star contrast, the orbital inclination and the position angle of the orbit on the plane of the sky.* In a first stage, we fit the three parameters globally on the whole spectral domain. Because the contrast significantly changes between individual spectral channels, we replace the first fitting parameter (the contrast) by the planetary radius. Fitting the planet/star contrast is indeed equivalent to fitting a planetary radius in each channel if one makes the following assumptions:

• The thermal emission follows a grey-body emission law.
• The albedo is constant and fixed to a given value (0.1 in our case).
• The temperature of the planet is computed from radiative equilibrium.

It must be noted that the choice to compute the fit on the planet radius rather than on the albedo is due to the fact that, when fitting the data, the albedo changes rapidly for small variations of the radius and can quickly reach non-physical values if a poor estimate of the radius is chosen. This comes from F_thermal ∝ (1 − A_b)^{1/4} R_p², so that a variation of R_p has a bigger influence on F_thermal than a variation of A_b.

The fit is performed in two successive steps. In a first step, we investigate the whole parameter space and compute the χ² between the observations and the model for a whole range of values for the radius, inclination and position angle. From the resulting χ² hypersurface, we determine an approximate position for the global minimum, which we will use as an initial guess in the second step of the fit.
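A minimal end-to-end illustration of the noisy-data generation and of the two-stage fit (coarse χ² grid, then Levenberg-Marquardt) might look as follows. The closure-phase model here is a deliberately simplified stand-in for the full binary model, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def cp_model(theta, t):
    """Toy closure-phase model: contrast, inclination and position angle
    modulate a sinusoid as the planet orbits (period ~3 d) while the
    triangle rotates with the sky.  A stand-in for the full binary model."""
    contrast, inc, pa = theta
    return contrast * np.cos(inc) * np.sin(2 * np.pi * t / 3.0 + pa)

# 3 nights x 4 points per night
t = np.repeat(np.arange(3.0), 4) + np.tile(np.linspace(0.0, 0.15, 4), 3)
truth = np.array([1e-4, 0.6, 1.0])
sigma = 5e-5                                  # fundamental-noise error bar
data = cp_model(truth, t) + rng.normal(0.0, sigma, t.size)

# Step 1: coarse exploration of the chi^2 cube for the initial guess
grid = [(c, i, p)
        for c in np.linspace(5e-5, 2e-4, 8)
        for i in np.linspace(0.0, 1.5, 8)
        for p in np.linspace(0.0, 3.1, 8)]
chi2 = [np.sum(((data - cp_model(g, t)) / sigma) ** 2) for g in grid]
guess = np.array(grid[int(np.argmin(chi2))])

# Step 2: Levenberg-Marquardt refinement from the grid minimum
fit = least_squares(lambda th: (data - cp_model(th, t)) / sigma,
                    guess, method="lm")
print("best fit:", fit.x)
```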
The graphical representations of the χ² cube (see Fig. 5) allow us to evaluate the sharpness of the 1/χ² peak and the possible occurrence of local minima. The second step consists in a classical Levenberg-Marquardt χ² minimisation with three free parameters. This step usually converges quickly towards the best-fit solution, as the initial guess is generally robustly determined during the first step of the fitting procedure. While the output best-fit radius does not have a real physical meaning under the present assumptions (grey body with fixed albedo and temperature), the inclination and position angle of the orbit are generally well reproduced by the fitting procedure, both in the H and in the K band (see Tables 3 & 4). These two parameters are generally unknown for the simulated planets, so that arbitrary values have been used in this study.

* The latter is counted East of North and is actually equivalent to the longitude of the ascending node of the orbit with respect to the plane of the sky.

Determination of the planetary spectra

In a second step, we fix the best-fit orbital parameters obtained under the black-body assumption and perform a fit on the planetary radius alone, individually for each spectral channel. In this step, we allow the planetary radius to change across the various spectral channels, using a black-body assumption in each individual channel. The obtained planetary radii are then converted into values of the planet/star contrast, which is the quantity of interest in this case. This fit is illustrated in Fig. 7 for the K band and in Fig. 8 for the H band.

DISCUSSION

From the results of the fit of the planetary spectra, it becomes evident that VSI will be a powerful tool to characterise hot EGPs. The simultaneous measurement of the contrast at various wavelengths provides an insight into the thermal, physical and dynamical structure of their atmospheres. In particular, the slope of the spectrum in the H and K bands directly informs on the presence of CH4 in the planetary atmosphere, while the CO absorption feature around 2.3 µm could also be detected for some of the selected targets. Furthermore, repeated observations at various orbital phases provide important information to constrain the heat distribution mechanisms, by measuring the temperature and atmospheric composition around the planet.

Figure 7. Fit to the planet/star contrast from the simulated closure phase data in the K band (from left to right and top to bottom: τ Boo b, HD 179949 b, HD 189733 b, HD 73256 b, 51 Peg b and HD 209458 b). Red curves are used for the best-fit model when the input synthetic spectrum assumes heat redistribution around the whole planet, while green curves are used when the input spectrum assumes heat redistribution on the day side only.

We note that the simulated observations are more successful and constraining if the star-planet system is close to the observer (≤ 20 pc). For such targets, the VLTI angular resolution well matches the star-planet separation, and the planet is bright enough to provide a good signal-to-noise ratio. It is thus recommended to choose the closest systems as the first targets of VSI. In the light of these results, one can safely conclude that the prime criterion for the selection of additional targets is the magnitude of the host star: it drives the signal-to-noise ratio on the closure phases and therefore the quality of the fit of the planetary data.
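The conversion from a per-channel best-fit radius to a planet/star contrast is essentially a Planck-function ratio. The sketch below is our own, with assumed radii and temperatures chosen only to give realistic orders of magnitude in the K band.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(lam, T):
    """Planck spectral radiance B_lambda(T)."""
    return (2 * h * c**2 / lam**5) / (np.exp(h * c / (lam * kB * T)) - 1.0)

def thermal_contrast(lam, rp, rs, tp, ts):
    """Day-side thermal planet/star contrast in one spectral channel,
    under the per-channel black-body assumption used in the fit."""
    return (rp / rs)**2 * planck(lam, tp) / planck(lam, ts)

lam = np.linspace(1.95e-6, 2.45e-6, 50)     # K band, ~R = 100 sampling
r_jup, r_sun = 7.15e7, 6.96e8
print(thermal_contrast(lam, 1.2 * r_jup, 1.15 * r_sun, 1600.0, 5600.0)[::10])
# values of a few 1e-4, consistent with the contrasts quoted in the text
```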
The semi-major axis of the planetary orbit, or more precisely the temperature of the planetary companion, is of course another critical parameter, as is its radius (which is generally unknown). From these observations, additional targets can be proposed for the VSI exoplanet sample, such as HD 75286 b and HD 160691 d (from the hot Jupiter family) or 55 Cnc e (a hot Neptune). All give satisfactory results when repeating the above simulation procedure, yet with larger relative error bars on the measured planet/star contrast. A few other planets may be added to the list in the coming years.

Systematic errors, not included in our simulations, could be a serious limitation to these performance estimations. The use of integrated optics is however expected to provide the required instrumental stability (around 10^−4 on the closure phase) to enable the first thorough characterisation of extrasolar planetary spectra in the near-infrared.

The direct detection of hot EGPs is undoubtedly one of the most challenging VSI programs. This program was already one of AMBER's goals. However, the intrinsic design of VSI offers multiple improvements with respect to AMBER, which rely on two main axes:

• Improvement in the observable signal-to-noise ratio and accuracy: with a combining core made of an integrated-optics circuit, in which the incoming beams are spatially filtered and routed so that the four UT beams are carefully interfered with each other, the intrinsic stability of VSI is much higher than that of a classical bulk-optics solution like AMBER. Moreover, unlike AMBER, VSI has included in its study an internal fringe tracker located as close as possible to the science instrument, in order to control the stabilisation of the fringes and allow cophasing. This fringe tracker significantly increases the signal-to-noise ratio on the closure phases. 15

• Increase in the number of simultaneous points: using 4 telescopes allows 4 closure phases to be measured simultaneously, while AMBER only permits one such measurement. This simultaneity reduces time-dependent drifts, improves the calibration, and constrains the flux ratio with 4 points in a single measurement instead of 1 for AMBER, dramatically improving the quality of the fit.
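The closure-phase count quoted above is just the number of telescope triplets; as a quick, purely illustrative check:

```python
from itertools import combinations

telescopes = ["UT1", "UT2", "UT3", "UT4"]
triangles = list(combinations(telescopes, 3))
print(len(triangles), "closure phases per exposure:", triangles)
# 4 triangles with four telescopes; a 3-telescope instrument gives only 1.
```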
EFFECT OF DAIDZEIN 120 MG SUPPLEMENTATION ON MENOPAUSAL SYMPTOMS AND QUALITY OF LIFE IN NON-EQUOL-PRODUCER WOMEN

Objectives: To investigate and compare symptom changes and quality of life (QOL) in non-equol-producer postmenopausal women after consuming daidzein supplementation.

Methods: This was a single-blind randomized clinical trial involving menopausal women. They were divided into two groups: the control group received a tablet containing calcium glycerophosphate 500 mg and vitamin D3 35 IU, while the daidzein group received daidzein 120 mg together with calcium glycerophosphate 500 mg and vitamin D3 140 IU, for 8 weeks. Plasma equol was measured before supplementation. The Menopause QOL (MenQOL) questionnaire was administered at the beginning and the end of treatment to assess QOL.

Results: A total of 41 women aged 45-63 years were included in this trial; 19 (47.5%) of them received daidzein supplementation and the others received the control treatment. Menopausal symptoms decreased, but the decrease was not statistically significant compared to the control group.

Conclusion: Eight weeks of daidzein supplementation did not statistically significantly improve MenQOL status in non-equol-producer postmenopausal women.

INTRODUCTION

Menopause is defined as the cessation of menstruation, which represents the end of ovulation and causes a reduction in estradiol production [1,2]. The average menopausal age ranges between 50 and 52 years [1]. By 2012 in Indonesia, 44% of women had entered the menopausal phase by the age of 48-49 years [3]. If the life expectancy of Indonesian women is 73 years, this implies that women spend one-third of their life cycle in the menopausal phase, dealing with its consequences. This motivates action to improve women's quality of life (QOL) in this period of concern [3,4].

Vasomotor symptoms and physical, psychological, and sexual dysfunction are the major complaints due to the physiological changes during menopause [4-6]. The duration and intensity of these symptoms vary among women. In European countries, complaints of sleep disturbance, depression, and vasomotor symptoms are frequently found. On the other hand, Australian women mostly experience vasomotor symptoms and sexual dysfunction [7]. Interestingly, a study conducted in Asia in 2006 found that vasomotor symptoms were not the major concern, but rather muscular and joint problems, followed by memory impairment [8]. In Indonesia, only 5% of women complained of hot flushes, while 93% had joint and muscular symptoms [8].

To overcome climacteric symptoms, some women choose hormonal or non-hormonal treatment [17-20]. Despite its advantages, many Asian women prefer to consume pills of natural substances rather than hormonal pills to treat these symptoms [17]. Even though hormonal therapy is the most effective way to reduce climacteric symptoms, some women are reluctant to undergo hormonal treatment due to the societal belief that it may cause cancer; the high cost of hormonal treatment also increases the tendency to choose alternatives, such as herbal remedies [18-23]. The Asian Menopause Survey in 2010 reported that only 19% of 1000 menopausal women chose hormonal pills as their treatment [17].
The soy-consuming habit underlies the hypothesis that Asian women have the ability to produce equol from the soybean daidzein isoflavone, which acts positively in reducing climacteric symptoms [11]. The study by Ishiwata in 2009 stated that equol supplementation for 12 weeks significantly reduced menopause-related mood alteration symptoms, compared to a control group consisting of pre- and post-menopausal women [9]. The purpose of daidzein supplementation is to improve QOL in menopausal women who experience climacteric symptoms, without the worry of side effects that may occur with hormonal treatment [24].

METHODS

This research was a randomized controlled trial, and the subjects were divided into two groups. The subjects received randomized allocation of treatment: calcium glycerophosphate 500 mg + vitamin D3 35 IU as the control group, and calcium glycerophosphate 500 mg + vitamin D3 140 IU along with daidzein 120 mg as the treatment group, for 8 weeks. This was a single-blind trial: the researchers knew the subject allocation, while the subjects were not aware of which group they belonged to.

The drug was given every day, and drug administration was monitored by a field coordinator for each respondent to minimize compliance bias. At the beginning and the end of treatment, patients filled out the Menopause QOL (MenQOL) questionnaire, after it had been translated and validated, based on their actual complaints. The research was conducted from November 2015 to March 2016.

The subjects were women between 45 and 65 years old who had undergone natural menopause and had had no menstruation for at least 12 months after the last period. Respondents who were active smokers, had been under hormonal therapy during the 6 weeks before testing, were allergic to the drug substances, had liver and/or renal dysfunction, or had a history of breast, endometrial, and/or cervical cancer were excluded.

The samples were enrolled by consecutive sampling. The researchers obtained subjects from a local menopause association, each with symptoms according to the questionnaire filled out by the subject, until the targeted sample size was reached; block randomization was used. After consenting, peripheral blood was taken to measure daidzein and equol levels using HPLC analysis. The intention-to-treat method was implemented in this research. This research was approved by the Ethics Committee of the Faculty of Medicine, Universitas Indonesia, Cipto Mangunkusumo Hospital.

RESULTS

A total of 41 respondents filled out the questionnaire and met the inclusion criteria. One respondent declined consent. Daidzein supplementation was given to 19 subjects (47.5%) and 21 subjects received the control tablet (Table 1).

During the research, four subjects dropped out of treatment. No side effects were reported in the treatment group. Of the selected samples, none were equol producers (Table 2).

From the demographic characteristics table, the respondents' median age was 53 years in the control group and 57 years in the treatment group, with menopausal ages of 50 and 49 years, respectively. The most disturbing climacteric symptoms were in the physical domain. Based on the MenQOL questionnaire, joint and muscular pain was the most disturbing complaint. No significant difference in menopausal symptoms was found between the control and treatment groups before the intervention (Table 3).
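The block randomization mentioned in the methods can be sketched as follows; this is a generic illustration, since the trial's actual block size and random seed are not reported.

```python
import random

def block_randomize(n_subjects, block_size=4, arms=("control", "daidzein"), seed=1):
    """Permuted-block allocation keeping the arms balanced within each block.
    Block size and seed are assumptions for illustration only."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocation = []
    while len(allocation) < n_subjects:
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)               # randomize order within the block
        allocation.extend(block)
    return allocation[:n_subjects]

print(block_randomize(40))
```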
Even though the participants were not equol producers, we observed a clinically and statistically significant reduction in menopause complaint scores in all domains in the treatment group after 8 weeks of supplementation (Table 4). This phenomenon did not occur in the control group. However, the reduction in menopause scores between before and after supplementation was not statistically significant compared to the control group, with p>0.05 (Table 5).

DISCUSSION

Based on the descriptive data, the most complained-of climacteric symptoms were in the physical domain, followed by sexual, vasomotor, and psychosocial symptoms; 87.5% of respondents had experienced joint and muscular pain. The same result was seen in the Pan-Asia Menopause study, which stated that 86.3% of Asian women experience joint and muscular problems during menopause [8,24,25]. A multivariate analysis by Kalarhoudi found that frequency of exercise, physical activity, educational background, satisfaction in family life, income, age, and duration of menopause were factors influencing the quality of life of menopausal women [26].

According to Hong, around 50-60% of Asian women produce equol. The subjects consumed a soy-containing diet at least twice a week on average, although this was not recorded by food recall. Yet, interestingly, laboratory examination could not detect any equol in the blood plasma; therefore, all participants were in a homogeneous category, the non-equol-producer group [26,27].

Table notes: Data regarding complaint domains before and after treatment were not normally distributed and are thus provided as median (min-max); data with a normal distribution are provided as mean (±SD). Score differences were normally distributed and are thus provided as mean (±SD). Score differences were analysed using the paired t-test. CI: confidence interval; *: Mann-Whitney test for each p value; significant if p<0.05.

Setchell stated that the duration and amount of daidzein-containing food intake, the type of intestinal microbiota population (such as bacteria with the ability to reduce sulfate), and the amounts of polyunsaturated fatty acids and vitamins A and E in food influence the level of equol in vivo. The elimination time of equol from plasma is 7-8 hrs [28]. This may be the reason that previous studies conducted a "challenge" by giving soy-containing food prior to examining respondents' equol levels, assessed by urine collection or blood sampling. Even so, in a previous similar study by Botefilia et al., using isoflavone 120 mg without a prior soy challenge, 62% of respondents were equol producers before supplementation was given [29].

Food recall was not conducted in this research, and this introduced bias, because the substance content of the diet could not be measured quantitatively. The equol level after supplementation was not measured, so changes in equol level could not be detected. Yet, the study by Botefilia et al. stated that no significant changes in equol level were found between equol producers and non-equol producers before and after supplementing 120 mg daidzein for 6 weeks [29].
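The two-level statistical comparison used here (paired t-tests on within-group score changes; Mann-Whitney for the between-group comparison of those changes) can be reproduced on synthetic data with scipy. The numbers below are illustrative only, not the trial data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic MenQOL-like scores for 19 treatment and 21 control subjects
before_t = rng.normal(4.0, 2.0, 19); after_t = before_t - rng.normal(1.7, 2.8, 19)
before_c = rng.normal(4.0, 2.0, 21); after_c = before_c - rng.normal(0.9, 2.8, 21)

# Within-group change: paired t-test (score differences ~ normal)
print(stats.ttest_rel(before_t, after_t))

# Between-group comparison of the changes: Mann-Whitney U test
diff_t, diff_c = before_t - after_t, before_c - after_c
print(stats.mannwhitneyu(diff_t, diff_c, alternative="two-sided"))
```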
After 8 weeks of treatment and observation, no side effects occurred with supplementation of daidzein 120 mg/day. To date, no studies have shown any side effects from taking daidzein supplementation. High-dose administration of phytochemicals, however, may be ineffective or unsafe: Ikegami's study showed that high-dose isoflavone (1 g per kg body weight) in pregnant mice was related to low fetal birth weight [30,31]. The use of plant-derived preparations such as soy isoflavone needs further investigation to reveal their real pharmacological and physiological effects across dosages and administration periods [32].

The MenQOL assessment of the daidzein group showed a reduction of 1.73 (±2.84) points in the vasomotor domain, with a 95% CI of 0.36-3.10 and p=0.016. This implies that, if the measurement were repeated in the population, the difference in menopause score before and after 8 weeks of supplementation with daidzein 120 mg, calcium glycerophosphate 500 mg, and vitamin D 140 IU would range between 0.36 and 3.10. A significant reduction occurred in all symptom domains, including overall QOL, as listed in Table 4. However, this result did not occur in the control group. In research conducted by Basaria, consumption of 20 g of soy for 12 weeks decreased menopausal symptoms in all four MenQOL domains [33].

Another study concluded that most Asian women report fewer and milder menopausal symptoms compared to Europeans. Phytoestrogen consumption is believed to be the root of this phenomenon [34]. Even though, on average, the participants consumed soy-containing food more than 2 days a week, the laboratory found no equol in blood plasma. This indicates that the participants had no ability to transform soy isoflavones into the metabolite equol, a product of metabolic degradation by the intestinal flora that has a proven beneficial impact on the physiological changes of menopause [35].

A weakness of this report is the absence of food recall data to measure soy consumption per day quantitatively. The strength of this study compared to previous ones is that the control group also received an active substance, because the respondents were patients with complaints. Patient compliance was guaranteed because the field coordinator distributed the pills every day and attended each person's pill administration. There was almost no bias in filling out the questionnaires, as the researchers accompanied each participant and explained each question. The study was analysed by intention to treat, so those who dropped out of the study remained included in the statistical analysis. This analytical method provided an unbiased assessment of the efficacy of the intervention, including treatment compliance. Patient compliance in this study represents patient compliance in the community.

CONCLUSION

This research found that the physical domain, in particular joint and muscular pain, comprised the most complained-of menopausal symptoms. Complaints of menopausal symptoms were reduced, but not significantly, after 8 weeks of soy germ isoflavone daidzein compared to the control group. The trial did not record any side effects during supplementation with daidzein 120 mg, calcium glycerophosphate 500 mg, and vitamin D 140 IU. Further investigation is needed to understand the factors, other than soy consumption, that may affect the ability to produce the equol metabolite.

Table 2: Respondents' laboratory profile. Data were not normally distributed and are provided as median (min-max); categorical data are provided as percentages. *Mann-Whitney test
Exotic holonomies $\E_7^{(a)}$

It is proved that the Lie groups $\E_7^{(5)}$ and $\E^{(7)}_7$ represented in $\R^{56}$ and the Lie group $\E_7^{\C}$ represented in $\R^{112}$ occur as holonomies of torsion-free affine connections. It is also shown that the moduli spaces of torsion-free affine connections with these holonomies are finite dimensional, and that every such connection has a local symmetry group of positive dimension.

§1 Introduction

The notion of the holonomy of an affine connection was introduced originally by Élie Cartan in the 1920s, who used it as an important tool in his attempt to classify all locally symmetric manifolds. Over time, the holonomy group proved to be one of the most informative and useful characteristics of an affine connection and found many applications in both mathematics and physics. By definition, the holonomy of an affine connection on a connected manifold M is the subgroup of all linear automorphisms of T_pM which are induced by parallel translation along p-based loops.

In 1955, Berger [4] showed that the list of irreducibly acting matrix Lie groups which can, in principle, occur as the holonomy of a torsion-free affine connection is very restricted. Berger presented his classification of all possible candidates for irreducible holonomies in two parts. The first part contains all possible groups preserving a non-degenerate symmetric bilinear form; the second part consists of those groups which do not preserve such a form; the latter part was stated to be complete up to a finite number of missing terms and was given without a proof. Bryant [5] was the first to discover the incompleteness of the second part of Berger's list, and referred to the missing entries as exotic holonomies. Since then, several other families of exotic holonomies have been found [6,7,8]. In this paper we present one more family of exotic holonomies, associated with various real forms of the complex 56-dimensional representation of E_7^C.

Main Theorem. (i) The Lie groups E_7^{(5)} and E_7^{(7)} represented in R^{56} and the Lie group E_7^C represented in R^{112} occur as holonomies of torsion-free affine connections. (ii) Any torsion-free affine connection with one of these holonomies is analytic. (iii) The moduli space of torsion-free affine connections with one of these holonomies is finite dimensional. (iv) Any such connection has a (local) symmetry group of positive dimension.

This theorem is proved by combining twistor techniques of [10], used to compute all the necessary E_7-modules K(e_7), K^1(e_7) and P^{(1)}(e_7), with the construction of torsion-free affine connections with prescribed holonomy via deformations of a certain linear Poisson structure [8].

§2 Borel-Weil approach to E_7^{(a)}

Let V be a vector space and g an irreducible Lie subalgebra of gl(V) ≃ V ⊗ V^*. In the holonomy group context, one is interested in the following three g-modules: (i) the first prolongation g^{(1)}; (ii) the curvature space K(g); (iii) the 2nd curvature space K^1(g) := ker i_2, where i_2 is a composition of natural antisymmetrization maps. The geometric meaning of g^{(1)} is that if there exists a (local) torsion-free affine connection ∇ on a manifold M with holonomy algebra g, then, for any (local) function Γ : M → g^{(1)}, the affine connection ∇ + Γ is again torsion-free and has holonomy algebra g; thus, in some sense, g^{(1)} measures the non-uniqueness of torsion-free affine connections with holonomy g on a fixed manifold.
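For orientation, the standard forms of these three modules, as they appear in the general holonomy literature (e.g. Bryant [5]), are restated below; this is a restatement under that assumption, not a verbatim reconstruction of the displays that accompanied the original definitions.

```latex
\mathfrak{g}^{(1)} = \bigl\{\, T \in \mathrm{Hom}(V,\mathfrak{g}) \;\bigm|\; T(x)y = T(y)x \ \ \forall\, x,y \in V \,\bigr\},
\qquad
K(\mathfrak{g}) = \bigl\{\, R \in \Lambda^2 V^* \otimes \mathfrak{g} \;\bigm|\; R(x,y)z + R(y,z)x + R(z,x)y = 0 \,\bigr\},
\qquad
K^1(\mathfrak{g}) = \bigl\{\, \phi \in V^* \otimes K(\mathfrak{g}) \;\bigm|\; \phi_x(y,z) + \phi_y(z,x) + \phi_z(x,y) = 0 \,\bigr\}.
```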
The significance of K(g) and K^1(g) is that the curvature tensor (respectively, the covariant derivative of the curvature tensor) of a torsion-free affine connection ∇ with holonomy g at a point p ∈ M is represented by an element of K(g) (respectively, of K^1(g)). Therefore, g can be a candidate for the holonomy algebra of a torsion-free affine connection only if K(g) ≠ 0. The question then remains how to compute K(g).

With any real irreducible representation of a real reductive Lie algebra one may associate an irreducible complex representation of a complex reductive Lie algebra. Since all the above g-modules behave reasonably well under this association, we may assume from now on that V is a finite-dimensional complex vector space and g ⊂ gl(V) is an irreducible representation of a complex reductive Lie algebra. Clearly, G = exp(g) acts irreducibly in V^* via the dual representation. Let X̃ be the G-orbit of a highest weight vector in V^*\0. Then the quotient X := X̃/C^* is a compact complex homogeneous-rational manifold canonically embedded into P(V^*), and there is a natural commutative diagram relating X̃, X and P(V^*). In fact, X = G_s/P, where G_s is the semisimple part of G and P is the parabolic subgroup of G_s leaving a highest weight vector in V^* invariant up to a scalar.

Let L be the restriction of the hyperplane section bundle O(1) on P(V^*) to the submanifold X. Clearly, L is an ample homogeneous line bundle on X. We call (X, L) the Borel-Weil data associated with (g, V). According to Borel-Weil, the representation space V can be easily reconstructed from (X, L) as V = H^0(X, L). What about g? The Lie algebra of the Lie group of all global biholomorphisms of the line bundle L which commute with the projection L → X is isomorphic to H^0(X, L ⊗ (J^1 L)^*), a central extension of the Lie algebra H^0(X, TX). Whence, as a complex Lie algebra, H^0(X, L ⊗ (J^1 L)^*) has a natural complex irreducible representation in H^0(X, L) = V; with very few (and well studied in the holonomy context) exceptions [2], this representation is, up to a central extension, isomorphic to the original g. Remarkably enough, the basic g-modules defined above fit nicely into the Borel-Weil paradigm as well.

Proposition 1 [10] For a compact complex homogeneous-rational manifold X and an ample line bundle L → X, there is an isomorphism and an exact sequence of g-modules computing the spaces above.

Proof. The result follows easily from the exact sequences in which the arrows are a combination of a natural monomorphism N^* → V^* ⊗ O_X (which holds due to ampleness of L) with the antisymmetrization. ✷

It is well known that the complex exceptional Lie algebra e_7^C has four real forms, with signatures 0, 54, 64 and 70, respectively (see, e.g., [9,12]). Two of these, e_7^{(5)} and e_7^{(7)}, can be irreducibly represented in R^{56}. Let ρ denote this irreducible real representation of e_7^{(a)}, and let Ad(e_7^{(a)}) denote the adjoint representation.

Proposition. There are isomorphisms K(ρ(e_7^{(a)})) ≃ Ad(e_7^{(a)}) and K^1(ρ(e_7^{(a)})) ≃ V^*.

Proof. We shall prove this statement for the complex representation only. That it is true for the real representations as well will follow from the invariance of all the constructions under the associated real structures in e_7^C. Let (X, L) be the Borel-Weil data associated to ρ : e_7^C → gl(V), V ≃ C^{56}. Then X = E_7^C/P is a 27-dimensional compact complex homogeneous-rational manifold whose tangent bundle TX has, as an irreducible homogeneous vector bundle, a Dynkin diagram representation [3] (the marked diagram is not reproduced here). Here and below, the weights of irreducible homogeneous vector bundles are given in the basis of fundamental weights.
Using Kostant's formula and Table 5 in the reference chapter of [12] to find irreducible decompositions of tensor powers of the simplest 27-dimensional irreducible representation of E_6^C (which, in our case, is isomorphic to the semisimple part of the parabolic P), one obtains the required cohomological data; the Bott-Borel-Weil theorem and Proposition 1 then easily yield the claimed isomorphisms.

Let us next find the explicit form of K(ρ(e_7^C + C)) as a subset of all elements in ρ(e_7^C + C) ⊗ Λ²V^* satisfying the first Bianchi identities. Recall [1] that ρ : e_7^C → gl(V) enjoys a non-zero invariant skew-symmetric product and a non-zero invariant symmetric map, each unique up to a non-zero scalar factor, which satisfy compatibility identities for all A ∈ e_7^C and u, v, s, t ∈ V; here λ and µ are fixed non-zero constants and B( , ) is the Killing form. It is then not hard to check that, for any fixed A ∈ e_7^C, the resulting map defines an element of ρ(e_7^C + C) ⊗ Λ²V^* which lies in the kernel of the relevant composition. Thus, the above formula gives an explicit realization of the isomorphism K(ρ(e_7^C + C)) = Ad(e_7^C). In particular, it shows that K(ρ(e_7^C + C)) = K(ρ(e_7^C)). Having obtained an explicit structure of K(ρ(e_7^C)), it is straightforward to show that a generic element of K^1(ρ(e_7^C)) is determined by some fixed w ∈ V ≃ V^*. This establishes the isomorphism K^1(ρ(e_7^C)) = V^*. ✷

§3 A construction of torsion-free connections

We briefly describe here the construction of torsion-free connections with prescribed holonomy which was presented in [8]. Let g ⊂ gl(V) be a Lie subalgebra, where V is a finite-dimensional vector space. A G-equivariant C^∞-map φ : g^* → Λ²V^* is called admissible if, for every p ∈ g^*, the map dφ^*_p : Λ²V → T^*_p g^* ≃ g lies in K(g). For a given admissible map φ, one may define a Poisson structure on the dual W^* of the semi-direct Lie algebra W = g ⊕ V, where df = A + x and dg = B + y are the decompositions of df, dg ∈ T^*W^* ≃ g ⊕ V, and p ∈ g^*, ν ∈ V^*. This Poisson structure may be regarded as a deformation of the natural linear Poisson structure on W^*.

Let π : S → U be a symplectic realization of an open subset U ⊂ W^*, i.e. π is a submersion from a symplectic manifold S with symplectic 2-form Ω intertwining the two Poisson structures, where { , }_S is the Poisson structure on S induced by the symplectic structure. At those points where the rank of the Poisson structure is maximal, such a symplectic realization exists at least locally.

Regarding each element w ∈ W ≃ T^*W^* as a 1-form on W^*, we define the distribution D spanned by the vector fields ξ_w, where # is the index-raising map induced by Ω. Since Ω is non-degenerate, rank D = dim W. Moreover, for the bracket relations one calculates the commutators of the ξ_w, where A, B ∈ g, x, y ∈ V and p = π(s). Let F ⊂ S be an integral leaf of D. By the very definition of D, F comes equipped with a W-valued coframe θ + ω, where θ and ω take values in V and g respectively, defined by the equation (ω + θ)(ξ_w) = w. Note that, by the first equation in (3), the vector fields ξ_A, A ∈ g, induce a free local group action of G on F, where G ⊂ Gl(V) is the connected Lie subgroup corresponding to g ⊂ gl(V). After shrinking F as necessary, we may assume that M := F/G is a manifold. Standard arguments then imply that there is a unique embedding ı : F ↪ F_V, where F_V denotes the V-valued coframe bundle of M, and a torsion-free connection on M such that ı^*(θ + ω) = θ + ω, where the forms under the pullback denote the tautological and the connection 1-form on F_V, respectively.
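The displayed formula for the deformed bracket did not survive above. Based on the construction in [8] and the decompositions df = A + x, dg = B + y just introduced, it should take, up to sign conventions (an assumption on our part), the following form:

```latex
% Conjectural form of the deformed bracket of [8]; signs and
% normalisations are our assumption:
\{f,g\}(p,\nu) \;=\; p\bigl([A,B]\bigr) \;+\; \nu\bigl(A\,y - B\,x\bigr) \;+\; \phi(p)(x,y),
\qquad p \in \mathfrak{g}^*,\ \nu \in V^*.
% The first two terms form the linear Lie--Poisson bracket of the
% semi-direct product W = g \ltimes V; the last term is the
% deformation by the admissible map phi.
```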
Clearly, the holonomy of this connection is contained in G; in fact, by the Ambrose-Singer holonomy theorem, the holonomy algebra is generated by {dφ^*_p(x, y) | x, y ∈ V, p ∈ π(F)}. A connection which comes from this construction is called a Poisson connection. This leads to the following

Theorem 2 [8] Let g ⊂ gl(V) be a Lie subalgebra, where V is a finite-dimensional vector space, and let K_0(g) := {R ∈ K(g) : span{R(x, y), all x, y ∈ V} = g}. If φ : g^* → Λ²V^* is admissible, and if the open set U_0 ⊂ g^*, given by the points p with dφ^*_p ∈ K_0(g), is non-empty, then there exist Poisson connections induced by φ whose holonomy representations are equivalent to g. Moreover, if φ|_{U_0} is not affine, then not all of these connections are locally symmetric.

It is not clear at present how general the class of Poisson connections is, nor how many irreducible Lie subalgebras g ⊂ gl(V) admit admissible maps φ : g^* → Λ²V^* which are not affine. However, there is a class of Lie subalgebras for which the above construction exhausts all possible torsion-free connections with this holonomy. Namely, we define the g-module P^{(1)}(g) and regard elements φ_2 ∈ P^{(1)}(g) as polynomial maps g^* → Λ²V^* of degree 2. It is then obvious that each G-invariant φ_2 ∈ P^{(1)}(g) is admissible, and we have the following result.

Theorem 3 [8] Let g ⊂ gl(V) be an irreducibly acting subalgebra, and suppose that there is an invariant element φ_2 ∈ P^{(1)}(g) such that the associated G-equivariant linear maps (4) are isomorphisms. Then every torsion-free affine connection whose holonomy algebra is contained in g is a Poisson connection induced by an admissible map of the form φ_2 + τ, where τ ∈ Λ²V^* is a (possibly vanishing) g-invariant 2-form. In particular, the moduli space of such connections is finite dimensional, and each such connection is analytic. Also, the dimension of the symmetry group of this connection equals dim W^* − 2k, where k is the half-rank of the Poisson structure on W^* induced by φ in (2).

At first sight, the premise that the maps (4) be isomorphisms looks like an unreasonably strong condition in order to utilize this theorem. Nevertheless, this premise does hold for the exotic holonomies SO(p,q)SL(2,R) and SO(n,C)SL(2,C) which were discovered in [8]. Also, we will show in §4 that it holds for the representations E_7^{(a)} as well.

For the proof, we shall need the following version of Schur's lemma:

Lemma 4 Let g be a reductive Lie algebra, and suppose that g acts irreducibly on the finite-dimensional vector spaces V and W. If ρ : V → W is a linear map satisfying condition (5), then ρ = 0.

Proof of Theorem 3. Let F ⊂ F_V be a G-structure on the manifold M, where F_V → M is the V-valued coframe bundle of M, and denote the tautological V-valued 1-form on F by θ. Suppose that F is equipped with a torsion-free connection, i.e. a g-valued 1-form ω on F. Since φ'_2 is an isomorphism, the first and second structure equations read as in (6), where a : F → g^* is a G-equivariant map. Differentiating (6) and using that φ''_2 is an isomorphism yields the third structure equation (7) for the differential of a, in terms of a G-equivariant map b : F → V^*, where 𝚥 : V^* ⊗ V → g^* is the natural projection. The multiplication in the first term refers to the coadjoint action of g on g^*. Let us define the map c accordingly; differentiation of (7) yields c_p(x, Ay) = c_p(y, Ax) for all x, y ∈ V and all A ∈ g.

Let s ⊂ X(F) be the Lie algebra of infinitesimal symmetries. Let f : W^* ⊃ U → F be a local function which is constant on the symplectic leaves. Then it is easy to see that #π^*(df) is an infinitesimal symmetry.
It follows that dim s ≥ dim W^* − 2k. On the other hand, if X ∈ s then π_*(X) = 0, hence dim s ≤ dim W^* − 2k. The statements about analyticity and the moduli space are now immediate. ✷

Proof of Lemma 4. Throughout the proof, we make the simplifying assumption that rank g > 1, as the case rank g = 1 is straightforward. Let P ⊂ V^* ⊗ W be the subspace of all maps ρ : V → W satisfying (5). It is easy to verify that P is g-invariant. We complexify g, V and W and pick Cartan and weight space decompositions. Let ρ ∈ P, and let x_µ ∈ V_µ with µ ≠ 0. Then, choosing suitable A, B ∈ t adapted to µ, (5) implies that Aρx_µ = 0, and therefore ρx_µ lies in the sum (11) of weight spaces of W whose weights are scalar multiples of µ. Now let ρ_λ ∈ P be an element of weight λ ≠ 0. Then ρ_λ x_µ ∈ W_{λ+µ}, and thus from (11) we conclude: ρ_λ x_µ = 0 whenever λ, µ are linearly independent. Thus, P has no weights ≠ 0, i.e. P is acted on trivially by g, and from there it is easy to conclude that P = 0. ✷

§4 Proof of the main theorem

Let g ⊂ gl(V) be one of the representations in the Main Theorem. Evidently, the Main Theorem will follow from Theorems 2 and 3 if we can find an element φ_2 ∈ P^{(1)}(g) such that K_0(g) is dense in K(g) and the corresponding maps in (4) are isomorphisms. In particular, (iv) of the Main Theorem follows since, in each case, dim W^* = dim V + dim g = 56 + 133 is odd.
Effective action of three-dimensional extended supersymmetric matter on gauge superfield background

We study the low-energy effective actions for gauge superfields induced by quantum N=2 and N=4 supersymmetric matter fields in three-dimensional Minkowski space. Analyzing the superconformal invariants in the N=2 superspace, we propose a general form of the N=2 gauge-invariant and superconformal effective action. The leading terms in this action are fixed by the symmetry up to coefficients, while the higher-order terms with respect to the Maxwell field strength are found up to one arbitrary function of quasi-primary N=2 superfields constructed from the superfield strength and its covariant spinor derivatives. We then find this function and the coefficients by direct quantum computations in the N=2 superspace. The effective action of the N=4 gauge multiplet is obtained by generalizing the N=2 effective action.

Introduction

Modern interest in three-dimensional supergauge models with extended supersymmetry is motivated mainly by recent progress in constructing and studying the field theories describing the worldvolume degrees of freedom of M2 branes. Such models are usually referred to as the Bagger-Lambert-Gustavsson (BLG) [1] and Aharony-Bergman-Jafferis-Maldacena (ABJM) [2] theories, which are the superconformal Chern-Simons-matter models with N = 8 and N = 6 supersymmetry, respectively. Since the superconformal symmetry is preserved at the quantum level, these theories are dual to superstring theory on the corresponding background within the AdS_4/CFT_3 correspondence.

One of the general problems for the BLG and ABJM models is to study the effective action which would describe the effective quantum dynamics of M2 branes. In particular, such effective actions receive contributions in the gauge field sector induced by quantum matter fields, which can be studied independently of the other contributions. A good starting point for understanding this general issue is the effective action for the Abelian gauge superfield induced by quantum matter superfields. In the present paper we explore the three-dimensional supersymmetric Euler-Heisenberg-type effective action which appears as a result of one-loop contributions from quantum supersymmetric matter. This problem is interesting not only from the point of view of the BLG and ABJM models, but also as a part of the effective action in three-dimensional supersymmetric electrodynamics. In the non-supersymmetric case this problem was studied in [3], but a superspace analysis has never been done (cf. the effective action in four-dimensional supersymmetric electrodynamics, which was studied in superspace in [4,5,6]). In the present paper we fill this gap by deriving the Euler-Heisenberg effective actions for the model of an N = 2 chiral superfield and an N = 4 charged hypermultiplet interacting with background gauge superfields.

In our work we employ the N = 2, d = 3 superspace approach, which is similar to the N = 1, d = 4 superspace. In particular, the N = 2, d = 3 chiral and vector multiplets appear by dimensional reduction from the four-dimensional N = 1 supersymmetric ones, while the hypermultiplet and the N = 4 vector multiplet in three-dimensional Minkowski space originate from the N = 2, d = 4 hypermultiplet and gauge superfield, respectively. We consider the background N = 2, d = 3 gauge superfield constrained by D_α W_β = D_(α W_β) = const, where W_α is the superfield strength.
In components, this constraint corresponds to a constant Maxwell field strength, F_mn = const. Since the classical action of the chiral superfield in the background gauge superfield is superconformal, the resulting effective action should be superconformal as well. We show that gauge and superconformal invariance restrict the functional form of the leading terms in the effective action uniquely, up to coefficients, while the higher-order terms with respect to the Maxwell field strength are encoded in a single arbitrary function of one superconformal quasi-primary superfield. We then find this function, as well as the coefficients, by direct quantum computations in the N = 2, d = 3 superspace. A straightforward generalization of these results to the N = 4 case leads to the effective action of the N = 4 charged hypermultiplet interacting with a background gauge superfield.

The paper is organized as follows. We begin Section 2 with a short review of the chiral superfield model in the N = 2 superspace and specify the constraints on the background gauge superfield under consideration. Then we discuss the general structure of the gauge-superfield-dependent N = 2 supersymmetric effective action subject to the constraints of gauge and superconformal invariance. In Section 3 we compute the one-loop effective actions in the models of the N = 2 chiral superfield interacting with the background gauge superfield, as well as for the N = 4 charged hypermultiplet, using the Fock-Schwinger proper-time technique in the N = 2 superspace. In the last section we discuss the obtained results and their possible generalizations. Appendix A contains basic formulae concerning the N = 2, d = 3 superspace in our conventions. In Appendix B we consider a representation of the superconformal group on superfields in the N = 2 superspace.

2 General structure of the superconformal effective action in N = 2 superspace

Classical action of the chiral superfield interacting with the gauge superfield

In this subsection we review some features of the N = 2, d = 3 chiral and gauge superfield models which will be used in the next sections. Our conventions for the N = 2 superspace are collected in Appendix A. Let us consider the classical action (2.1) for the chiral superfield Q interacting with the Abelian background gauge superfield V, which is invariant under gauge transformations with Λ and Λ̄ being (anti)chiral superfield gauge parameters. The chiral multiplet consists of a complex scalar f, a complex spinor ψ_α and a complex auxiliary scalar F. The vector multiplet in three dimensions is built from one real scalar φ, one complex spinor λ_α, one vector field A_αβ = γ^m_αβ A_m and one real auxiliary scalar D; these fields give the component decomposition of V in the Wess-Zumino gauge.

It is important to specify the background gauge superfield under consideration. In general, the vector multiplet arises within the standard geometric approach based on the covariantization of the flat superspace derivatives, on which the superfield constraints (2.6)-(2.8) are imposed [7,8,9]. The superfield strengths on the right-hand sides of (2.6)-(2.8) satisfy reality properties, and, as usual, there are many Bianchi identities for these superfield strengths which are important for studies of the effective action and quantization.
In particular, the superfield strengths W_α and W̄_α are (anti)chiral and obey the standard Bianchi identities. An important feature of the N = 2, d = 3 superspace formulation of the gauge multiplet is that the superfield strengths W_α, W̄_α are expressed in terms of the scalar superfield strength G, subject to constraints which mean that G is a linear superfield. There are also further useful relations among the superfield strengths.

In the Abelian case the gauge connections for the covariant spinor derivatives in (2.5) can be expressed in terms of one real gauge superfield V. As a consequence of the algebra (2.6, 2.7), the superfield strengths are then given in terms of V, with f_αβ = ∂^ρ_α A_βρ + ∂^ρ_β A_αρ, where the dots stand for terms with derivatives of the fields.

Now we specify the constraints on the background gauge superfield under consideration: i) the gauge superfield obeys the N = 2 supersymmetric free Maxwell equations (2.19); ii) within the derivative expansion of the effective action we look for the leading terms without space-time derivatives of the gauge superfields. Such a long-wave approximation is effectively taken into account by considering the constant background (2.20). This approximation suffices to study the Euler-Heisenberg-type effective action which is induced by the N = 2 supersymmetric quantum matter fields.

Superconformal invariance and the effective action

In this subsection we analyse the general structure of the effective action in the model (2.1), employing the constraints imposed by gauge and superconformal invariance. A similar analysis for the N = 2, d = 4 superconformal models [10] proved very useful, because it helped to construct an off-shell extension of the terms in the gauge superfield effective action computed in the on-shell approximation. Here we will follow similar lines, using the realization of the superconformal group in the N = 2, d = 3 superspace developed in [11], which is a three-dimensional extension of the general method described in [12].

In general, the effective Lagrangian depends on the gauge superfield V, its superfield strengths G, W_α, W̄_α and their derivatives. The only gauge-invariant term with explicit dependence on the gauge superfield V which cannot be rewritten in terms of the superfield strengths is the Chern-Simons term [8,9,13], where k is the Chern-Simons level. All other terms in the effective Lagrangian depend only on the superfield strengths and their derivatives. Recall that we restricted ourselves to the long-wave approximation (2.20), which means that we omit all terms with space-time derivatives of superfields, although covariant spinor derivatives can appear in the effective Lagrangian. In this approximation there is a very limited number of building blocks, i.e., superfield combinations on which the effective action can depend. First of all, it depends on the superfield strength G, as well as on W_α and W̄_α, which involve first covariant spinor derivatives of G, (2.12). Next, there are the objects with two covariant spinor derivatives of G, (2.22). Note that it is sufficient to consider the objects (2.22) with symmetrized spinor indices, since D_[α W_β] = (1/2) ε_αβ D^γ W_γ = 0 for the considered background (2.19). Note also that, owing to the identity (2.15), N̄_αβ coincides with N_αβ up to a sign when ∂_m U = 0.
Finally, it is clear that any further spinor derivatives of the superfield strengths vanish in the long-wave approximation (2.20). We conclude that the general structure of the gauge-invariant effective action is given by (2.25), where c_0 is an arbitrary coefficient and L_eff is an effective Lagrangian, a real scalar superfield. Further restrictions on the structure of the function L_eff come from the requirement of superconformal invariance.

As a warm-up exercise we check the superconformal invariance of the classical action (2.1). Indeed, using the explicit realization of the superconformal group in the N = 2 superspace given in Appendix B, we consider the superconformal transformations of the gauge and matter superfields; in particular, G is a quasi-primary superfield. Using (B.26) we immediately find the invariance. Hence, the superconformal invariance imposes constraints only on the function L_eff in (2.25).

In general, the effective Lagrangian contains the effective potential term F(G), where F(G) is a function of G only, while L̃_eff takes into account the superfield strengths with covariant spinor derivatives. The superconformal invariance restricts the form of the effective potential F(G) uniquely, up to a constant. Indeed, the general condition of superconformal invariance (B.27) applied to the effective potential reads as (2.31), where the function K(G) should be linear, with α and β some (complex) constants. Up to terms vanishing under the integral over the full N = 2 superspace, the general solution of (2.31) is given by F(G) = c_1 G ln G, where c_1 is some constant. This effective potential is responsible for a superconformal generalization of the Maxwell term in its component decomposition, where the dots stand for other component terms. Note that the Lagrangian (2.33), considered in the N = 1, d = 4 superspace, is responsible for the classical action of the improved tensor multiplet model [12].

It is much more difficult to make a general analysis of the admissible form of the function L̃_eff in (2.30) subject to the superconformal invariance of the corresponding action. The problem is that the superfields W_α, W̄_α and N_αβ are not quasi-primary: as (2.35) shows, W_α transforms inhomogeneously because of its last term, where ω_αβ = D̄_(α ξ̄_β) = −D_(α ξ_β) are the parameters of 'local' Lorentz transformations. This is a new feature of three-dimensional supergauge models as compared to the N = 1, d = 4 ones, in which the superfield strengths are chiral quasi-primary [12,14,15]. Therefore the superfields W_α and W̄_α are rather inconvenient for constructing superconformal actions, and we are forced to introduce the quasi-primary superfields Ψ and Ω², (2.36). Indeed, using (2.28) and the relations (B.15, B.16), one can readily check that both these superfields are quasi-primary with zero scaling dimension. This allows us to construct a superconformal action (2.38), where U(Ψ, Ω²) is an arbitrary function. Neither the gauge invariance nor the superconformal symmetry imposes any restrictions on the possible form of the function U(Ψ, Ω²) in (2.38). However, for the background gauge superfield under consideration (2.19, 2.20) the form of this function can be further reduced.
Indeed, for such a background there are equivalent representations (2.39) for Ψ and Ω². Owing to the odd statistics of the superfield strengths W_α and W̄_α, the power expansion of U(Ψ, Ω²) over Ψ terminates at second order, (2.41). Under the integral over the N = 2 superspace the first two terms in the right-hand side of (2.41) can be brought to the form of the last term, (2.42), where Ũ_2 is some function. Indeed, to check (2.42) one has to take the covariant spinor derivatives from N_αβ = D_α W_β in (2.40) and integrate them by parts. Note that these derivatives hit only the superfield G, but not N_αβ, because of the restrictions (2.20). Doing so, one can accumulate the factor W²W̄² in the numerator, resulting in Ψ² according to (2.39), while the remaining factors can be represented by some function Ũ_2(Ω²).

These considerations show that in the long-wave approximation the superconformal action (2.38) simplifies, so that it is described by a single function H(Ω²) of one real variable, (2.43). There are no further constraints on the form of this function. This function will be computed explicitly in the next subsection, but in general it is represented by a power series H(Ω²) = Σ_{n=0}^∞ a_n Ω^{2n} with some coefficients a_n. The action (2.43) contains the component terms (2.44) in its decomposition.

Summing up, we conclude that the general form of the superconformal effective action in the long-wave approximation is given by (2.45). In components, this action contains the Chern-Simons term (2.21), the Maxwell F² term (2.34) and all higher-order terms F^{2n} with n ≥ 2, which are written down in (2.44). The undefined coefficients c_0, c_1 and the arbitrary function H will be found in the next section by explicit quantum computations.

In conclusion of this subsection we comment on the uniqueness of the form of the superconformal action (2.45). The Chern-Simons and Maxwell terms in this action are fixed by the gauge and superconformal invariance uniquely, up to the coefficients, but the form of the last term Γ_higher is not unique. Indeed, we used the ansatz (2.38), which involves only the two quasi-primary superfields (2.36), but other ansätze are also possible. In particular, there is a sequence of descendant quasi-primary superfields for (2.36), given by Ψ_n = ((i/G) D^α D_α)^n ln G, δΨ_n = ξΨ_n, n = 3, 4, 5, ... (2.49), which can also be used for constructing superconformal invariants in the N = 2 superspace.

The four-dimensional N = 1 and N = 2 supersymmetric Euler-Heisenberg effective actions were studied in [4,10] at one loop. The two-loop refinement of these results was given in [5] and [6], owing to the powerful covariant perturbation theory in superspace elaborated in [16]. Here we apply some of the methods developed in [4,5,6] to study the structure of the effective action in the three-dimensional model of a chiral superfield interacting with the background gauge superfield. The effective action for the model (2.1) can be divided into parity-odd and parity-even parts, (3.1). As the classical action (2.1) is parity even, the appearance of an odd part in the effective action can only be due to the parity anomaly, which is studied in detail in [3,17] and reviewed in [18]. The anomaly appears owing to the regularization of infrared-divergent momentum integrals and yields a term proportional to the Chern-Simons action, Γ_odd ∝ S_CS. We will compute Γ_odd at the end of this subsection, while for now we concentrate on Γ_even.
For Q and Q̄ it is convenient to introduce the covariantly (anti)chiral superfields [19], which are annihilated by the gauge covariant derivatives (2.16), In terms of these superfields the action (2.1) is simply The matrix of second variational derivatives of this action is given by where δ_+ and δ_− are covariantly (anti)chiral delta-functions, The matrix (3.5) leads to the following one-loop effective action, Introducing the covariant (anti)chiral d'Alembertians the effective action (3.7) can be rewritten as where Tr_+ and Tr_− denote the functional traces of the corresponding operators in the chiral and antichiral superspaces, respectively. The operators □_+ and □_− acting on the covariantly (anti)chiral superfields have the following representations The terms D^α W_α and D̄_α W̄^α in (3.10) and (3.11) can be omitted, since we consider the special background (2.19). Then the operators (3.10) and (3.11) obey the following important properties There are covariantly (anti)chiral Green's functions G_+ and G_− for these operators, These Green's functions can be represented by their heat kernels, (3.14) In what follows we omit the factor e^{−ǫs} in the integrals over the proper time s for brevity, assuming the limit ǫ → +0 after calculating the integrals. In terms of the chiral heat kernel the effective action (3.9) reads As a result, the problem of computing the effective action is reduced to finding the coincidence limit of the chiral heat kernel, with the Green's function G_v and associated heat kernel For the special background under consideration, D^α W_α = D̄_α W̄^α = 0, this operator has the following important properties which are used to relate the (anti)chiral Green's functions (3.13) with G_v, as well as the corresponding heat kernels, Therefore it is sufficient to study the heat kernel K_v, while the (anti)chiral ones are deduced from K_v by (3.21). The heat kernel K_v can be represented as For the constant field background (2.19), (2.20) there are the following identities which allow us to factorize the exponent in (3.22), where O(s) = e^{s(W^α ∇̄_α − W̄^α ∇_α)} (3.25) and the reduced kernel K̃(z, z′|s) solves the equation Let us consider the following representation for the delta-function in the full superspace, where ζ^A is the N = 2 supersymmetric interval, (3.28) Using (3.26) and (3.27) we arrive at the following representation for the heat kernel K̃, The integration over d³k in (3.29) can be done explicitly; see [16] for the details of similar computations in the four-dimensional case, (3.30) The determinant in (3.30) is over the Lorentz indices of the matrix F_m{}^n introduced in (2.8). This determinant can be evaluated explicitly (see [3] for analogous computations in non-supersymmetric three-dimensional electrodynamics), where Here the identity (2.14) has been used. Now we return to the computation of the heat kernel K_v, which is expressed in terms of K̃ as in (3.24). For this purpose we need to push the operator O(s) through the components of the superinterval ζ^A in (3.30). Using the identities we arrive at the following final expression for K_v Recall that we need the chiral heat kernel K_+ for the effective action (3.15), which is related to K_v by (3.21). At coincident points, z = z′, it is easy to argue that the operator ∇² in (3.21) hits only ζ̄²(s), Finally, using the identity we get

K_+(s) = K_+(z, z|s) = \frac{1}{8(i\pi s)^{3/2}}\, s^2 W^2\, e^{i s G^2}\, \frac{\tanh(sB/2)}{sB/2} . (3.39)
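As a quick consistency check of the reconstructed kernel (our remark, not part of the original text): since tanh x / x → 1 as x → 0, the field-dependent factor in (3.39) smoothly reduces to unity in the weak-field limit,

\[
\lim_{B \to 0} \frac{\tanh(sB/2)}{sB/2} = 1
\quad\Longrightarrow\quad
K_+(z,z|s) \;\xrightarrow{\;B \to 0\;}\; \frac{s^2 W^2}{8(i\pi s)^{3/2}}\, e^{i s G^2},
\]

so no spurious singularity is introduced at vanishing field strength B.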
The corresponding one-loop effective action (3.15) reads where B is given by (3.32), Finally, we rewrite (3.40) in the full N = 2 superspace, The superfield strength G in (3.40) serves as an effective mass regulator for infrared divergences. In fact, a non-zero vev G ≠ 0 generates a mass for the matter superfield which is equal to the central charge of the N = 2 superalgebra. In other words, we derived the effective action (3.40) in the Coulomb branch of the N = 2 supergauge theory. Alternatively, one can consider the standard mass term m ∫ d³x d²θ Q², but it violates parity and requires more careful consideration. These issues were studied in detail in [20]. We will consider the hypermultiplet model with the complex mass in the next section. Now we come back to the derivation of the parity odd part of the effective action (3.1). The reason why Γ_odd and Γ_even cannot both be derived within the unified procedure given above is quite similar to the non-supersymmetric case considered in [3]: the Chern-Simons term formally vanishes in the approximation of constant fields (2.20), but the variation of the Chern-Simons term with respect to the gauge superfield produces a non-vanishing current. Therefore the Chern-Simons term in the effective action can be obtained by integrating the variation where J is the effective current, The propagator ⟨Q Q̄⟩ is expressed in terms of the Green's function (3.18), Using the explicit form (3.36) for G_v, we find The effective current (3.45) contains both finite and infrared divergent parts. All finite contributions to the effective action are parity even and are already taken into account in (3.40). The parity odd contributions arise from the divergent part, which reads (3.46) Regularizing this integral appropriately we find Substituting this current into (3.42) we obtain the odd part of the effective action, Representing the effective action (3.41) in the superconformal form (3.50), (3.51) allows us to relax the on-shell constraint (2.19). Indeed, there are infinitely many ways of complementing the effective action (3.41) by terms vanishing on the classical equations of motion, but the superconformal invariance fixes this freedom and gives the unique answer (3.50), (3.51) for such an action. Therefore we conclude that (3.49), (3.50), (3.51) are the correct off-shell contributions to the low-energy effective action of the chiral superfield interacting with the background gauge superfield. These conclusions are completely analogous to the ones in [10] for the four-dimensional N = 2 superconformal theories. In principle, one might think that the superconformal invariance allows one to go beyond the long-wave approximation (2.20), but this is not completely true, because when the space-time derivatives are taken into account the terms (3.50) and (3.51) in the effective action may be corrected by contributions involving the higher-order superconformal invariants (2.50). The analysis of contributions to the effective action with space-time derivatives is a hard task, and therefore we restrict ourselves to the long-wave approximation (2.20). An interesting feature of the three-dimensional theory is that the proper time integral in (3.40) can be expressed in terms of special functions. In particular, for real B (constant electric field) this integral is represented by the following combination of generalized Riemann zeta functions (3.52). (The generalized Riemann zeta function is defined by the series ζ(s, q) = Σ_{n=0}^∞ (q + n)^{−s}, valid for Re(s) > 1, Re(q) > 0, but it can be analytically continued to other values of the arguments; this function is also referred to as the Hurwitz zeta function, see, e.g., [21].)
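Since the Hurwitz zeta function just quoted may be unfamiliar, here is a minimal numerical illustration of its defining series (our sketch, not from the paper; mpmath's zeta(s, a) implements the analytically continued function):

```python
# Truncated Hurwitz zeta series vs. mpmath's built-in implementation.
# zeta(s, q) = sum_{n>=0} (q + n)^(-s), convergent for Re(s) > 1, Re(q) > 0.
from mpmath import mp, zeta

mp.dps = 15
s, q = 2.5, 0.75

partial = sum((q + n) ** (-s) for n in range(200_000))  # truncated series
exact = zeta(s, q)                                      # analytic continuation

print(partial)  # agrees with `exact` to ~8 digits (truncation error ~ N^(-3/2))
print(exact)
```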
This representation allows us to consider the strong electric field background, B ≫ 1, The non-vanishing imaginary part of the effective action signals vacuum instability in a strong electric field. For imaginary B (constant magnetic field) one can replace B → −iB in (3.52) to see that the effective action is real for any value of the field.

Low-energy effective action for N = 4 gauge multiplet

The classical action for the N = 4, d = 3 hypermultiplet is obtained by dimensional reduction from the N = 2, d = 4 hypermultiplet, which is described in N = 1, d = 4 superspace in [12,19]. In our case it is given by a pair of chiral superfields (Q_+, Q_−) in the N = 2, d = 3 superspace, where the subscripts '+' and '−' indicate that these superfields carry the corresponding charges with respect to the gauge superfield. We consider the minimal gauge interaction of the hypermultiplet with the N = 4 vector multiplet described by the pair (V, Φ), where V is a real N = 2 gauge superfield and Φ is a chiral N = 2 superfield. The corresponding massless action reads The massive case can be obtained from (3.54) by the shift Φ → Φ + m, with m a complex mass parameter. It is convenient to unify the chiral superfields Q_+ and Q_− with opposite charges into a chiral doublet Q [5], while for the gauge superfield V and its superfield strengths we introduce where σ_1, σ_2, σ_3 are the Pauli matrices. Then the action (3.54) reads We will also use the covariant spinor derivatives covariantized by the matrix gauge superfield (3.56), as well as the covariantly chiral superfields, With these notations the action (3.57) takes the form We are interested in the one-loop effective action Γ_{N=4}[V, Φ] in the model (3.54), which is obtained by integrating out the charged hypermultiplet, with V and Φ being the background superfields. The constraints on the considered vector background (2.19), (2.20) should be extended by the following constraint on Φ, As in the previous subsection, we compute the matrix of second variational derivatives, where δ_+ and δ_− are gauge covariant (anti)chiral delta-functions defined with respect to the gauge covariant derivatives (3.58). Then the one-loop effective action reads [5] Here Tr takes into account not only the functional trace of the corresponding operators, but also the matrix trace, since we deal with the matrix gauge superfield (3.56). One can easily see that the matrix trace gives an extra coefficient of 2 in (3.63) as compared to (3.9). For the considered background (3.61) the expression (3.63) simplifies, Hence, we can immediately write down the answer for the effective action (3.64) by making the R-invariant shift G² → G² + Φ̄Φ in the action (3.40), or, in the full N = 2 superspace it reads We point out that there is no Chern-Simons term induced by the quantum corrections from the N = 4 hypermultiplet, since this model has no parity anomaly; see, e.g., [18] for a review. This was also checked in [22] by explicit quantum computations in the N = 3, d = 3 harmonic superspace. As in the N = 2 case, the effective action (3.66) should be superconformal.
The terms in the first line of (3.66) obviously respect the N = 2 superconformal symmetry, because the superfield Φ̄Φ transforms under the superconformal group in the same way as G². However, the second line of (3.66) needs to be rewritten in a superconformal form. For this purpose we consider the following generalizations of the superconformal quasi-primary superfields (2.36), It is easy to see that these superfields are N = 2 quasi-primary and transform as in (2.37). When the gauge multiplet is constrained by (2.19), (2.20), (3.61), the superfields (3.67) can be represented as follows These representations allow us to rewrite the effective action (3.66) in the N = 2 superconformal form, It is easy to see that (3.70) can be obtained by dimensional reduction from the action of the N = 2, d = 4 improved tensor multiplet formulated in the N = 1, d = 4 superspace in [23], which was recently revisited in [24]. In the three-dimensional case an action of the form (3.70) was studied in [7]. It is interesting to note that (3.70) was recently obtained in [25] as a dual representation of the classical action of the Abelian Gaiotto-Witten model. The Gaiotto-Witten model [26] is the N = 4 supersymmetric Chern-Simons-matter theory with one hypermultiplet in the bifundamental representation of the twisted gauge group G_1 × G_2, where the gauge superfields corresponding to the two groups G_1 and G_2 have Chern-Simons rather than SYM kinetic terms. The authors of [25] showed that in the Abelian case one of these gauge superfields, together with the hypermultiplet, can be eliminated from the classical action, resulting in an action for the second gauge superfield which turns out to have the form (3.70). Hence, the classical action of the Abelian Gaiotto-Witten model in the representation (3.70) arises as the leading term in the effective action of the charged hypermultiplet model. Our final comment is that the effective action (3.70) hints at the form of the effective Kähler superpotential. Indeed, for vanishing gauge superfield, G = 0, the expression (3.70) reduces to where ϕ is the lowest component of Φ and the dots stand for terms involving other component fields. It would be interesting to perform an independent computation of the Kähler superpotential as a part of the effective action in the three-dimensional N = 2 Wess-Zumino model.

Summary and discussion

In this paper we studied the one-loop effective action for three-dimensional N = 2 and N = 4 gauge superfields induced by quantum supersymmetric matter fields. We restricted ourselves to the long-wave approximation, in which the background gauge superfield is constant with respect to the space-time coordinates and obeys the free supersymmetric Maxwell equations. In the non-supersymmetric case such an action is known as the Euler-Heisenberg effective action, which was studied for three-dimensional electrodynamics in [3]. The present work is a supersymmetric generalization of the results of [3]. Before computing the effective action in the model of the N = 2 chiral superfield interacting with the background gauge superfield, we found the general form (2.45) of such an action subject to the constraints of gauge and superconformal invariance. The leading terms in this action are given by the Chern-Simons term (2.46) and by a superconformal generalization of the Maxwell action (2.47). The functional form of these two terms is fixed by the superconformal invariance uniquely, up to the coefficients.
The higher order terms with respect to the Maxwell field strength are taken into account by the action (2.48), which is found up to one arbitrary function H of the quasi-primary superfields (2.36) in the N = 2 superspace, which are constructed in terms of the superfield strength G and its covariant spinor derivatives. This analysis is quite similar to [10], where the low-energy effective action in the N = 2, d = 4 supergauge theory was expressed in terms of superconformal invariants. After considering the general structure of the superconformal action we computed it explicitly by integrating out the chiral superfields interacting with the background gauge superfield. The results of the calculations match the previously proposed form (2.45): the coefficients in the Chern-Simons and Maxwell terms are fixed as in (3.49) and (3.50), while the higher-order contributions with respect to the Maxwell field strengths are represented by the action (3.51), which is expressed in terms of the quasi-primary superfields (2.36). The effective action for the N = 4 gauge superfield is obtained in the form (3.66). It has no Chern-Simons term, since there is no parity anomaly for the model of a charged hypermultiplet (see, e.g., [18] for a review). The absence of the Chern-Simons term in the charged hypermultiplet model was also checked in our recent work [22] using direct quantum computations in the N = 3, d = 3 harmonic superspace. Therefore the effective action (3.66) starts from the Maxwell term (written in the N = 2 superspace in a superconformal form) and contains all higher orders of the Maxwell field strength in components. It is interesting to note that the leading terms without derivatives in the effective action for the N = 4 gauge superfield coincide with the classical action of the Abelian Gaiotto-Witten model rewritten in [25] in terms of a dynamical gauge superfield. Therefore one can consider the Abelian Gaiotto-Witten model as the effective theory induced by the quantum hypermultiplet superfield. One of the applications of the obtained effective actions for the N = 2 and N = 4 gauge theories may lie in the study of mirror symmetry [27] for three-dimensional gauge theories. Mirror symmetry is a duality of three-dimensional gauge theories which relates one field theory at strong coupling to another theory in the perturbative regime. In particular, the leading term (3.50) in the N = 2 gauge superfield effective action is known to be dual to the Kähler sigma model studied in [28]. Since we derived not only the leading term (3.50) but also a number of derivative contributions (3.51) in the N = 2 effective action, it is natural to look for the corrections to the sigma model considered in [28] due to the terms (3.51). In a similar way it would be interesting to explore the duality for the N = 4 gauge superfield effective action (3.69). Note that modern applications of mirror symmetry for three-dimensional models with N = 2 and N = 4 supersymmetry are helpful for the studies of ABJM-like theories [29]. It is natural to consider the N = 2 chiral superfield interacting with the background gauge superfield and the N = 4 charged hypermultiplet as parts of the N = 2 and N = 4 supersymmetric three-dimensional electrodynamics, respectively. In this case the one-loop Euler-Heisenberg-type effective actions obtained in the present paper receive two-loop (as well as all higher-loop) corrections, which are tempting to study.
For four-dimensional supersymmetric electrodynamics the two-loop corrections to the supersymmetric Euler-Heisenberg effective action were computed in [5,6], but in the three-dimensional case this problem has never been addressed. Finally, it is interesting to study the effective action in the non-Abelian N = 2 and N = 4 three-dimensional supergauge models and then to extend these results to the theories with N = 6 and N = 8 supersymmetry, which are worldvolume field theories of M2 and D2 branes. This would open the possibility of studying the effective actions in the BLG and ABJM theories, which would give an effective quantum description of multiple M2 branes. There are also various deformations of the BLG and ABJM models [30,31] which are interesting from the point of view of the AdS_4/CFT_3 correspondence, because they correspond to infrared stable superconformal points in three-dimensional N = 2 supergauge theories [31,32,33]. It is natural to study the problem of the effective action in these models as well.

In the present paper we use the conventions for the three-dimensional gamma matrices following our previous works [22,34]. In particular, the gamma matrices (γ^0)_α{}^β = −iσ_2, (γ^1)_α{}^β = σ_3, (γ^2)_α{}^β = σ_1 obey the Clifford algebra {γ^m, γ^n} = −2η^{mn}, η^{mn} = diag(1, −1, −1), (A.1) and the following orthogonality and completeness relations We raise and lower the spinor indices with the ε-tensor, e.g., (γ^m)_{αβ} = ε_{ασ}(γ^m)^σ{}_β, ε_{12} = 1. Any vector index can be converted into a pair of spinor ones by the following rules The N = 2, d = 3 superspace is parametrized by the coordinates z^M = (x^m, θ^α, θ̄^α) with θ̄^α = (θ^α)^*. The covariant spinor derivatives obey the standard anticommutation relation The integration measure in the full N = 2, d = 3 superspace is defined as for some field f(x). Here we use the following conventions for contractions of the spinor indices The chiral subspace is parametrized by z_+ = (x^m_+, θ^α), where x^m_± = x^m ± iγ^m_{αβ} θ^α θ̄^β. The chiral superfields are defined as usual, D̄_α Φ = 0 ⇒ Φ = Φ(x^m_+, θ^α). The integration measure in the chiral superspace, d⁵z ≡ d³x d²θ, is related to the full superspace measure (A.7) as

B. Superconformal transformations in N = 2 superspace

Here we review a representation of the superconformal group on superfields in the N = 2, d = 3 superspace which was used in Section 2.2 (see [11] for some details; the analogous construction for the N = 1, d = 4 superspace was given in [12]). Let us consider the infinitesimal superconformal transformations of the coordinates of the N = 2 superspace z^A = (x^{αβ}, θ^α, θ̄^α), where δ_sc z^A explicitly reads Here a, b, k^{αβ}, η^α, η̄^α are the parameters of dilatations, U(1) transformations, special conformal transformations and S-supersymmetry transformations, respectively. These transformations can be shown to obey the superconformal algebra osp(2, R|2).
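As a small numerical sanity check of the Appendix A conventions (an illustration added here, not part of the paper), the 2×2 realization of the gamma matrices quoted in (A.1) can be verified directly:

```python
# Verify {gamma^m, gamma^n} = -2 eta^{mn} for gamma^0 = -i sigma_2,
# gamma^1 = sigma_3, gamma^2 = sigma_1, with eta = diag(1, -1, -1).
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

gamma = [-1j * s2, s3, s1]          # gamma^0, gamma^1, gamma^2
eta = np.diag([1.0, -1.0, -1.0])

for m in range(3):
    for n in range(3):
        anti = gamma[m] @ gamma[n] + gamma[n] @ gamma[m]
        assert np.allclose(anti, -2 * eta[m, n] * np.eye(2))

print("Clifford algebra (A.1) verified for the chosen representation")
```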
2010-04-01T04:05:41.000Z
2010-03-25T00:00:00.000
{ "year": 2010, "sha1": "011978e66dd3ce79f7dec9137c36327dd25644fa", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1003.4806", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "011978e66dd3ce79f7dec9137c36327dd25644fa", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
269106872
pes2o/s2orc
v3-fos-license
Effect of Hot-Pressing Temperature on the Properties of Eco-Friendly Fiberboard Panels Bonded with Hydrolysis Lignin and Phenol–Formaldehyde Resin

Lignin is the natural binder in wood and lignocellulosic plants and is regarded as the main natural and renewable source of phenolic compounds. Its incorporation in the composition of fiberboards will enhance both the environmental performance of the panels and the complex use of natural resources. In recent years, the increased valorization of hydrolysis lignin in value-added applications, including adhesives for bonding fiberboard panels, has gained significant research interest. Notably, a major drawback is the retention of lignin in the pulp until the hot-pressing process. This problem could be overcome by using a small content of phenol–formaldehyde (PF) resin in the adhesive mixture as an auxiliary binder. The aim of this research work was to investigate and evaluate the effect of the hot-pressing temperature, varied from 150 °C to 200 °C, in a modified hot-press cycle on the main physical and mechanical properties of fiberboard panels bonded with unmodified technical hydrolysis lignin (THL) as the main binder and PF resin as an auxiliary one. It was found that panels with very good mechanical properties can be fabricated even at a hot-pressing temperature of 160 °C, while providing the panels with satisfactory waterproof properties requires a hot-pressing temperature of at least 190 °C.

Introduction

The fiberboard industry is characterized by sustainable production levels, as the dry-process method is currently dominant [1]. In the dry process, the properties of the panels are mainly attributed to the adhesion bonds. Currently, the production of wood-based panels is dominated by synthetic, formaldehyde-based resins [2,3]. Notably, due to the environmental aspects of manufactured wood-based composites, the content of free formaldehyde in the panels is significantly limited, and from 2026, emission class E0 (formaldehyde content up to 4.0 mg/100 g) will be mandatory for all EU member states [4,5]. A viable solution to address this issue is the use of sustainable, formaldehyde-free, bio-based wood adhesives as partial or complete replacements for the commonly used thermosetting formaldehyde-based adhesives [6][7][8][9]. In recent years, lignin has attracted tremendous interest from both academia and industry as an abundant, renewable and environmentally friendly alternative to petroleum-based products for a number of end uses, including wood adhesives [10,11].
It should be noted that significant amounts of technical lignin, estimated at approximately 100 million tons per year, are generated as a by-product of the pulp and paper industry, of which only about 2% is used for value-added applications, while the rest is primarily used as a fuel [12,13]. The types of industrial lignin, depending on the production method, are divided into Kraft lignin, organosolv lignin, hydrolysis lignin and lignosulfonates [8]. Kraft lignin is the most common type of technical lignin obtained on a commercial scale. Kraft lignin has been used for manufacturing water-soluble products, e.g., polyurethane, carbon fiber, flame retardants, etc. However, it is mainly burned in production facilities to obtain thermal energy and partially regenerate some chemical reagents. Organosolv lignin is obtained in very small amounts, and when using lignosulfonates as binders in the production of wood-based panels, significant deterioration of the waterproof properties of the materials is observed [14][15][16]. Technical hydrolysis lignin is obtained as a residual product during acid or enzymatic biomass hydrolysis to sugars to produce bioethanol or other valuable raw materials [17]. In Bulgaria, the plants located in Razlog and Dolna Mitropoliya process waste wood and agricultural lignocellulosic raw materials into sugars with an estimated yield of 38%. The production capacity is 12,000 t/y of yeast with 50% protein content and 600 t/y of furfural. The remaining technical hydrolysis lignin is deposited in the landfill between the towns of Bansko and Razlog. At the beginning of 2023, the available quantities of technical lignin in the landfill were about 200,000 tons. Waste hydrolysis lignin represents about 40% of the raw material in the hydrolysis process described above. This makes hydrolysis lignin a particularly promising candidate for wood adhesive applications.

In general, lignin is a natural and renewable biopolymer that plays a significant role as a binder in wood and lignocellulosic plants. The different types of linkages of the lignin precursors are presented in Figure 1 [18].
Figure 1 shows that surface-active groups are of primary importance for lignin in its capacity as a binder. Research in the field has proven that lignin is connected to cellulose mainly through ether bonds, acetal bonds and ester bonds. Ether and acetal bonds are formed between Cα of lignin and C6 of cellulose, while ester bonds are formed between Cγ of lignin and C6 of cellulose [19]. In general, lignin is not highly reactive, which, in some research in the field, is overcome through various modifications [20][21][22][23].

Another option is the modification of hot-pressing technology, applying initial low pressure, followed by increased pressure [24] and subsequent cooling [15,25]. One of the main problems with this approach is lignin retention in the wood fibers until it is plasticized and activated. This drawback can be successfully overcome by using an auxiliary binder [15,26]. Phenol-formaldehyde (PF) resin is suitable for this purpose because of its use in the industrial manufacture of wood-based panels and because of the ability of lignin, under certain conditions, to bond with phenol [27].
In all these processes, the correct hot-pressing temperature is essential for optimizing lignin bonds, lignin-cellulose bonds, and overall fiberboard properties. It should be emphasized that increasing the hot-pressing temperature has a beneficial effect on the properties of fiberboard panels fabricated with standard formaldehyde-based resins [28,29]. However, the influence of hot-pressing temperature has not been investigated in detail when using a modified pressing cycle and an adhesive composition with hydrolysis lignin as the main binder and phenol-formaldehyde resin as the auxiliary one, which determines the main aim and novelty of the present work.

Materials

Wood pulp fabricated by the "Asplund" (Stockholm, Sweden) thermomechanical method of refining was obtained from Welde-Bulgaria AD Troyan, Troyan, Bulgaria, and used to produce fiberboard panels. The pulp was composed of beech and Turkish oak in a ratio of 2:1. The supplied fibers were characterized by a bulk density of 29 kg·m⁻³ and a moisture content of 11.2%.

The phenol-formaldehyde (PF) resin used was produced by Prefere Resins Romania SRL (Rasnov, Romania) and also provided by Welde Bulgaria AD (Troyan, Bulgaria). The PF resin had the following main characteristics: solids content of 46%; viscosity of 358 mPa·s; and pH value of 6.8.

Batch technical hydrolysis lignin (THL), supplied from the depot in Razlog, Bulgaria, was used to carry out the experimental work. THL was a residual product from acid hydrolysis. Hydrolysis sugars and furfural were obtained in the production process and were further subjected to chemical and biochemical processing. The process was carried out with diluted sulfuric acid at a concentration of 0.5-1%, and the duration of hydrolysis was 200-240 min at a maximum temperature of 190 °C.
When taking the batch of THL from the landfill for the study, the surface layer of the lignin pile was removed to reduce the ash content. In previous studies conducted by the authors, the lignin from this batch was investigated regarding its potential applicability as biochar [30], with a view to using it for optimizing a modified hot-pressing cycle in the production of fiberboard panels with THL as a primary binder [15], and to assess the effect of the THL-to-PF resin ratio on the properties of eco-friendly panels [25]. Notably, the effect of the hot-pressing temperature on the physical and mechanical properties of fiberboards with THL as a primary binder has not been investigated, which justifies the aim of the present research.

The chemical composition of THL was determined using previous methods of verifying the content of cellulose [31], lignin [32] and ash [33]. The C, N, S and H content was determined using a Euro EA 3000 Elemental Analyzer (EuroVector, Pavia, Italy). The results obtained for the chemical composition of the THL are presented in Table 1. After fractionation, only hydrolysis lignin with a fraction below 100 µm was taken for the experiment.

In addition, the THL used was also characterized by Fourier transform infrared (FTIR) spectroscopy, carried out using a Varian 600-IR (Palo Alto, CA, USA). For this purpose, 1 g of dry hydrolysis lignin was used. This study was conducted at the Laboratory of Thermomechanical and Thermophysical Analysis at the University of Chemical Technology and Metallurgy, Sofia, Bulgaria. The spectra were taken in the mid-infrared region of 400-4000 cm⁻¹ with a resolution of 4 cm⁻¹. A graphical representation of the FTIR results is shown in Figure 2. The binding functions of lignin are mainly the result of available O-H stretching (phenolic and aliphatic OH), C-H stretching (CH₃ and CH₂) and aromatic C-H in-plane deformation (S-ring). The comparative analysis of the absorption bands at 3391 cm⁻¹ and 1382 cm⁻¹ gives information about the relative ratio of phenolic to aliphatic OH [33]. This characteristic shows that in hydrolysis lignin, the most abundant hydroxyl group is phenolic OH. This observation is another reason for the better compatibility of lignin with PF resin compared to the other traditional formaldehyde-based synthetic resins used in the wood-based panel production industry, such as urea-formaldehyde or melamine-formaldehyde resin. The FTIR data with the main functional groups in the technical hydrolysis lignin are presented in Table 2.
Thermogravimetric analysis/differential thermogravimetry (TGA/DTG) was conducted using a STAPT1600 TG-DTA/DSC (STA Simultaneous Thermal Analysis) manufactured by LINSEIS Messgeräte GmbH, Selb, Germany (Figure 3). The parameters of this study were as follows: a temperature range of 20 ÷ 1000 °C; a heating rate of 10 °C/min; gas environment: static air; type of thermocouple: Type S (Pt10%/Pt-Rh); type of crucibles: stabilized corundum crucibles. The thermogravimetric analysis/differential thermogravimetry (TGA/DTG) curves of THL are presented in Figure 4.

Methods

The hot-pressing temperature was set to vary from 150 °C to 200 °C, in steps of 10 °C (Table 3). The panels had dimensions of 200 mm × 200 mm × 4 mm. After determining the required amounts of materials, THL and PF resin were brought to a concentration of 30% of the solution and suspension, respectively. Then, they were mixed and immediately injected into the pulp. A fast-rotating laboratory blender (a prototype, University of Forestry, Sofia, Bulgaria) at 850 rpm with needle-shaped paddles was used. The adhesive system was injected through a nozzle with a diameter of 1.5 mm at a pressure of 0.4 MPa. The entire gluing process lasted 1 min. The hot-pressing process was carried out on a laboratory press "Servitec-Polystat 200 T" (Servitec Maschinenservice GmbH, Wustermark, Germany). The hot-pressing temperature varied from 150 °C to 200 °C. The press factor applied was 2 min·mm⁻¹. A two-stage hot-pressing cycle with subsequent cooling was used. The first stage was at a pressure of 1.2 MPa and lasted 360 s. The second stage was at a pressure of 4.0 MPa and lasted 120 s. Cooling was carried out while maintaining high pressure (4.0 MPa) until a temperature below 100 °C was reached. In this case, the cooling time was 360 s. The laboratory-fabricated fiberboard panels were conditioned for two weeks at a room temperature of 20 ± 2 °C and a relative humidity of 65%.

The physical and mechanical characteristics of the fiberboard panels were determined in accordance with the applicable EN standards [35][36][37][38]. A universal testing machine, Zwick/Roell Z010 (ZwickRoell GmbH, Ulm, Germany), was used to determine the mechanical properties of the fiberboard panels.
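The press-cycle arithmetic above can be cross-checked directly; the following minimal sketch uses only the values stated in the Methods (the variable names are ours):

```python
# Two-stage hot-pressing cycle with subsequent cooling, as described above.
PRESS_FACTOR_S_PER_MM = 2 * 60   # press factor: 2 min per mm of panel thickness
THICKNESS_MM = 4.0               # panel thickness

stage1_s = 360                   # stage 1: 1.2 MPa
stage2_s = 120                   # stage 2: 4.0 MPa
cooling_s = 360                  # cooling under 4.0 MPa until below 100 degC

hot_time_s = PRESS_FACTOR_S_PER_MM * THICKNESS_MM
assert hot_time_s == stage1_s + stage2_s   # 480 s, consistent with both stages

print(f"hot pressing: {hot_time_s:.0f} s; full cycle incl. cooling: "
      f"{hot_time_s + cooling_s:.0f} s")
```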
Preliminary Results

The preliminary studies carried out in laboratory conditions to fabricate eco-friendly fiberboard panels, bonded with hydrolysis lignin as the primary binder and a low PF resin content of 3% (based on the dry fibers), demonstrated a significant effect of panel density on the mechanical properties, i.e., the modulus of elasticity (MOE) and bending strength (MOR) of the laboratory-made fiberboards (Figures 5 and 6).

The preliminary data on the effect of panel density on the main mechanical properties of the fiberboard panels clearly demonstrate that, without further modification, the target density of the boards should be increased to activate the lignin to act as a binder. Thus, at density values of about 800 kg·m⁻³, practically no action of the hydrolysis lignin as a binder was observed. The panels bonded with an adhesive system comprising 3% PF resin and 7% THL, and the panels bonded only with 3% PF resin, exhibited similar MOE and MOR values. Under these conditions, the distances between the fibers are such that they do not form stable bonds with the hydrolysis lignin. When the target density of the panels was increased to 870-880 kg·m⁻³, bonds between the lignin and the fibers began to form. Thus, at these densities, the fiberboard panels bonded with hydrolysis lignin had nearly 1.3 times higher MOE and MOR values compared to the panels bonded with PF resin alone. This tendency was even more clearly expressed when the target density of the panels was set to 900 ÷ 950 kg·m⁻³. Thus, at a density of 950 kg·m⁻³, the fiberboards fabricated with THL exhibited 1.7 times higher MOE and MOR values.

The study of Mancera et al. [39] also confirmed the positive effect of lignin at increased fiberboard density, where fiberboard panels bonded with various types of unmodified lignin had 1.5-1.8 times higher mechanical and 1.4-2.0 times better waterproof properties than the panels without lignin. However, this was obtained at a relatively high fiberboard density of 1358 to 1380 kg·m⁻³. The positive effect of lignin addition at a panel density of 900 kg·m⁻³ was also confirmed by the study of Westin et al.
[40], who reported that fiberboard panels fabricated with lignin had properties similar to those bonded with an 8% working solution of PF resin. Very good properties of fiberboard panels fabricated with different types of unmodified lignin were also reported by Tupciauskas et al. [41], but again at significantly increased density values of about 1300 ± 50 kg·m⁻³. Velásquez et al. [42] also reported similar data when manufacturing fiberboard panels with a density that varied from 1200 to 1300 kg·m⁻³ using Kraft lignin as a binder.

Therefore, in the absence of lignin modification and without prior cross-linking with PF resin, the target density of the panels should be at least 900 ÷ 950 kg·m⁻³ in order to achieve satisfactory physical and mechanical properties of the composites produced. A subsequent increase in density above 1000 kg·m⁻³ is undesirable, given the need to use higher pressure during hot-pressing and the greater consumption of raw materials. Therefore, for the purposes of this research, namely establishing the effect of hot-pressing temperature on the properties of fiberboard panels fabricated with hydrolysis lignin as a primary binder, it was determined that the panels would have a target density of 950 kg·m⁻³.
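For illustration (our own back-of-the-envelope calculation, not from the paper), the mat mass implied by this target density for the panel format used here is easy to estimate, neglecting moisture content, adhesive solids and pressing losses:

```python
# Oven-dry mat mass for one 200 mm x 200 mm x 4 mm panel at 950 kg/m^3.
TARGET_DENSITY_KG_M3 = 950.0
L_M, W_M, T_M = 0.200, 0.200, 0.004      # panel dimensions in metres

mass_kg = TARGET_DENSITY_KG_M3 * L_M * W_M * T_M
print(f"required mat mass: {mass_kg * 1000:.0f} g")   # -> 152 g
```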
Effect of Hot-Pressing Temperature

The results obtained for the density of the fabricated panels are presented in Figure 7. The density of the laboratory-produced panels varied from 939 kg·m⁻³ to 967 kg·m⁻³. That is, the difference between the maximum and minimum density values of the fiberboard panels was only 3.0%, well below the statistical error of 5%. The density of the manufactured boards was very close to the target value of 950 kg·m⁻³. The conducted ANOVA, the results of which are shown in Table 4, also confirmed that the effect of hot-pressing temperature on the fiberboard densities was not significant.

The absence of an influence of the hot-pressing temperature on the densities of the panels might be attributed to the selected method of hot-pressing, namely using metal bars to set the fiberboard thickness. The direct consequence of this is that the density of the panels is practically the same and will not reflect on the other physical and mechanical properties. Therefore, the variation in the other fiberboard properties could be attributed to the hot-pressing temperature.

Water absorption (WA) and thickness swelling (TS) are critical physical characteristics of wood-based panels, related to their dimensional stability, which provide important information on composite behavior in humid environments. The variation in the water absorption (WA) of fiberboard panels bonded with THL as the main binder as a function of hot-pressing temperature is presented in Figure 8. As the hot-pressing temperature increased from 150 °C to 200 °C, the WA of the panels decreased from 86.31% to 66.51%, representing a total 1.3-fold decrease. However, in practice, two significant declines (improvements) in this property were observed. The first considerable decrease in WA occurred when the hot-pressing temperature was increased from 160 °C to 170 °C, resulting in 1.11 times better WA values. The second significant improvement in WA was determined when the hot-pressing temperature was increased from 190 °C to 200 °C, resulting in a 1.12-fold reduction in WA values.
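For reference, the percentages quoted here follow the usual gravimetric and dimensional definitions (our summary of the standard test relations behind the cited EN methods [35][36][37][38], quoted as an assumption since the paper does not reproduce the formulas):

\[
WA = \frac{m_2 - m_1}{m_1}\cdot 100\,\%, \qquad TS = \frac{t_2 - t_1}{t_1}\cdot 100\,\%,
\]

where m_1 and t_1 are the mass and thickness of the specimen before immersion, and m_2 and t_2 are the corresponding values after 24 h of immersion in water.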
The determined WA values at hot-pressing temperatures of 150 °C and 160 °C were almost the same. The conducted t-test also confirmed this observation, as the p-value was 0.260. The panels fabricated at temperatures of 170 °C, 180 °C and 190 °C had similar WA values, which was again confirmed by the conducted t-tests. The corresponding p-values were 1.000 and 0.812, respectively.

The determined effect of hot-pressing temperature on the WA values of the fiberboard panels was also consistent with the study of Wang et al. [43], where binderless fiberboard panels were fabricated. The cited research found a significantly smaller improvement in WA of about 1.08-fold as the hot-pressing temperature increased from 160 °C to 190 °C, and a 1.08-fold improvement again upon increasing the temperature of hot-pressing to 200 °C. Satisfactory WA values of the panels fabricated with different types of technical lignins at pressing temperatures in the order of 200 °C were also confirmed by the studies of Mancera et al. [39], Tupciauskas et al. [41] and Westin et al. [40]. Very good WA values of fiberboard panels at an elevated hot-pressing temperature of 230 °C were also reported by Theng et al. [44]. In the cited study, panels made with Kraft lignin resulted in 1.5 to 2.0 times lower WA values than commercial fiberboards. The need to increase the temperature of hot-pressing to the order of 200 °C to improve the WA of fiberboard panels bonded with hydrolysis lignin can be explained by the improved plasticization of lignin at these temperatures [45].
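The pairwise significance testing reported throughout this section can be reproduced with a standard two-sample t-test; a minimal sketch follows (the replicate WA values below are made-up placeholders, not the measured data):

```python
# Two-sample t-test of WA between two hot-pressing temperatures.
from scipy import stats

wa_170C = [72.1, 74.8, 73.5, 75.0, 72.9]   # hypothetical replicate WA values, %
wa_180C = [73.0, 72.5, 74.1, 73.8, 74.4]

t_stat, p_value = stats.ttest_ind(wa_170C, wa_180C)

# A p-value above 0.05 indicates no statistically significant difference,
# as reported for the 170-180 degC pair in the text.
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```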
The results obtained for the TS (24 h) of the fiberboard panels bonded with THL and PF resin are presented in Figure 9.The increase in the hot-pressing temperature from 150 °C to 200 °C resulted in significantly decreased TS values ranging from 56.64% to 28.01%, i.e., an 1.99-fold improvement. The slightest improvement in TS values was determined when the pressing temperature was increased from 150 °C to 160 °C.In this case, the drop in TS was 1.08-fold.However, the improvement was statistically significant, which was confirmed by the conducted t-test (p-value of 0.055). Subsequently, as the hot-pressing temperature increased from 160 °C to 190 °C, the TS was observed to improve (decrease) 1.13-fold as the temperature increased from 160 °C to 170 °C, 1.12-fold as the temperature increased from 170 °C to 180 °C and 1.18-fold when the temperature increased from 180 °C to 190 °C, respectively.The most significant improvement in TS values of 1.23 times was observed when the hot-pressing temperature was increased from 180 °C to 190 °C.Despite the substantial improvement in the property, due to the bio-based nature of the main binder used (technical lignin), only the panels fabricated at temperatures of 190 °C and 200 °C met the standard requirement for fiberboard panels with general purpose and use in dry conditions-a TS of, at most, 35%-and only the fiberboard panels fabricated at a hot-pressing temperature of 200 °C fulfilled the standard requirement for use in humid conditions, i.e., 30% [46]. The positive effect of increased hot-pressing temperature on the dimensional stability of fiberboard panels fabricated without the involvement of formaldehyde-based binders was also reported by Wang et al. [43].In the cited study, increasing the hot-pressing The increase in the hot-pressing temperature from 150 • C to 200 • C resulted in significantly decreased TS values ranging from 56.64% to 28.01%, i.e., an 1.99-fold improvement. The slightest improvement in TS values was determined when the pressing temperature was increased from 150 • C to 160 • C. In this case, the drop in TS was 1.08-fold.However, the improvement was statistically significant, which was confirmed by the conducted t-test (p-value of 0.055). Subsequently, as the hot-pressing temperature increased from 160 • C to 190 • C, the TS was observed to improve (decrease) 1.13-fold as the temperature increased from 160 • C to 170 • C, 1.12-fold as the temperature increased from 170 • C to 180 • C and 1.18-fold when the temperature increased from 180 • C to 190 • C, respectively.The most significant improvement in TS values of 1.23 times was observed when the hot-pressing temperature was increased from 180 • C to 190 • C. Despite the substantial improvement in the property, due to the bio-based nature of the main binder used (technical lignin), only the panels fabricated at temperatures of 190 • C and 200 • C met the standard requirement for fiberboard panels with general purpose and use in dry conditions-a TS of, at most, 35%-and only the fiberboard panels fabricated at a hot-pressing temperature of 200 • C fulfilled the standard requirement for use in humid conditions, i.e., 30% [46]. The positive effect of increased hot-pressing temperature on the dimensional stability of fiberboard panels fabricated without the involvement of formaldehyde-based binders was also reported by Wang et al. 
[43]. In the cited study, increasing the hot-pressing temperature from 160 °C to 200 °C resulted in a 1.18-fold decrease (improvement) in TS values. Satisfactory TS values were also reported by Mancera et al. [39], Tupciauskas et al. [41] and Westin et al. [40]. Thus, the study of Westin et al., where the hot-pressing temperature was 190 °C, reported almost two times lower TS values for panels bonded with Kraft lignin compared to those made with PF resin. Tupciauskas et al. also reported satisfactory TS values of about 4% in fiberboard panels fabricated with lignin at a hot-pressing temperature of 235 °C. In the study by Mancera et al., using a hot-pressing temperature of 205 °C, panels bonded with THL exhibited about 1.5 times lower TS values than the control panels. In the study by Theng et al. [44], using a hot-pressing temperature of 230 °C, the fiberboards bonded with Kraft lignin had about two times better TS values than the control counterparts. Similar to the WA of the panels, the improvement in the TS of fiberboard panels made with technical lignin at increased hot-pressing temperatures can be explained by the plasticization of the lignin under these conditions (increased temperature and pressure).

The effect of hot-pressing temperature on the modulus of elasticity (MOE) of fiberboard panels fabricated with THL as a primary binder is presented in Figure 10. The fiberboards bonded with hydrolysis lignin as a primary binder were characterized by relatively high MOE values. The laboratory-fabricated fiberboard panels produced in this work exhibited MOE values varying from 3332 N·mm⁻² to 4337 N·mm⁻². These high MOE values can be explained by the significantly extended press factor of 2 min·mm⁻¹. The lowest MOE value was recorded at a hot-pressing temperature of 150 °C, and the highest modulus was obtained for the panels fabricated at a hot-pressing temperature of 200 °C, i.e., the difference between the MOE values of the panels fabricated at these two temperatures was 1.30-fold. However, it should be emphasized that the only significant difference in MOE occurred when the hot-pressing temperature was increased from 150 °C to 160 °C, resulting in a 1.22-fold increase in the property. The conducted t-tests showed that there was no statistically significant difference in the MOE of the panels with a subsequent increase in the hot-pressing temperature. Thus, the p-value at temperatures of 160 °C and 170 °C was 0.549. At hot-pressing temperatures of 170 °C and 180 °C, the p-value was 0.250; at 180-190 °C, the p-value was 0.410; and at temperatures of 190-200 °C, the p-value was 0.600. An explanation for this can be sought in the relatively extended press factor and the achievement of significant stiffness of the face layers of the panels even at a temperature of 160 °C.
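For reference, MOE and MOR in three-point bending are conventionally computed from the load–deflection curve as follows (our summary of the standard EN 310 relations, presumably among the cited EN methods [35][36][37][38]):

\[
MOR = \frac{3 F_{\max}\, l_1}{2 b t^2}, \qquad MOE = \frac{l_1^3 (F_2 - F_1)}{4 b t^3 (a_2 - a_1)},
\]

where l_1 is the span between the supports, b and t are the specimen width and thickness, F_max is the maximum load, and (F_2 − F_1)/(a_2 − a_1) is the slope of the linear region of the load–deflection curve.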
All fiberboard panels fulfilled the strictest standard requirements for MOE, i.e., for use in load-bearing structures in humid conditions: a MOE of at least 3000 N·mm⁻² [46].

Similar MOE values were reported in the study carried out by Wang et al. [43], who fabricated binderless fiberboard panels. In that research, an increase in the hot-pressing temperature from 160 °C to 200 °C resulted in a 1.66-fold improvement in MOE values. Very good MOE values of fiberboard panels fabricated with lignin at elevated hot-pressing temperatures were also reported by Mancera et al. [39], Tupciauskas et al. [41] and Westin et al. [40]. Thus, in the study by Mancera et al., the fiberboard panels made with hydrolysis lignin had 1.25 times higher MOE values than the control panels. Tupciauskas et al. reported MOE values between 5000 and 7000 N·mm⁻² for fiberboards with densities of 1300 kg·m⁻³ fabricated with lignin at a hot-pressing temperature of 235 °C. The study by Theng et al. [44] reported 1.2 times higher MOE values of fiberboard panels bonded with 9% lignin compared to commercial panels.
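The pairwise comparisons quoted above are two-sample t-tests between panels pressed at adjacent temperatures. Below is a minimal sketch of such a test; the per-specimen MOE values are hypothetical, since only the resulting p-values are reported in the paper:

```python
import numpy as np
from scipy import stats

# Hypothetical per-specimen MOE values (N/mm^2) for panels hot-pressed at
# two adjacent temperatures; the paper reports only the resulting p-values,
# so these arrays are illustrative, not the study's data.
moe_160 = np.array([4010.0, 4120.0, 3950.0, 4080.0, 4060.0])
moe_170 = np.array([4050.0, 4150.0, 3990.0, 4110.0, 4100.0])

t_stat, p_value = stats.ttest_ind(moe_160, moe_170)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 (as reported for all MOE comparisons beyond 160 C)
# means the difference between the two temperatures is not significant.
```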
The variation in the bending strength (MOR) of the panels with hydrolysis lignin as a primary binder depending on the hot-pressing temperature is given in Figure 11. The MOR data largely replicated the trends for the effect of hot-pressing temperature on the MOE of the panels. Overall, the fabricated fiberboard panels were characterized by a high MOR. Under the experimental conditions, this property changed from 31.17 N·mm⁻² to 40.81 N·mm⁻², i.e., an overall 1.31-fold improvement was determined. Again, a significant difference in MOR values was observed between panels fabricated at a hot-pressing temperature of 150 °C and those manufactured at a hot-pressing temperature of 160 °C; the difference between the MOR values at these two temperatures was 1.14-fold. Unlike the MOE, for the MOR a significant difference, i.e., a 1.08-fold improvement, was also observed when the hot-pressing temperature increased from 160 °C to 170 °C. An explanation here can be given by the fact that the MOE of the panels is affected only by the stiffness of the face layers, while the bending strength is partly influenced by the compressive resistance of the intermediate part of the panels (core layer).

As a result of the conducted research, it was established that for fiberboard panels fabricated with THL as a primary binder and using a press factor of 2 min·mm⁻¹, an increase in temperature above 170 °C during hot-pressing did not significantly affect MOR. The conducted t-tests confirmed this statement. Thus, the p-value for the MOR of panels fabricated at hot-pressing temperatures of 170 °C and 180 °C was 0.278; the corresponding values between the panels fabricated at hot-pressing temperatures of 180-190 °C and 190-200 °C were 0.975 and 0.490, respectively.

Except for the panel fabricated at a hot-pressing temperature of 150 °C, all other fiberboards met the strictest standard requirements for the MOR, namely for load-bearing applications and use in humid conditions: a MOR of at least 35 N·mm⁻² [46]. Even the panel fabricated at a hot-pressing temperature of 150 °C met the MOR requirement for load-bearing structures and use in dry conditions: a MOR of at least 29 N·mm⁻² [46].

The MOR data obtained in the present study are consistent with the findings reported by Wang et al.
[43] for the fabrication of binderless fiberboard panels. In that study, the MOR of the panels increased 1.46-fold as the hot-pressing temperature was increased from 160 °C to 200 °C. A real improvement in the MOR values was determined when the hot-pressing temperature was increased to 190 °C, followed by a negligible difference. The slight difference obtained in the present study might be attributed to the slightly extended press factor; hence, it can be concluded that in terms of MOR, it is not justified to increase the hot-pressing temperature above 180 °C. The determined trend of high MOR values of fiberboards fabricated at higher hot-pressing temperatures using lignin as a binder was also reported in previous research works [39][40][41]44].

The variation in the internal bond (IB) strength of the fiberboard panels bonded with THL and PF resin is presented in Figure 12. The IB refers to the bonding strength between fibers, which is of critical importance as it ensures that the fiberboards will not delaminate in post-processing. The internal bonding between wood fibers without the presence of synthetic adhesives is caused by the hydrogen bonds between the fibers, crosslinking between lignin and polysaccharides, and the condensation reaction of lignin [47][48][49][50]. Markedly, wood fibers with lignin-rich surfaces positively affect the mechanical properties of the composites due to the entanglement of the lignin caused by the hot-pressing temperature and pressure applied, and the enhancement of the formation of covalent bonds [51,52].
The IB values of the laboratory-made fiberboard panels varied from 0.67 to 0.86 N·mm⁻², and the overall improvement in this property as the hot-pressing temperature increased from 150 °C to 200 °C was 1.30-fold. Two significant improvements in IB strength were observed, namely when the hot-pressing temperature was increased from 150 °C to 160 °C (a 1.12-fold improvement) and when the hot-pressing temperature was increased from 180 °C to 190 °C, i.e., a 1.11-fold increase. The conducted t-tests also confirmed the non-significance of the other differences; the corresponding p-values were as follows: 160-170 °C: 0.812; 170-180 °C: 0.999; 190-200 °C: 0.261. This observation leads to the conclusion that for a significant improvement in the core layer of the panels, the hot-pressing temperature should be increased to 190 °C. Except for the panels fabricated at a hot-pressing temperature of 150 °C, all other fiberboards fulfilled the strictest standard requirements for IB strength, namely for load-bearing applications and use in humid conditions: an IB strength of at least 0.70 N·mm⁻² [46]. Even the panels fabricated at a hot-pressing temperature of 150 °C met the requirement for this property for general-purpose panels used in humid conditions: 0.65 N·mm⁻² [46].

The improvement in the mechanical properties of the fiberboard panels, i.e., MOE, MOR and IB strength, with increased hot-pressing temperature was consistent with previous research works using lignin for bonding fiberboards [39][40][41]43,52]. The improvement in IB strength with increasing hot-pressing temperature could be attributed to the plasticization of lignin in the core layer of the panels, where failure occurs [45].

Conclusions The present study confirmed that eco-friendly fiberboard panels with satisfactory water-related properties that meet the standard requirements, and with excellent mechanical properties, can be manufactured by using unmodified technical hydrolysis lignin and a low content of phenol-formaldehyde resin. As a result of the conducted preliminary experiments, a significant role of panel density in the activation of hydrolysis lignin as a binding substance was established. It was found that to utilize the adhesive abilities of lignin, the density of fiberboard panels should be about 900 kg·m⁻³.

The main novelty of this study is that we conducted research on the effect of hot-pressing temperature on the properties of fiberboard panels bonded with hydrolysis lignin and a small amount of phenol-formaldehyde resin. Regarding this effect, with a hot-pressing temperature range of 150-200 °C and a press factor of 2 min·mm⁻¹, it was found that the optimal values of the factor were quite different for the waterproof and the mechanical properties of the panels. Thus, to fabricate panels with very high mechanical properties, it is sufficient to use a hot-pressing temperature of 160 °C. Concerning the waterproof characteristics, especially the thickness swelling of the panels, the hot-pressing temperature should be at least 190 °C, and in order to produce panels suitable for use in humid conditions, the hot-pressing temperature should be 200 °C.
Overall, this study confirmed one of the main disadvantages of bio-based binders (in this case, hydrolysis lignin), namely the difficulty of fabricating materials with good waterproof properties. Given the very good mechanical properties of the panels fabricated at not very high hot-pressing temperatures, the promising possibility of obtaining a structural eco-friendly material with reduced heat costs during production has been outlined.

Figure 3. STA PT1600 TG-DTA/DSC apparatus. The parameters of this study were as follows: a temperature range of 20-1000 °C; a heating rate of 10 °C/min; gas environment: static air; type of thermocouple: Type S (Pt10%/Pt-Rh); type of crucibles: stabilized corundum crucibles.

Figure 4. The thermogravimetric analysis/differential thermogravimetry (TGA/DTG) curves of THL. The TG/DTG curves of THL showed three stages of decomposition in the temperature range 30-1000 °C. The first stage (30-90 °C) was characterized by mass loss due to the evaporation of physically adsorbed moisture; the peak temperature in this stage was 68.8 °C, with a mass loss of about 25%. The second stage (90-200 °C) had a new peak at a temperature of 133.1 °C, mainly due to the evaporation of chemically bound water in the lignin, resulting in a mass loss of about 10%. The third stage (200-500 °C) showed that the actual phase transition of lignin began at a temperature of 200 °C, and at 378 °C the lignin had already lost 80% of its mass. The organic elements of hydrolysis lignin burned completely at a temperature of 500 °C, and the inorganic residue was 8.45%. The glass transition of technical hydrolysis lignin occurred at 172.3 °C.

Figure 5. Modulus of elasticity (MOE) of fiberboard panels bonded with hydrolysis lignin as the main binder. Red dots represent the MOE values of the panels bonded with 7% THL and 3% PF resin. Orange dots represent the MOE values of the panels bonded only with 3% PF resin.

Figure 6. Bending strength (MOR) of fiberboard panels bonded with hydrolysis lignin as the main binder. Red dots represent the MOR values of the panels bonded with 7% THL and 3% PF resin. Orange dots represent the MOR values of the panels bonded only with 3% PF resin.

Figure 7. Density of fiberboard panels bonded with THL and PF resin.

Figure 8. Water absorption (24 h) of fiberboard panels bonded with THL and PF resin.

Figure 10. Modulus of elasticity (MOE) of fiberboard panels bonded with THL and PF resin.

Figure 11. Bending strength (MOR) of fiberboard panels bonded with THL and PF resin.
Figure 12. Internal bond (IB) strength of fiberboard panels bonded with THL and PF resin.

Table 2. Functional groups in THL used in this work.

Table 4. ANOVA for the effect of hot-pressing temperature on the density of the panels.
2024-04-13T15:18:09.946Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "83ee9fdd82d636a0098044dec38655b4f6520de8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/16/8/1059/pdf?version=1712829983", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e43043f123fb847cd9d9975641d520791448dffd", "s2fieldsofstudy": [ "Environmental Science", "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
15096002
pes2o/s2orc
v3-fos-license
Producing the {\it a priori} pure entangled states by type II parametric downconversion We propose a scheme to produce pure entangled states through type II downconversion. In the scheme, the vacuum states are excluded and the a priori pure entangled states are produced and verified without destroying the state itself. This can help to carry out the many unconditional experiments related to quantum entanglement. Quantum entangled states play a fundamental role in testing the quantum laws related to nonlocality and in many applications in quantum information, such as quantum teleportation [1], quantum key distribution [2] and quantum computation [3]. Many applications of the quantum entanglement properties, e.g., unconditional quantum teleportation [4,5], require pre-shared entangled states. So far, parametric downconversion [6] seems to be the main way to yield the polarized entangled state. However, in the downconversion process, most of the time it actually yields the vacuum; there is only a small probability of yielding the anti-symmetric state defined above. Due to this fact, many experiments so far have been carried out only by postselection [9,10]. The best way to overcome this drawback is to first produce many copies of any one of the a priori known pure Bell states, as in the following, without a destroying measurement, and then use the pure entangled states to carry out the quantum task. Here we show that we can in principle obtain the a priori pure entangled states by appropriately exploiting the well-known properties of polarizing beam splitters. Let us first consider the well-known properties of a polarizing beam splitter (PBS). As shown in Fig. 1, a PBS reflects vertically polarized photons and transmits horizontally polarized photons. As shown in [7,8], this property can be applied to make an incomplete Bell measurement. Consider the case in Fig. 2. Suppose there is one incident photon on each side of the PBS, with their polarizations totally unknown in principle. If we find a photon in the outcome on each side of the PBS, then the incident photons must have been both reflected or both transmitted; that is to say, they must have the same polarization. Therefore, once we find a photon on each side of the PBS, the incident beams must have collapsed to the state |Φ+⟩ or |Φ−⟩. To distinguish |Φ+⟩ from |Φ−⟩, we may first apply a Hadamard transformation to each of the outcome beams, then let them pass additional polarizing beam splitters and finally make a detection using four photon detectors (see Fig. 3). The Hadamard transformation is |H⟩ → (|H⟩ + |V⟩)/√2, |V⟩ → (|H⟩ − |V⟩)/√2. After this transformation, the states |Φ+⟩ and |Φ−⟩ can be distinguished by observing which one of D1 and D2 is fired and which one of D3 and D4 is fired (see Fig. 3). Now we show how to yield the a priori pure entangled states by these properties of the PBS. We propose the scheme in Fig. 4, which is actually a modified scheme of ref. [10]. However, in the experiment of that paper, although entanglement swapping is verified, no a priori entangled pairs can be produced, and the entanglement swapping is verified only by a measurement which destroys the swapped entangled states. In the setup in Fig. 4, one simply observes the coincidences that satisfy all the following conditions: 1. One and only one detector from D1 and D2 is fired; 2. One and only one detector from D3 and D4 is fired. All events that do not satisfy any one of these conditions are excluded.
In this way, if D1 and where |nH, mV⟩i indicates a state including n horizontally polarized photons and m vertically polarized photons in beam i. In our coincidence we require that one detector from D1, D2 and one from D3, D4 are fired. For the terms |2H⟩ or |2V⟩, the photons are always on the same side of PBS1 and PBS0; therefore these terms are ruled out by the requirements of our conditions for the coincidence. For the term |1H1V⟩, after it reaches PBS1, the photons will be distributed on different sides of PBS1. However, they will finally be on the same side of PBS2, because a PBS transmits the state |H⟩ and reflects the state |V⟩. Therefore, whenever the event of two pairs on the same side of the crystal happens, the photons in the upper beam will finally be on the same side of PBS0; therefore either both D3 and D4 will be silent or both D1 and D2 will be silent. These unwanted events are totally excluded by our definition of the coincidence. So far we have obtained a scheme to produce pure entangled states. This scheme itself can be regarded as a demonstration of the non-post-selection of quantum swapping. Obviously, this scheme can also be used to demonstrate many other nontrivial unconditional quantum tasks, such as unconditional quantum teleportation [9], since this scheme can offer us a priori pure entangled states. This scheme has a very high feasibility in practice. In the setup, even photon detectors with very low efficiency can work very well, which differs totally from the cascaded method [5]. Also, our scheme is obviously only a slight modification of the existing quantum swapping experiment [10]. I believe it can be carried out easily with existing technology.
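For concreteness, the PBS-plus-Hadamard discrimination described above can be checked numerically. The following is a minimal sketch assuming ideal lossless optics; the assignment of D1/D3 to the horizontal outputs and D2/D4 to the vertical outputs after the final beam splitters is our illustrative labeling, not fixed by the paper:

```python
import numpy as np

# Single-photon polarization basis: |H> = [1, 0], |V> = [0, 1].
ket_H = np.array([1.0, 0.0])
ket_V = np.array([0.0, 1.0])

# Hadamard transformation on polarization:
# |H> -> (|H> + |V>)/sqrt(2), |V> -> (|H> - |V>)/sqrt(2).
HAD = np.array([[1.0, 1.0],
                [1.0, -1.0]]) / np.sqrt(2.0)

# Two-photon Bell states |Phi+-> = (|HH> +- |VV>)/sqrt(2).
phi_plus = (np.kron(ket_H, ket_H) + np.kron(ket_V, ket_V)) / np.sqrt(2.0)
phi_minus = (np.kron(ket_H, ket_H) - np.kron(ket_V, ket_V)) / np.sqrt(2.0)

def coincidence_probabilities(state):
    """Apply a Hadamard to each outgoing beam and return the probabilities
    of the four two-detector coincidences (D1/D2 on one side, D3/D4 on the
    other, with the H output mapped to D1 and D3)."""
    rotated = np.kron(HAD, HAD) @ state
    amps = rotated.reshape(2, 2)  # amps[a, b]: photon 1 outcome a, photon 2 outcome b
    labels = {(0, 0): "D1&D3", (0, 1): "D1&D4",
              (1, 0): "D2&D3", (1, 1): "D2&D4"}
    return {labels[a, b]: abs(amps[a, b]) ** 2
            for a in range(2) for b in range(2)}

for name, state in [("Phi+", phi_plus), ("Phi-", phi_minus)]:
    print(name, coincidence_probabilities(state))
```

Running it shows that |Φ+⟩ fires only the correlated pairs (D1 & D3 or D2 & D4), while |Φ−⟩ fires only the anti-correlated pairs, which is exactly the signature used above to distinguish the two states.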
2014-10-01T00:00:00.000Z
2002-06-07T00:00:00.000
{ "year": 2002, "sha1": "8cbacbc6163bab32dc8dace4f60d298ecea1b600", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "13220914b403a01fccb0d5335dc03cdb2a0e2049", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
109545375
pes2o/s2orc
v3-fos-license
Histopathological Aspects of Gastritis Patients on Gastric Mucosa: Mini-Review of Literature Gastritis is an inflammatory condition of the gastric mucosa that has several classifications and causes. The persistence of symptoms of the acute state can lead to the atrophic development of the disease, increasing the tissue injury and, consequently, the risk of development of gastric cancer. The diagnosis is made with clinical and endoscopic information as well as histopathological analysis of samples obtained from biopsy. The purpose of this review is to demonstrate the morphological aspects of the gastric mucosa and the gastric abnormalities found in the histopathological diagnosis of gastritis.

INTRODUCTION Gastritis is considered a temporary or chronic inflammatory condition of the stomach mucosa, which has several classifications depending on its etiology and which causes high rates of morbidity in the population [1]. In the literature, the main causes described for its development are related to stress, unhealthy diet, excessive consumption of alcoholic beverages, prolonged use of medications (anti-inflammatories and antibiotics) and, mainly, Helicobacter pylori infection [2,3]. The determination of the acute or chronic state of the disease is based on the evaluation of the type of inflammatory infiltrate: the acute state is related to the presence of neutrophils in the mucosa, whereas the chronic state is related to a predominance of macrophages, lymphocytes and plasma cells [4]. Among the various forms that the disease may exhibit, chronic atrophic gastritis corresponds to the persistence of acute-phase symptoms and can be classified into several stages [5]. The initial stage consists of slight involvement of the most superficial layer of the internal part of the organ, which can evolve to deep lesions of the mucosa with loss of glandular structures, as well as advance to the most serious stage of the disease, which includes the total destruction of these structures, ulcer formation and an increased risk of gastric cancer [5,6,7]. The diagnosis can be made based on the clinical evaluation of the patient, serological tests, endoscopic examination and the histopathological evaluation of the gastric tissue, which is of great relevance in differentiating the atrophic and non-atrophic forms of the disease [8]. Thus, the aim of this review is to report the histological aspects found in the mucosa of the stomach, as well as the possible alterations found in patients diagnosed with gastritis, in addition to assessing some diagnostic methods for this disease.

MATERIALS AND METHODS The literature review was carried out through the analysis of scientific articles available in the SciELO, MedLine, PubMed and Science Direct databases. The following descriptors were used for the research: gastritis, gastric mucosa, morphological alterations, histopathological evaluation and diagnosis. The articles found in the research were analyzed according to the following inclusion criteria: (1) studies that presented relevant information on the subject; (2) publications until November 2018 with a detailed description of the histopathological evaluation of gastritis; (3) articles indexed in Portuguese, Spanish and English. In the end, as shown in Figure 1, twenty-one articles were selected, which were read in their entirety, and information was extracted to fill this review.
THE GASTRIC MUCOSA The gastric mucosa is composed of a layer of superficial epithelial cells strongly connected by intercellular junctions, such as gap junctions, in addition to a portion of lamina propria which is highly vascularized and innervated [9]. The gastric epithelium has glandular cells that secrete substances and hormones which are fundamental in the digestion process and necessary for the defense mechanism of the mucosa against aggressive agents [10]. Due to the arrangement of these cells in the epithelium, the gastric mucosa can be divided into three regions: cardiac, oxyntic and pyloric. In the cardiac region, the mucus-secreting cells are concentrated. In the oxyntic region, there are parietal (hydrochloric acid secreting) and peptic (pepsinogen secreting) cells, as well as endocrine cells such as somatostatin-secreting cells and enterochromaffin (histamine-secreting) cells. Finally, in the pyloric region, peptic cells, D cells and G cells (gastrin producing) can be found [11,12].

HISTOPATHOLOGICAL ALTERATIONS FOUND IN GASTRITIS The presence of an inflammatory infiltrate in the lamina propria, whether of mononuclear or polymorphic cells, is the main finding of the non-atrophic state. Neutrophilic infiltration can indicate intense tissue damage to the mucosa. The presence of focal or dispersed lymphocytes and granulocytes in the glandular epithelium may be indicative of chronic gastritis; these cells can also be found intraglandularly, forming nodules, characterizing a primary stage of gastric lymphoma [13]. The persistence of the inflammatory infiltrate may lead to the progression of the disease to the atrophic state. At this stage, analysis of the biopsy specimen reveals extensive loss of glandular epithelium, which can progress to dysplasia, epithelial tissue metaplasia, lamina propria fibrosis or even adenocarcinoma [14,15]. In special forms of gastritis, such as those caused by alcohol abuse and anti-inflammatory drugs, the formation of ulcers and edema in the mucosa can be seen readily in the endoscopic examination, while in histology the loss of epithelial cells due to intense inflammatory infiltration, and bleeding foci due to loss of epithelium, can be observed [16] (Table 1).

DIAGNOSIS AND HISTOPATHOLOGICAL EVALUATION The diagnosis of gastritis begins with the evaluation of the clinical data of the patient (symptoms, age and family history) and with the endoscopic examination. The confirmation and classification of the gastritis are given by the histopathological evaluation of the tissue sample removed during the endoscopic examination [4]. Although there is no universal classification system for gastritis, some evaluation systems can be found in the literature, such as the Sydney system [17] and the Operative Link for Gastritis Assessment (OLGA) system [18]. The analyses performed by these systems are restricted to etiological, topographic and morphological biopsy data and histological findings.
Thus, according to the findings, the classification of the disease is directed [19]. Created in 1990, the Sydney system standardized the language to be adopted by pathologists in relation to the inflammatory alterations found and described in gastric biopsies. This system consisted in the evaluation of five histological variables (chronic inflammation, neutrophilic activity, glandular atrophy, intestinal metaplasia and presence of H. pylori) from biopsies taken from the antrum and the body of the stomach. For each parameter to be evaluated, presence or absence was described and, if present in the tissue, the finding was classified in levels (mild, moderate or marked) [17,20]. The quantity and standardization of biopsy sites, as well as some nomenclature adopted by the system, generated some challenges among clinical pathologists, resulting in the reformulation of the system and the subsequent creation of the OLGA system. In 1996, the revised system added a biopsy of the angular notch region to the evaluation, in addition to the other regions defined in the old Sydney system, in view of endoscopic reports describing a high degree of mucosal atrophy and intestinal metaplasia, as well as the presence of neoplastic lesions. In addition, the revised system established that, when chronic gastritis is diagnosed, the analyzed variables should be correlated with the region of predominance (body or antrum), and it should be stated whether the atrophy and metaplasia present were diffuse or multifocal [14]. The OLGA system later appeared as a new proposal for the evaluation of gastritis, in which the analysis consists of the observation of the extent of gastric atrophy, the hallmark of the advanced stage of the disease, combined with the lesion sites evaluated. Atrophy is the main parameter evaluated in all biopsy regions, assessed from the total mucosal thickness. Other secondary parameters, such as glandular atrophy (antrum and body regions) and glandular shrinkage (lamina propria fibrosis and intestinal metaplasia in the region of the angular notch), are also analyzed, and for each finding a score value is assigned. In the evaluation of glandular atrophy, each sample is evaluated according to the percentage of glandular loss. In both evaluations a score is determined for each analyzed region, being: (0) when there is 0% atrophy; (1) when there is 1-30% atrophy (mild); (2) when there is 31-60% atrophy (moderate); (3) when there is >60% atrophy (severe). From this separate evaluation of scores, a general value of atrophy is obtained, which leads to a determined stage of gastritis [18,21].
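The per-region atrophy scoring just described is a simple rule-based mapping. Below is a minimal sketch in Python; the function name and example values are illustrative, and the mapping from combined regional scores to an overall OLGA stage follows a staging table not reproduced in this review:

```python
def atrophy_score(percent_glandular_loss: float) -> int:
    """Map the percentage of glandular loss in a biopsy region to the
    per-region atrophy score described in the text:
    0 -> no atrophy (0%), 1 -> mild (1-30%),
    2 -> moderate (31-60%), 3 -> severe (>60%)."""
    if not 0 <= percent_glandular_loss <= 100:
        raise ValueError("percentage must lie between 0 and 100")
    if percent_glandular_loss == 0:
        return 0
    if percent_glandular_loss <= 30:
        return 1
    if percent_glandular_loss <= 60:
        return 2
    return 3

# Example: three biopsy regions with 10%, 40% and 70% glandular loss.
print([atrophy_score(p) for p in (10, 40, 70)])  # -> [1, 2, 3]
```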
The histopathological report should contain essential information that allows the identification of the sample used in the biopsy, such as the quantity and the gastric sites from which the sample was obtained, according to the endoscopic identification. In addition, clinical information of the patient, such as history or current treatment, should be reported along with the endoscopic findings, if any. The description of the evaluation should mention all the findings, correlate the regions analyzed with the lesions found, and provide semi-quantitative values for the following findings: mononuclear infiltrate, polymorphic infiltrate, glandular atrophy and H. pylori (present in foci or absent). In the end, the possible etiology of the disease, based on the manifestations, and the stage of gastritis, based on the OLGA system, should be evaluated [21].

CONCLUSION Knowledge about the alterations found in endoscopy and in the histopathological analysis of biopsy samples, such as edema, ulcers, intense inflammatory infiltration and loss of epithelial cells, can be indicated as an effective strategy in the diagnosis and prognosis of the patient, helping to prevent the progression of the disease and reducing the risk of developing cancer in the gastric tissue.
2019-04-12T13:29:45.735Z
2019-02-21T00:00:00.000
{ "year": 2019, "sha1": "044e1bd4e279822f418ca2dd5a8c5892c55284d7", "oa_license": "CCBYNC", "oa_url": "http://www.ghrnet.org/index.php/joghr/article/download/2476/2819", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "29b1c798d1f347bf65a15461dbb9c8628be5ae6a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119193765
pes2o/s2orc
v3-fos-license
Slab Bag Fermionic Casimir effect, Chiral Boundaries and Vector Boson - Majorana Fermion Pistons In this article we consider the Casimir energy and force of massless Majorana fermions and vector bosons between parallel plates. The vector bosons satisfy perfect electric conductor boundary conditions, while the Majorana fermions satisfy bag and chiral boundary conditions. We consider various piston configurations containing one vector boson and one fermion. We present a new regularization mechanism that the piston offers: in our case regularization occurs explicitly at the level of the Casimir energy density, and not at the level of the Casimir force as in usual pistons. We make use of boundary-broken supersymmetry to explain the number of fields that appear in all the studied cases. The effect of chiral boundary conditions in a fermion-boson system is investigated. Concerning the supersymmetry issue and the vanishing Casimir energy, we study a massive Dirac fermion-scalar boson system with bag and specific Robin type boundary conditions for which the Casimir energy vanishes. Finally, a two-scalar-boson system between parallel plates is presented in which the singularities in the total Casimir energy vanish. A discussion on boundary-broken supersymmetry follows.

Introduction One of the most interesting and most studied phenomena in Quantum Field Theory is the Casimir effect. It is a quantitative manifestation of the quantum fluctuations of a quantum field. It originates from the "confinement" of a field in a finite volume, and many studies have been devoted to this phenomenon since H. Casimir's original work [1]. The Casimir energy is closely related to the boundary conditions of the fields under consideration, which modify the nature of the so-called Casimir force generated by the vacuum energy. Casimir calculated the electromagnetic force between two parallel conducting plates, which he found to be attractive. A repulsive Casimir force, in the case of a conducting sphere, was calculated by Boyer [2] some time later. Of course, the attractive or repulsive nature of the Casimir force has many applications in nanotubes and nanotechnology, since a collapsing force can lead to the destruction of such a configuration; therefore, the stabilization of such a system is highly important. The study was generalized to include other (apart from electromagnetic) quantum fields such as fermions, bosons and other scalar fields (see for example [3,4] and references therein). The boundary conditions modify the Casimir force for all the quantum field cases. The most used ones are Dirichlet and Neumann boundary conditions on the plates; however, this is not the case for fermion quantum fields. Dirichlet and Neumann boundary conditions have no direct generalization in the case of fermion fields and, in general, for fields with spin ≠ 0 [5]. In that case the bag boundary conditions are used. These boundary conditions, in the case of fermion fields, were introduced to provide a solution to confinement [6]. In this paper we shall extensively use the bag boundary conditions (and their modified form known as the chiral bag boundary conditions) for a Majorana fermion field confined between two parallel plates and in piston configurations with the aforementioned boundary conditions. The extension of the two slabs to pistons is motivated mainly by the regularization that the Casimir piston offers.
As is well known, the Casimir energy contains singularities that must be regularized in order to obtain a finite result. There are two ways to compute the Casimir energy: the cutoff method and the zeta function regularization method. In the former case the singularities are regularized by introducing suitable counter-terms that cancel them. The Casimir piston configuration offers a very elegant way of cancelling these singularities. This configuration was originally used [8] as a single rectangular box with three parallel plates, where the middle one is free to move. The dimensions of the piston are (L − a) × b and a × b, with the moving plate located at a. In [8] the Casimir energy and Casimir force for a scalar field were calculated for a piston; the boundary conditions on the "plates" were the Dirichlet ones. The literature on the subject is quite extensive, with [9,10,11,12,13,14,15,16,17,18,19,20,21] calculating the Casimir force for various configurations of the boundary conditions of the scalar field, in both the massive and the massless case. The regularization the Casimir piston provides is very useful. Indeed, when one calculates the Casimir energy between parallel plates one confronts, as we mentioned, infinities that must be regularized. The regularization of the Casimir energy in the parallel plate geometry can be performed if we calculate it as a sum over discrete modes (due to the boundary conditions on the plates) minus the continuum contribution (plate distance sent to infinity) [3,4]. The discrete sum consists of three terms: a volume divergent one (which is cancelled by the continuum integral), a surface divergent one and a finite term. This can easily be seen if the calculation of the Casimir energy is performed with the introduction of a UV cutoff. Before the use of the piston, the surface divergent term was thrown out "by hand"; a completely unjustified action, since that term cannot be removed by renormalizing the physical parameters of the theory [22]. On the other hand, the zeta function regularization technique renormalizes this term to zero. Thus the cutoff technique and the zeta regularization technique agree perfectly. The Casimir piston solves this problem in a very elegant way, because the surface terms of the two piston chambers cancel each other and thus the Casimir force can be calculated in a consistent way [9,10,11,12,13,14,15,16,17,18,19,20,21]. In this article we shall impose bag boundary conditions for Majorana fermions between two slabs and construct, in some cases, a piston of such slabs (see (1)). Since the bag boundary conditions confine the fermion field in one of the two piston chambers, it is supposed that every piston chamber contains a different fermion flavor, so that no theoretical inconsistency occurs. We should mention that the piston with bag boundary condition fermions is in one dimension (while the other dimensions are infinite), since the confinement of a half-integer spin field can be implemented only in this case (the presence of corners prevents solutions to the massless Dirac equation; see [23] and references therein). The massive fermion field is discussed in [7,24]. The slab fermion Casimir effect for a piston shall also be studied. It is always assumed that fermions appearing in different chambers have different flavors, for consistent slab boundary conditions.
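To make the structure of the cutoff expansion discussed above concrete, here is a minimal numerical sketch in a 1+1-dimensional toy setting (not the 3+1-dimensional computation of this paper), using Dirichlet modes k_n = nπ/L, where the regulated zero-point sum splits into a divergent piece plus the finite Casimir term −π/(24L):

```python
import numpy as np

def regulated_sum(L, tau, n_max=200_000):
    """Regulated zero-point sum E(L, tau) = (1/2) * sum_n k_n * exp(-tau*k_n)
    for 1+1-dimensional Dirichlet modes k_n = n*pi/L. Its small-tau expansion
    is E = L/(2*pi*tau**2) - pi/(24*L) + O(tau**2): a divergent "volume" term
    plus the finite Casimir term."""
    n = np.arange(1, n_max + 1)
    k = n * np.pi / L
    return 0.5 * np.sum(k * np.exp(-tau * k))

L = 1.0
for tau in (0.1, 0.05, 0.02):
    divergent = L / (2.0 * np.pi * tau**2)
    print(tau, regulated_sum(L, tau) - divergent)  # -> -pi/24 ~ -0.1309
```

Here the divergent piece is subtracted by hand; in the piston geometry the divergent pieces of the two chambers add up to an a-independent constant, so they drop out of the force without any such subtraction.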
The fermion piston has nothing new to offer as far as the regularization method is concerned, but we shall compute the Casimir force in order to check the validity of the rules that hold in the boson piston case. It is interesting that the fermionic Casimir force between plates turns out to be attractive, contrary to what one might expect. After the simple Majorana fermion piston, we shall examine the Majorana-fermion and vector-boson chamber and piston, and the connection with supersymmetry shall be examined. In the case of the Majorana-fermion and vector-boson piston (in the following, "fermion" refers to a Majorana fermion and "boson" to a vector boson unless stated differently) we shall study three different cases, namely: fermion in one chamber (the a-chamber) and boson in the other chamber (the (L − a)-chamber); boson in the a-chamber and fermion in the (L − a)-chamber; and finally boson-fermion in the a-chamber and boson-fermion in the (L − a)-chamber (with different fermion flavors). The motivation to use a piston with one chamber filled with a fermion and the other with a boson is even stronger. It comes from the fact that the Casimir energy density is regularized, and when the parameter L is sent to infinity, the Casimir energy yields the vector boson or Majorana fermion Casimir energy, of course without a singularity. Thus this could be another regularization use of the piston, with the new feature of using Majorana fermions and vector bosons. This could be particularly interesting since we can use a fermion just to regularize the vector boson field (photon) Casimir energy and then forget it. The new result within these considerations is that the Casimir energy density (and therefore the Casimir energy) is regularized, and not just the force. The motivation to study the fermion-boson chamber is the automatic cancellation of the singularities when the numbers of fermions and bosons follow the vector superfield rules dictated by supersymmetry. Of course, supersymmetry is broken due to the boundary conditions, but a remnant of supersymmetry can be really useful. We shall extend the boson-fermion chambers and pistons to the case where the fermion obeys chiral boundary conditions on the plates. In that case the minimization of the energy as a function of the chiral parameter φ is examined; the sign of the Casimir force as a function of φ is also studied. Our next step will be to introduce two real scalar bosons, obeying Robin boundary conditions, and a massive Dirac fermion obeying bag boundary conditions. Although the numbers of the fields are determined by the supersymmetric chiral superfield rules, supersymmetry is broken due to the boundary conditions. In that case the total Casimir energy can be zero in some cases. Finally, we shall examine the massive scalar boson Casimir effect between parallel plates in the case where mixed boundary conditions are imposed. The connection of this case with the rest of the present study is that the singularities cancel as well. Indeed, if one uses a boson satisfying Dirichlet-Neumann boundary conditions on the plates together with a scalar boson satisfying Dirichlet-Dirichlet boundary conditions, the singularities of the Casimir energies of the two cases cancel each other. Finally, we present the conclusions and discuss some applications of the above study.

Majorana Fermions in a Slab and Bag Boundary Conditions Consider Majorana fermion fields in 3 space dimensions, with one dimension rendered finite by two parallel plates at x = 0 and x = L.
We shall also assume that fermions are not allowed to exist outside the parallel plate system. This is enforced by the MIT bag boundary conditions, which can be expressed as $-i\,n_\mu \gamma^\mu \psi = \psi$ on the plates or, in the Lorentz covariant form, $n_\mu \bar{\psi} \gamma^\mu \psi = 0$, where $n^\mu = (0, \mathbf{n})$ and $\mathbf{n}$ is the vector normal to the surface of the plates, directed towards the interior of the slab configuration. The above two equations show that there is no fermion current flowing out of the parallel plates. In most of the calculations performed in this article we shall use the cutoff method [3,7] in order to compute the Casimir energy, since we want to see explicitly the singularities that the Casimir energy contains. The eigenvalues of the massless Dirac equation obeying bag boundary conditions are $\omega_n = \sqrt{k_T^2 + (n+\tfrac{1}{2})^2 \pi^2 / L^2}$, where $k_T$ refers to the transverse components of the momentum. Using the cutoff method, the Casimir energy density per unit area reads as in [7], where $E_f$ is the total Casimir energy per unit area between the plates and we have introduced an exponential regulator $e^{-\tau\omega}$ in terms of the cutoff τ. The factor 2 for the Majorana fermion comes from the spin degrees of freedom in four spacetime dimensions. Carrying out the integration and taking the limit τ → 0, we can see explicitly the singularity in terms of the cutoff τ. Note that the series over τ contains only even powers of τ; this is a characteristic of fermions and vector bosons. Consistent boundary conditions for massless vector bosons are the so-called perfect conductor boundary conditions on the plates. Working in the same way as in the fermion case, the Casimir energy density per unit area of a vector field satisfying perfect conductor boundary conditions on the plates involves the eigenvalues $\omega_{nb} = \sqrt{k_T^2 + \pi^2 n^2 / L^2}$. Upon integration, in the limit τ → 0 we obtain the corresponding expression, where $E_b$ is the total Casimir energy per unit area of the boson between the plates. We shall make extensive use of these well-known expressions in the rest of the paper (for a similar analysis see [25]). Note again that the series over τ contains even powers only, just as in the fermion case. For completeness we present briefly the scalar boson case: after integrating the scalar boson Casimir energy density and taking the limit τ → 0, one finds that, contrary to the fermion and vector boson cases, the scalar boson energy density contains both even and odd powers of τ.

Different Flavor Fermionic Casimir Piston Let us see how the piston configuration mentioned in the introduction helps in the cancellation of the singularities. Consider a piston with two chambers constructed by three parallel plates at x = 0, x = a and x = L. The total Casimir energy per unit area of a Majorana fermion is the sum of the contributions of the two chambers, each given by relation (7). Taking the derivative of the total Casimir energy per unit area we obtain the Casimir force per unit area, relation (18), in which it is clear that the total Casimir force per unit area is finite. Relation (18) is the Casimir force on a piston plate in the case where the chambers are filled with different flavors of fermions (so that the bag conditions are consistently fulfilled). Vacuum fluctuations of massless fermions between two parallel and confining plates give rise to an attractive Casimir force. Thus both vector bosons and fermions lead to a negative Casimir force, contrary to what would be expected from fermions due to Fermi-Dirac statistics. The Casimir force behaves exactly as in the bosonic case.
In detail, the force is attractive when a is small and repulsive when L − a is small (see also [9,10,11,12,13,14,15,16,17,18,19,20]). Finally, it is obvious that in the limit L → ∞ we obtain the usual Majorana fermionic Casimir force between plates (see [23,24]).

Majorana Fermion-Vector Boson Chambers and Piston Configurations Consider massless, non-interacting vector bosons and Majorana fermions, described by the Lagrangian (19). We shall assume that the vector boson field and the Majorana fermion co-exist between two parallel plates located at x = 0 and x = L. Thus we construct a chamber filled with bosons and fermions in a "supersymmetric" way. Indeed, Lagrangian (19) looks like the N = 1, d = 4 vector supersymmetric Lagrangian describing photons and their supersymmetric partner, the photino. We should clarify at this point that supersymmetry is broken when the bag boundary conditions for fermions and the perfect conductor boundary conditions for bosons are used on the plates. Thus in this physical system supersymmetry is explicitly broken. A similar effect happens when physical systems are studied at finite temperature, where supersymmetry breaks due to the different boundary conditions applied to bosons and fermions. However, in the slab case there exists a remnant of supersymmetry. In fact, we shall use the above Lagrangian in order to determine formally the appropriate number of fermion and boson fields we should use for the singularity cancellation.

Massless Vector Boson-Majorana Fermion Chamber Consider one vector boson and one Majorana fermion field described by the Lagrangian (19). We assume that the fields are confined between two parallel plates on which the fermions satisfy bag boundary conditions and the bosons satisfy perfect conductor ones, that is, bag conditions for the fermions and $n^\mu F_{\mu\nu} = 0$ for the vector bosons. The total Casimir energy per unit area of the slab system then follows (Casimir slab chamber). It is obvious that the Casimir energy per unit area for the plate chamber is singularity free, since the singularity of the fermion energy is canceled by the corresponding ones of the two bosonic degrees of freedom. This is a very valuable result, and it is due to the supersymmetry remnant that the system possesses. The total Casimir force per unit area between the plates is, as we can see, attractive. Thus, when we consider the fermion-boson chamber, the force between the plates is even more attractive compared to the single boson (fermion) case. It is not difficult to extend the chamber to a piston configuration, just to see how the force behaves. Of course each chamber is singularity free. Thus, adding the contributions from the two chambers, a and L − a, we obtain the total Casimir energy (it is assumed that each chamber contains a different fermion flavor) and the total Casimir force per unit area. Notice that in the limit L → ∞ relation (25) becomes identical to relation (23). Thus we recover the initial system, as is expected in every piston configuration.

Massless Boson-Fermion Piston: A Regularization Method for the Casimir Energy Density per Unit Area Using the Piston As we mentioned in the introduction, the most theoretically attractive feature of the piston is that it renders the Casimir force of a bosonic (and fermionic) system finite. However, the Casimir energy densities (and therefore the total Casimir energies) of the aforementioned fields, even in the piston configuration, contain singularities. We shall demonstrate a use of the Casimir piston for the regularization of the Casimir energy density.
We will make use of the Casimir piston configuration, but we shall fill the two chambers with fields of different statistics. To be specific, consider that the a-chamber is filled with a Majorana fermion with bag boundary conditions on the boundaries, while the (L − a)-chamber is filled with a vector boson satisfying perfect conductor boundary conditions on the two plates. Notice that the system as a whole is determined by the Lagrangian (19), so there is a remnant supersymmetry, which is however broken since the fields are confined in different places in space and, of course, the boundary conditions are different. The fermionic and bosonic Casimir energy densities per unit area are given by the expressions above, and the total Casimir energy density for the piston system is $U_{\mathrm{total}} = U_f(a) + U_b(L-a)$. Adding the above two contributions, we notice that the total Casimir energy density is finite. The total Casimir energy density then behaves as the sum of $\bar{U}_f$ and $\bar{U}_b$, where $\bar{U}_f$ is the regularized Majorana fermion energy density and $\bar{U}_b$ is the corresponding one for the vector boson. Therefore, the energy density in each chamber is made regular and, of course, the total energy density is regular. The total fermionic Casimir energy per unit area then follows [28], since the fermionic energy density is regularized and independent of x; in the same way we obtain the total vector boson Casimir energy per unit area. Adding the bosonic and fermionic contributions we obtain the total Casimir energy of the piston, $E_{\mathrm{total}} = -\frac{1}{720}(\cdots)$. We can clearly see that the total Casimir energy is free of singularities, since we regularized the Casimir energy density. By taking the limit L → ∞, the total Casimir energy becomes equal to the single Majorana fermion Casimir energy. Thus the piston offers a regularization method for the fermion Casimir energy (and, more importantly, for the vector boson). This was the motivation to study such configurations. Of course, the total Casimir force on the piston plate is regularized and free of singularities. Indeed, since $F = -\partial E / \partial a$, taking the derivative of (32) with respect to a we obtain the force, relation (33). Again, in the limit L → ∞ we recover the usual fermionic Casimir force between plates with bag boundary conditions, divided by 2 due to the Majorana degrees of freedom [23,24,26,27]. The most important case arises when, in the above piston setup, the fields are confined in the chambers in the inverse order, that is, the vector boson in the a-chamber and the Majorana fermion in the (L − a)-chamber. Following the same procedure as before, we obtain the total Casimir energy, $E_{\mathrm{total}} = -\frac{1}{720}(\cdots)$, and the total Casimir force, $F_{\mathrm{total}} = \frac{21}{720}(\cdots)$. It is obvious that the total Casimir energy is again singularity free, as it was in the previous case. The difference between the two cases is that in the limit L → ∞ the total Casimir energy now becomes the Casimir energy of a massless vector boson field confined between two plates. Thus the inverse piston configuration serves as a regularization technique for the vector boson Casimir energy. Of course the force is finite, as we can see from (35). The new regularization feature that the piston offers within the setup we presented is that the total energy is finite, and not just the force. In conclusion, the piston can offer a regularization method for the vector boson and Majorana fermion Casimir energies when we make use of a "supersymmetric" system in which fermions and bosons are put in different chambers.
Although supersymmetry is broken due to the boundary conditions, the singularities cancel in a supersymmetric way and the Casimir energy is singularity free. In the limit L → ∞ we recover the known results. The new result in this use of the piston is that the Casimir energy is regularized explicitly, and not just the force.

Fermions-Bosons with Chiral Boundary Conditions Apart from the bag boundary conditions on the plates that can be used for fermions, in the massless case we can use the so-called chiral boundary conditions. This stems from the invariance of the massless Dirac equation under the transformation (37), where β is an arbitrary phase. The bag boundary conditions make the fermion current vanish on the surface of the plates, $n_\mu \bar{\psi} \gamma^\mu \psi = 0$ (39); however, the bag condition (38) is not invariant under the chiral transformation [27]. The restoration of the chiral symmetry in the boundary conditions can be achieved through the so-called chiral boundary conditions (40). These conditions are invariant under the transformation (37), with φ being promoted to a dynamical variable transforming as φ → φ − 2β. Consider a chamber made from two parallel plates containing one vector boson gauge field satisfying perfect conductor boundary conditions on the plates and one Majorana fermion satisfying chiral boundary conditions on the plates. Regarding the chiral boundary conditions, the parameter φ is taken to be φ = π on the plate located at x = 0 and a general value φ on the other plate at x = a (with 0 ≤ φ ≤ 2π). The reason to consider this setup is to examine the behavior of the Casimir energy and of the Casimir force as functions of the chiral parameter φ. Moreover, we want to examine the minimum of the energy as a function of φ. It is known from reference [27] that the minimum energy for a fermion with chiral boundaries is obtained for φ = π. We want to see how this result changes when we consider bosons and fermions together in a "supersymmetric" way (keeping in mind that chiral boundaries break supersymmetry). The motivation for using bosons and fermions is, as before, the cancellation of the singularities in the Casimir energy, and consequently in the Casimir force, without the introduction of counter-terms. The vector boson contribution to the total Casimir energy per unit area in the chamber, and the fermionic Casimir energy per unit area with chiral boundaries, take the forms given above. Adding the bosonic and fermionic contributions we obtain the total Casimir energy in the chamber, which is regular since the singularities of the two cancel each other. From the above we easily obtain the Casimir force between the plates. Thus we see that in the Casimir chamber with chiral boundaries the total Casimir energy is regularized.

Casimir Piston with Chiral Boundaries To complete the study of the chiral-boundary Majorana fermion-vector boson chamber, we shall consider a piston using the above combination of fields and boundary conditions. In particular, suppose that we have a piston whose left chamber is filled with a Majorana fermion obeying chiral boundary conditions at x = 0 and x = a (with φ = π at x = 0 and general φ at x = a), and with a vector boson in the other chamber, with perfect conductor boundary conditions at x = a and x = L.
Using the same procedure as in the piston configuration of Section 4, we obtain the total Casimir energy, which is finite (free of singularities), together with the force on the plate at x = a. Notice that in the limit L → ∞ the energy and the force become equal to those of a single Majorana fermion between plates with chiral boundary conditions. A similar analysis holds for the case in which the boson and the fermion are put into the opposite chambers. In that case the energy is again finite, of course, and so is the force; as before, the limit L → ∞ recovers the known result for the vector boson force between two plates. Let us note that, as in the previous section, the regularization in the piston case occurs at the level of the energy density per unit area.

Fermions with Bag Boundaries and Bosons with Robin Boundary Conditions

In the previous sections we discussed the case of massless Majorana fermions and vector bosons confined between parallel plates. The number of fields we used was determined by the way they appear in a vector supermultiplet of an N = 1 supersymmetry. Of course, supersymmetry was broken by the boundary conditions satisfied by the fermions and bosons (bag or chiral for the fermions and perfect conductor for the bosons). Although supersymmetry was broken, the effect of using one vector boson and one Majorana fermion was that the singularities in the total Casimir energy per unit area (for the chamber), or in the Casimir energy density per unit area (for the piston case), were cancelled. In this section we shall consider a similar interplay between the number of fermions and bosons, using two equal-mass scalar bosons and one massive Dirac fermion. The boundary conditions will be bag for the fermion and specific Robin boundary conditions for the scalar bosons. To start with, consider a fermion between two parallel plates located at x_d = 0 and x_d = L, embedded in a flat d-dimensional spacetime. The Casimir energy for the fermion is given by the standard slab mode sum over the eigenvalues z_n [7,24], with C(d) = 2^{(d−1)/2} for d odd and C(d) = 2^{(d−2)/2} for d even. When the fermion obeys bag boundary conditions on the two plates, the z_n appearing above are solutions of the equation

mL sin z + z cos z = 0. (50)

The roots of this equation solve the eigenvalue problem for the massive slab bag fermion [7,24]. Now consider a boson between two parallel plates located at x_d = 0 and x_d = L, obeying Robin boundary conditions [10,33] of the form (51) on the first plate (at x = 0) and (52) on the second plate (β₁,₂ are arbitrary constants). Robin boundary conditions are known to provide conformal invariance for field theories between parallel plates [33] and are frequently used in the calculation of the Casimir energy (see [33,10] and references therein). The interest in Robin boundary conditions comes from the fact that there is a region in the parameter space for which the Casimir forces are repulsive for small distances and attractive for large distances; thus stabilization of the distance between the plates can be achieved. We shall use Robin boundaries for the two massive scalar bosons between two parallel plates in a d-dimensional spacetime (the same setup as that of the fermion case). The Casimir energy for the system with these boundary conditions is obtained by solving a transcendental eigenvalue equation (53), in which the Robin coefficients enter through b_i = β_i/L. This equation gives the solution to the spectral problem under Robin boundary conditions [10,33].
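Since the roots of (50) have no closed form, evaluating the fermionic mode sum requires finding them numerically. The following Python sketch locates the first few roots; the value of mL is an arbitrary illustrative choice (not a parameter from the paper), and the bracketing exploits the fact that F(z) = mL sin z + z cos z satisfies F(kπ) = kπ(−1)^k, so each interval (kπ, (k+1)π) contains exactly one root. The same code applies verbatim to the bosonic condition (55) derived below, since the two equations turn out to be identical.

import numpy as np
from scipy.optimize import brentq

mL = 1.0  # illustrative value of the dimensionless product m*L

def F(z):
    # Eigenvalue condition (50) for the massive bag fermion,
    # identical in form to the Dirichlet-Robin boson condition (55).
    return mL * np.sin(z) + z * np.cos(z)

# F(k*pi) = k*pi*(-1)^k alternates in sign, so each interval
# (k*pi, (k+1)*pi) brackets exactly one root.
roots = [brentq(F, k * np.pi + 1e-9, (k + 1) * np.pi - 1e-9) for k in range(5)]
print(roots)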
We shall use Dirichlet boundary conditions at the x = 0 boundary (this means that b₁ = 0) and Robin boundary conditions on the x = L boundary specified by β₂ = 1/m, that is, b₂ = 1/(mL) (54). Thus equation (53) reads

F(y) = mL sin y + y cos y = 0. (55)

The solutions of this equation, namely the roots y_n, solve the spectral problem of the boson field between plates with the boundary conditions (51) and (52). The bosonic Casimir energy of a massive boson is then given by the corresponding mode sum over the roots y_n (56). Notice that equations (55) and (50) are identical. Thus the total Casimir energy for the two boson fields and the one fermion field follows by adding the corresponding contributions (57). The motivation to consider this fermion-boson "supersymmetric" configuration (an N = 1 chiral supermultiplet) is the observation that, for d = 4 and for d = 3, the total Casimir energy vanishes when the fermion and the bosons have the same mass, that is, when m_f = m_b. This result holds only for these two values of d. Indeed, for d = 4 the parameter C(d) = 2^{(d−2)/2}, which holds for d even, becomes C(4) = 2, so the single fermion, weighted by C(4) = 2, exactly cancels the two scalar bosons; the same occurs for d = 3 with the other relation, since C(3) = 2^{(3−1)/2} = 2. Thus, although supersymmetry is broken on the boundaries, the total Casimir energy vanishes. This result is particularly interesting because it occurs only for d = 4 when d is even and for d = 3 when d is odd. This effect is known to occur only in supersymmetric theories (see for example reference [40]).

As a final example, consider a chamber with two massive scalar bosons, one obeying Neumann-Dirichlet and the other Dirichlet-Dirichlet boundary conditions. We shall use zeta-function regularization in order to compute the Casimir energy, keeping d and s general (in the end we shall put d = 4 and s = −1/2). Making use of the analytic continuation given in [4] for half-integer mode sums of the form Σ_{n=0}^∞ f(a(n + 1/2)) (in the massless case the simplest such relation is Σ_{n=0}^∞ (n + 1/2)^{-s} = (2^s − 1)ζ(s)), the Casimir energy for the Neumann-Dirichlet scalar boson is obtained (60). In the same way we obtain the Casimir energy of the Dirichlet-Dirichlet boson (63), where we have made use of the analytic continuation of the Epstein-Hurwitz zeta function ζ_EH(s; p) given in [4]. Thus the total Casimir energy between the plates is E_tot = E_DD + E_ND. As can be easily seen, relations (63) and (60) contain singularities due to the Gamma function Γ(s − (d−1)/2) for d = 4 and s = −1/2 (first line of (60) and second line of (63)). It is obvious that the two singular terms cancel each other, so the final result for the total Casimir energy is finite. We shall not pursue this further, since we just wanted to show how the cancellation of singularities within this setup occurs in four dimensions.

Conclusions

We have studied the Casimir effect of a massless Majorana fermion field and of a massless vector boson field confined between parallel plates. In particular, we considered the vector boson and the fermion field contained in a chamber and in a piston, and we used various configurations which we now briefly discuss. The fields were chosen according to an N = 1 supersymmetric Lagrangian, with the supersymmetry actually broken by the boundary conditions obeyed by the fields on the parallel plates. The fermion field was taken to obey bag or chiral bag boundary conditions on the plates, while the vector boson field obeyed perfect conductor boundary conditions. We saw that when we use both massless Majorana fermions and vector bosons in a chamber, both the total Casimir energy density and the Casimir energy density per unit area are singularity free. We used the cutoff method in order to see this cancellation explicitly. Thus, although supersymmetry was explicitly broken on the boundary of the parallel-plate chamber, a remnant of supersymmetry was responsible for the cancellation of the singularities.
Another interesting configuration we used in this article is the piston, which consists of three parallel plates placed at x = 0, x = a and x = L. We started by putting in the a chamber the massless Majorana fermion and in the L − a chamber the massless vector boson, obeying bag and perfect conductor boundary conditions respectively. We concluded that the energy density per unit area of the system is regularized when we add the energy density contributions from the two chambers of the piston. Then, integrating over the finite dimension, we obtain the total energy per unit area for the piston. The limit L → ∞ yields the fermionic Casimir energy density for a Majorana fermion. The Casimir force was calculated, and in the limit L → ∞ it turns out to be identical with the fermion Casimir force between two plates at a distance a. Equally interesting is the case in which the vector boson is put in the a chamber and the Majorana fermion in the L − a chamber. We concluded that the energy density per unit area is again regularized. As before, we calculated the total Casimir energy and force, which we found to be finite. The limit L → ∞ yields the Casimir force and Casimir energy for a vector boson between two parallel plates. We should mention that this method yields a regularized result at the energy density level, and not just at the force level. The interest in calculating massless vector boson Casimir energies is obvious, since the photon belongs to this category. Thus the confinement of the electromagnetic field in a slab leads to a quantum effect, a vacuum energy density. We regularized the vacuum energy density using a massless Majorana fermion in the other chamber; that Majorana particle could be the photino [41]. Actually, the Dirac neutrino is experimentally ruled out, since the Z-induced coherent neutron contribution would be too large, which does not happen in the Majorana case. Another interesting appearance of Majorana fermion states comes from the physics of superconductors and superfluids. In particular, a Majorana bound state is theoretically predicted in rotating superfluid ³He-A between parallel plates. In the parallel-plate geometry the gap is about 10 µm and a magnetic field is applied along the plates. The Majorana vortex bound state is associated with a singular vortex in a chiral p-wave superfluid [42]. Similar studies show that a Majorana bound state exists in triplet superconductors (see [43]). Concluding the Majorana-vector boson case, the chamber and the piston setups were extended to the case of a fermion field obeying chiral bag boundary conditions, and we found that similar results hold. Following the same supersymmetric recipe regarding the number of fields (of course, supersymmetry is broken on the boundaries), we studied a massive Dirac fermion and two massive scalar bosons between two parallel plates. The spacetime dimension is d; the fermion obeys bag boundary conditions on the plates, while the scalar bosons obey specific Robin boundary conditions. We found that for d = 4 (for d even) and for d = 3 (for d odd), when the fermion and the bosons have the same mass, the total Casimir energy vanishes. It is interesting to note that this occurs only for d = 3, 4. Finally, we presented another case in which the singularities of the Casimir energy cancel: a Casimir chamber filled with two massive scalar bosons obeying Neumann and Dirichlet boundary conditions at the plates. We found that under these circumstances the Casimir energy of the system is singularity free.
Regarding the Majorana-vector boson case, it would be interesting to include finite-temperature corrections or a constant magnetic field in the calculation of the Casimir energy. This is of particular importance, since these situations occur in nano-devices and in other technological applications where Majorana fermions appear [42].
Much Ado About Nothing? Reflections on the European Commission's Proposal for an Inter-institutional Ethics Body

On 8 June 2023, the European Commission published a long-awaited proposal for the establishment of an inter-institutional ethics body, meant to restore the public's faith in the European Union's administration following the Qatargate corruption scandal. Alas, the Commission's proposal outlines a body that lacks investigative and sanctioning powers, has minimal administrative capacity and for the most part relies on the institutions' own policing. Put simply, it falls short of the promises made by the Commission's President in her 2019 political guidelines, and much shorter of what was expected as a remedy to the European Union's recent ethics-related scandals. In this short piece, we reflect on the Commission's proposal for an inter-institutional ethics body in light of the overall ethics framework in the Union and provide a brief analysis of the Commission's missed opportunity and of what could have been.

I. Introduction

The European Commission's 2019 political guidelines, by the then-candidate President of the European Commission Ursula von der Leyen, promised increased attention to transparency in the European Union (EU) to restore the faith of the public. Among several promises related to transparency, integrity and democratic scrutiny was the explicit pledge to support the creation of an independent ethics body for all EU institutions.1 The plan for the establishment of such a body has been in the pipeline ever since it was announced in von der Leyen's campaign speeches in 2019 and the mission letter to Commission Vice-President for Values and Transparency Věra Jourová.2 The proposal for working towards an inter-institutional ethics body has recently gained new urgency after the European Parliament's Qatargate corruption scandal, which saw the emergence of allegations of criminal organisation, money laundering and bribery against Members of the European Parliament (MEPs), as well as high-profile lobbying scandals. The long-awaited proposal saw the light of day on 8 June 2023.3 While not entirely crisis-driven, as the intention for this proposal long preceded the Qatargate scandal, the publication of the Commission's proposal can be seen, to a certain extent, as a potential solution to a problem that has put considerable strain on the legitimacy of the EU's administration. The proposal contains several grandiose promises; however, most of them seem to fall short of von der Leyen's original plans and are a far cry from what was expected in the wake of the Qatargate scandal according to several critics,4 as the proposed ethics body lacks investigative and sanctioning powers and mostly relies on self-policing. What precisely is the role of this body? How does the Commission aim to live up to its promises from 2019, in light of the recent scandals and with an eye to the future of ethics rules in the Union? In this short reflection piece, we present some critical thoughts on the Commission's proposal for the inter-institutional ethics body, its architecture, its positioning against other actors of EU public ethics and its missed potential as an actor of good administration in the Union.
II. The current context of European Union ethics standards

Let us start from the basics. Scandals aside, why is there even a need for an inter-institutional EU ethics body? What gaps is it supposed to fill? Overall, and as repeatedly brought up in the Commission's proposal for the body, the current ethics framework in the Union is fragmented and differs in several ways, depending on the institution or body or on whether one is a member or staff. In fact, it is not really a framework. Instead, there are several parallel frameworks that operate at different levels, with separate rules and standards, and stemming from various legal sources: for instance, in Article 339 TFEU on professional secrecy for members of the Union's institutions, Article 245 TFEU on the independence, integrity and discretion of Commission members, several articles of the Staff Regulations or Articles 2 and 3 of the European Parliament's Statute on the freedom and independence of MEPs.

The ethics standards and obligations are then entrusted to internal oversight bodies within the EU institutions themselves, which are responsible for monitoring and assessing (in various ways) whether the ethics rules are respected. One manner of achieving this is through the members' or staff's own declarations (eg through declarations of interests).5 According to a recent Special Report by the European Court of Auditors (ECA) on ethics rules in the EU,6 the general fragmentation and weakness of the current EU ethics "frameworks" are coupled with limited training, knowledge and awareness on the side of staff. As the report rightfully points out, these three elements are key to the effective enforcement of ethics rules, especially when those, to a large extent, rely on a common organisational culture and shared understanding of ethical behaviour. Within the institutional frameworks, the ECA study reports that staff members from different positions show hesitation about reporting ethical issues and unethical behaviours due to a perceived lack of appropriate protection.7 This creates an issue for procedures that are based on self-reporting and self-assessment, as the independence required in performing these exercises might be lacking.

3 European Commission, "Communication from the Commission to the European Parliament, the European Council, the Council, the European Court of Justice, the European Central Bank, the European Court of Auditors, the European Economic and Social Committee and the Committee of the Regions: Proposal for an interinstitutional ethics body" (2023) COM(2023) 311 final.
4 See, for instance, J Rankin, "MEPs fear EU ethics body will fall short of Von der Leyen's promises" (The Guardian, 11 May 2023) <https://www.theguardian.com/world/2023/may/11/eu-ethics-body-von-der-leyen-promises-european-parliament> (last accessed 10 June 2023); P Engelbrecht-Bogdanov, "Transparency International EU: Watered-down EU ethics body lacks credibility" (Transparency International, 8 June 2023) <https://transparency.eu/transparencyinternational-eu-watered-down-eu-ethics-body-lacks-credibility/> (last accessed 10 June 2023).
5 For an extensive overview of the different enforcement mechanisms tied to ethics rules in the Union, see A Alemanno, "The Greens/EFA in the European Parliament: Legal study on an EU ethics body" (2021) available at <https://www.greens-efa.eu/files/assets/docs/legal_study_eu_ethics_body_web_11012021.pdf> (last accessed 16 June 2023).
Briefly put, in the Union's ethics rules frameworks there are several instances where we can see significant fragmentation, a lot of which is justified in light of the different duties and contexts in which institutions and bodies operate. Certainly, some ethical standards in particular contexts are, to a degree, already addressed by several bodies in the Union (eg through the office of the European Ombudsman, focusing on maladministration in EU bodies and institutions, or the European Anti-Fraud Office (OLAF), focusing on corruption or fraud concerning the Union's financial interests). While such bodies do play an important part in ensuring the compliance of EU bodies and institutions with a set of standards of ethical behaviour within their particular area of expertise, they are also reactive in nature and architecture and, for the most part, act in terms of complaint handling or investigations on a case-by-case basis. This current institutional set-up leaves a rather substantial "gap" in the establishment of common ethical standards that are applicable across the board and do not relate to a specific case by a specific institution at a specific time. In short, there are several opportunities for harmonisation and standardisation, and there is ample room for improvement in terms of streamlining information and guidance regarding the existing rules governing ethical behaviour, in ways that are separate from improving existing administrative structures or common practices in the Union. Having said that, the proposed ethics body does not address these whatsoever, nor does it attempt to introduce changes that would enhance the effectiveness of things that are already in place. Instead, it represents a standalone addition to the framework.

III. The proposed architecture of the inter-institutional ethics body

In this context, addressing this gap in the Union's ethics rules, the Commission proposed the setting up of the EU inter-institutional ethics body. What is the Commission's vision for the ethics body, and how is it supposed to add to the Union's democratic protection system? According to the introduction of the Commission's proposal, the ethics body is meant to complement the Commission's work on the rule of law together with the proposed anti-corruption package,8 the upcoming "defence of democracy package"9 and the mandatory transparency register.10 Most intended actions, stemming from the ethics body itself or from other aspects of the above initiatives, are meant to either add on to or harmonise existing ethics rules that apply to the EU institutions. For instance, the proposal repeatedly makes the point that several ethics rules already exist in the Union's institutions (for instance, through rules of procedure or codes of conduct) but are fragmented, misaligned and difficult to understand for the general public. The role of the ethics body here is to come up with common minimum ethical standards for members of the EU institutions and bodies listed under Articles 13(1) and 13(4) TEU11 and to provide a mechanism to coordinate and exchange views and good practice on ethical standards, with due observance of the autonomy and independence of each institution. In short, this would cover the coordination of ethics standards for the European Parliament, the European Council, the Council, the European Commission, the Court of Justice, the European Central Bank, the ECA, the European Economic and Social Committee and the European Committee of the Regions. Starting off, the proposed ethics body seems to have quite a
considerable task on its hands.

The proposed composition of the body foresees that each participating institution will designate one full member to the ethics body and one alternate (in principle, at the level of vice-president or equivalent),12 totalling nine members. According to Article 4 of the proposed agreement, the body will be chaired on a yearly rotating basis by one of the participating institutions.13 In addition to the institutional representatives, the body will be supported by independent experts in an advisory role,14 who will be administratively attached to the Commission and will have the status of Special Adviser,15 as well as by a secretariat, which will consist of heads of unit (or equivalent) responsible for the ethics rules of each participating institution.16 The resources available for the body, in terms of human, administrative, technical and financial resources, including the appropriate staffing of the Secretariat, will be shared among the institutions and will be agreed on through a memorandum three months after the appointment of the members and experts.17

With this in mind, let us get to the crux of the matter: what can this body actually do? In Articles 6 and 7 of the proposal, outlining the mandate and tasks of the body, there are three main areas of action envisioned by the Commission. First, the body is tasked with developing and reviewing common minimum standards applicable to the conduct of the members of the participating institutions, with providing a forum for the exchange of views on each institution's self-assessment of its own internal rules in light of the developed ethics standards, and with promoting cooperation among the institutions on issues of common interest. Article 7 of the proposal lists these areas of common interest wherein the body is expected to develop standards, which include: interests and assets of the members, external activities during and after the members' term of office and acceptance of gifts, awards, decorations, prizes and honours. The minimum standards in the above areas will be developed and agreed upon by consensus and will be formalised in writing.18 In addition to substantive standards in these areas, the body will also be tasked with developing actions to raise awareness and promote compliance. So far, so good.

How will the ethics body ensure that the common minimum standards are honoured and followed? The Commission's response to the central question of this matter is, unfortunately, underwhelming: self-assessments and self-policing. Article 9 of the proposal stipulates that each participating institution carries out a written self-assessment report, reflecting on the alignment of its internal rules with the standards developed by the ethics body. The self-assessment report of each institution is meant to be presented by the concerned institution during a meeting of the body, after which the report is reviewed by the body's team of independent experts, who will then issue an opinion.19 Upon the issuing of the opinion of the independent experts, which will either dissent from or support the institution's self-assessment, the body will hold an exchange of views with the aim of enabling dialogue between the participating institutions.20 Certainly, while there is value in the common setting of norms and the potential for peer shame and peer praise, the "powers" of the body end there, and neither the exchange of views nor the opinion or report is binding or has any recognised legal effect.21
Ultimately, it is up to each individual institution to truthfully and independently assess how its own internal rules measure against the common ethics standards and to fully report on the findings, and it is up to the institution to decide whether or not to take any action related to potential ethical missteps brought up through the self-assessment and the follow-up actions by the body and the independent experts. Other than that, strictly speaking, the body itself is not envisioned to have any power whatsoever to investigate or sanction potential unethical behaviours. The only tangible action in the body's toolbox is the potential to name and shame laggards through its self-assessment reports, opinions and annual reports. Does this make for an effective anti-corruption mechanism? That, to a certain extent, is in the eye of the beholder and still remains to be seen. Yet, one might notice a mismatch between the grandiose promises of the Commission and the end result of this present proposal.

The next steps in the process of the proposal are expected to bring about some changes to the currently proposed architecture of the body. On the basis of the proposal, the Commission has invited the participating institutions to commence an inter-institutional dialogue and negotiate the make-up, powers and minutiae of the ethics body during a meeting in Brussels, which was set to take place in July 2023. While no new information has been released regarding this meeting, the European Parliament did adopt a resolution on 12 July 2023,22 where it expressly stated that the proposal is lacking in ambition and falls short of a "genuine, independent ethics body".23

IV. A plaster for a bullet wound? A missed opportunity for improving the enforcement of ethics rules in the European Union institutional system

Where do we go from here? The long-awaited proposal for an EU ethics body seems to fall somewhat short of its original vision as a means of helping maintain or restore the faith of Europeans in the Union by being a force for ethics, integrity and transparency. While a mandate for creating common standards for ethical behaviour amongst the EU institutions and for harmonising existing internal procedures might be a first step in the right direction, and is by all means a welcome development, it is also clear that the Commission's current proposal is lacking in ambition, especially given the context in which it emerged and gained momentum. The distinct lack of investigative and sanctioning powers of the proposed inter-institutional ethics body and the express lack of any binding or legal effect tied to the participating institutions' self-assessment and self-policing exercises are certainly not encouraging. Though the series of events surrounding Qatargate and the Uber scandal, amongst others, prepared the ground for serious action against corruption and unethical behaviour in the Union institutions, and while the explanatory memorandum of the Commission's proposal certainly does a good job in "selling" the importance of the inter-institutional ethics body, what we are essentially left with in reality is simply the potential for a mild scolding.

18 ibid, Arts 7(5) and 7(7). 19 ibid, Arts 9(1)-(4). 20 ibid, Art 9(5). 21 ibid, Art 9(8).
22 European Parliament, "European Parliament resolution of 12 July 2023 on the establishment of the EU ethics body" 2023/2741(RSP) available at <https://www.europarl.europa.eu/doceo/document/TA-9-2023-0281_EN.html> (last accessed 30 October 2023).
23 ibid, para 1.
It is noteworthy that, in the run-up to the unveiling of the current proposal, the European Parliament has repeatedly asked for investigative powers to be given to any forthcoming EU independent ethics body. In a 2021 resolution,24 which the Commission itself points to in the current proposal, and then again as recently as February 2023,25 the European Parliament set out the powers and organising principles for a common ethics body for the Commission and the Parliament, covering both members and administrative staff and open to future participation from other EU institutions and bodies. Its mandate was meant to extend to "all provisions of codes of conduct and applicable rules on transparency, ethics and integrity" as well as obligations of the participating institutions relating to the Transparency Register, the protection of whistleblowers and the management of conflicts of interest.26 The European Parliament indicated that the ethics body should be tasked with verifying the veracity of declarations of financial interests and compliance with revolving-door rules.27 It also outlined a process whereby investigations into individual cases could be started on the body's own initiative as well as upon notification from external parties,28 and the ethics body would have the power to request documents,29 liaise with national authorities and EU watchdog institutions (such as OLAF, the European Public Prosecutor's Office (EPPO), the European Ombudsman and the ECA),30 hear the accused individuals and propose sanctions to the responsible institution.31 In sum, the European Parliament proposed something akin to a centralised, independent regulatory agency with significant monitoring and investigative powers. While this represents an ambitious proposal, it is also not a particularly original one; rather, it draws on a popular institutional model currently in operation in several EU Member States.32 A well-known example is the High Authority for Transparency in Public Life (La Haute Autorité pour la transparence de la vie publique) in France, which checks the asset and interest declarations of public officials, monitors the revolving-door rules for certain categories of public officials and civil servants, manages the digital lobby register and offers ethics trainings as well as on-request ethics guidance for individual cases.33 In Ireland, the Standards in Public Office Commission receives complaints and conducts formal investigations regarding possible wrongdoing relating to ethics rules and standards of conduct for public officials (elected and appointed), political party financing and lobby registration and disclosure.34 There are examples also from Eastern Europe; for instance, the Romanian National Integrity Agency (Agenţia Naţională de Integritate) conducts systematic checks on the asset declarations of all elected public officials and undertakes administrative inquiries regarding conflicts of interest, incompatibilities and unjustified assets.35
We mention these examples not as an argument for the efficacy of a centralised ethics agency with investigative powers (in reality, performance varies widely from one national setting to another), but simply to illustrate that this is a commonly encountered institutional model. This makes the European Commission's refusal to consider it a controversial choice. Its stance appears to be justified by the principle that dealing with ethics issues is and should remain squarely within the sphere of autonomy of each EU institution. Thus, the Commission notes that "the institutions cannot renounce to exercise their respective powers which are entrusted to them by the Treaties", and, in particular, they "cannot delegate the responsibility for the conduct of their members and their prerogative to react to breaches of ethical rules by individual members".36 It is beyond the scope of this short insight piece to analyse this legal principle and its interpretation by the Commission, although we do suggest the latter can be subject to debate.

V. An ethics talking shop? Assessing the proposed inter-institutional ethics body

Having said this, it is worth considering the Commission's proposal on its own merits rather than decrying the lost opportunities for improved enforcement of ethics rules across the EU institutions. As we explained earlier, what the Commission has put on the table is a light-touch approach, whereby EU institutions are offered a platform to achieve regulatory approximation through dialogue, best-practice sharing and standard-setting. In the remainder of this paper, we show that, even when judged against this more modest objective, the Commission's proposal is unlikely to deliver results.

Firstly, the proposed inter-institutional agreement (IIA) covers no fewer than nine participating EU institutions and bodies, with very different mandates and domains of activity and subject to different accountability frameworks. While the ambitious scope is laudable, it also means that agreement around ethics standards and rules will be more difficult to reach, and, to the extent that agreement is found, it will be around the lowest common denominator. Even if we account for the provision that the contents of the IIA should not, "under any circumstance, constitute grounds for lowering the standards already applied by a party",37 the general dynamic is likely to be one that leans towards rewarding laggards and frustrating leaders. This is not a trivial aspect, because there are areas where inter-institutional differences are indeed remarkable. For instance, while so-called "cooling-off" periods apply to both the President of the European Council (eighteen months) and the College of Commissioners (two years for Commissioners and three years for the Commission President), MEPs are not subject to cooling-off periods at all.38
Secondly, the proposal fails to leverage the full potential of a peer-review system. The standards developed by the ethics body are expressly not legally binding, which renders their application entirely dependent on the goodwill and "sincere cooperation" of the participating institutions. This is reminiscent of the approach taken by bodies such as the European Ombudsman or the ECA, which rely on the application of "soft" power instead of legal force to persuade those concerned to act in a certain manner. In such scenarios, compliance can still be leveraged through a robust practice of public naming and shaming, which relies on independent assessments of the participating parties against commonly agreed goalposts. However, the Commission's proposal misses the mark in two regards: firstly, by putting the EU institutions in charge of assessing themselves, with the panel of five independent experts reduced to merely responding to self-assessments; and secondly, by not endowing the ethics body with the necessary authority over EU institutions that could effectively motivate them to change their behaviour and comply with the developed ethics standards.

Although Article 9(7) of the Commission's proposal stipulates that the participating institutions should update their internal rules in accordance with the conclusions of the self-assessment exercise, it does not impose any follow-up reporting requirements, let alone any (symbolic) penalties for failing to follow through. This effectively means that we will not know whether or to what extent the recommendations issued by the ethics body will actually be implemented by the participating institutions. In fact, the entire exercise of institutional self-assessment, followed by the opinion of independent experts, is not foreseen as a regular, recurring process but rather as one-off or sporadic at best.39 Also significant is that the entire process is devoid of any comparative element. The self-assessments and the corresponding reports are published on the ethics body's website, but there is no single document to take stock of the performance of the participating parties in a horizontal way, to identify good and bad practices and to single out both leaders and laggards. This missing comparative element is in fact the key to activating the socialisation and social pressure mechanisms (ie naming and shaming) that give any peer-review mechanism a chance to be effective.40 The institutional design choices outlined above all disempower the EU ethics body and make it much more likely to simply rubber-stamp the institutions' ethics self-assessments rather than provide authentic, comprehensive and systematic appraisals of their ethics frameworks or any kind of substantive and reliable incentives to improve. Equally important, the proposed ethics body will also probably be robbed of the epistemic legitimacy that successful peer-review bodies enjoy owing to their independence and subject expertise.41
It is doubtful that the independent experts associated with the ethics body will be truly independent. They are to be appointed "by common agreement" by the participating institutions, according to a procedure as yet unspecified, but in practice they will be administratively attached to the European Commission as Special Advisers. There seems to be little to safeguard the experts' independence once appointed. For instance, the draft IIA does not substantially address how they can be dismissed, but we do know that the European Commission enjoys a great deal of discretion in appointing, dismissing and setting the terms of activity for its Special Advisers, an issue that has drawn criticism before.42 Furthermore, the independent experts are not explicitly involved in the development of the common ethics standards enumerated in Article 7 of the proposal, nor in the decision to update those standards. This is an important omission that brings into doubt the legitimacy of said standards, but it also raises the question of why the proposed IIA envisages independent experts at all, given that they are not utilised for the one key activity where their expertise is directly relevant.

Finally, there is no research capacity to support the work of the EU ethics body. If this is to be a standard-setting body, then a research capacity is crucial, as ethics standards do not exist in a vacuum but are constantly evolving in relation not only to frameworks set by international institutions such as the Organisation for Economic Co-operation and Development (OECD) or the Group of States against Corruption (GRECO) but also relative to broad societal values. The panel of five experts is unlikely to be able to cope with the required work by itself, especially given the deficiencies discussed above. Just to put things into perspective, it is useful to look at the example of the UK Committee on Standards in Public Life (CSPL), which, much like the envisaged EU ethics body, serves a purely advisory function. While the CSPL is not a regulator and does not investigate individual complaints, it does advise the UK Prime Minister on ethical standards "across public life" by commissioning periodic assessments of the ethical standards landscape in UK central and local government, but also through evidence-gathering into a variety of other topical issues (eg election finance laws, the impact of artificial intelligence on public standards, the outside interests of Members of Parliament, intimidation and bullying in public institutions).43 This research and evidence-gathering work is essential to grounding the CSPL in its mission and to building up its epistemic legitimacy. The UK example shows that it is possible to have a functional inter-institutional ethics body with just advisory functions, but it has to be adequately resourced and to enjoy a significant degree of autonomy.

VI. The place of the inter-institutional ethics body in the accountability framework of the European Union

Before concluding, it is important to assess the proposed inter-institutional ethics body within the broader oversight and accountability framework of the EU. How can its distinctive added value be understood?
From one perspective, the answer is that it actually brings very little (if any) added value, particularly when it comes to enforcement. The first impediment is that it lacks its own investigative powers; as envisaged in the Commission's proposal, the ethics body cannot even advise or be (formally) consulted with regard to the application of ethics rules in individual cases. The existing EU watchdog bodies44 that are endowed with investigative powers cannot comprehensively address the enforcement of ethics rules in the EU institutional system. When considering OLAF and EPPO, the first impediment is that both of their mandates are explicitly tied to offences that affect the financial interests of the EU, whereas unethical behaviour need not necessarily have financial implications. The second impediment is that, while OLAF and EPPO can investigate the various categories of staff and the members of EU institutions and bodies, they only handle serious offences. EPPO deals with criminal activity,45 while OLAF conducts administrative investigations, including into "suspicions of serious misconduct by EU staff and members of the EU institutions".46 However, unethical or ethically questionable behaviour need not necessarily constitute "serious misconduct", or indeed even be illegal, for it to lead to public scandals and to damage citizens' trust in EU institutions.47 Finally, less serious cases may be handled by the European Ombudsman, who inquires into maladministration in EU institutions and bodies but cannot impose sanctions. This short overview shows that there would have been ample place for a powerful watchdog body dedicated specifically to ethics, as the European Parliament had proposed.

On the other hand, the proposed inter-institutional ethics body has better chances of bringing added value regarding the management of ethics within the EU institutions and bodies, thus helping to prevent rather than punish unethical (or ethically questionable) conduct. From this perspective, its mandate partially overlaps with the activities of the European Ombudsman and the ECA, both of which can and indeed have issued recommendations on ethics matters to the EU institutions. The ECA has done this through the instrument of special reports, while the European Ombudsman has primarily used strategic investigations, as well as standalone standard-setting documents like the Code of Good Administrative Behaviour and the Public Service Principles for the EU Civil Service. While the work done by the European Ombudsman and the ECA in terms of ethics standard-setting remains patchy, and thus does not invalidate the need for a specialised body with a more comprehensive mandate like the proposed inter-institutional ethics body, it is also evident that a mechanism of cooperation between these three watchdogs would be required; this is, however, not foreseen in the Commission's proposal.

A final question concerns the legal value of the common minimum standards adopted by the proposed ethics body. The draft IIA specifies a commitment of the participating institutions to implement the standards "in their internal rules on the conduct of their members",48 and the IIA itself is of a binding nature for the parties.49
The minimum standards are therefore foreseen to have some legal weight, but it would be incorrect to consider them de facto legally binding, as there is no real mechanism of enforcement. As previously shown, the only way to check whether the standards are implemented (and respected) is through the reports that the ethics body produces based on the institutions' self-assessments. However, these reports are expressly not meant to have "any binding or legal effect".50 In practice, this means that the common ethics standards, and indeed any decision of the ethics body, have symbolic value more than anything else. This is not a unique situation for an EU watchdog body. Both the European Ombudsman and the ECA face similar challenges, as neither of them can impose sanctions on the institutions that they investigate or otherwise constrain them to change their behaviour. Instead, they rely on their persuasion capital, which they build up by adopting cooperative styles of control and by strategically using their relationship with the European Parliament, which can be recruited to put pressure on uncooperative institutions.51 However, the new ethics body will not have this type of lever at its disposal. What is more, as shown in the previous section, it also lacks the needed attributes to activate the "softer" social pressure mechanism of naming and shaming. Therefore, from both legal and practical political perspectives, the ethics body simply does not have the necessary authority or status over the EU institutions to apply sufficient pressure and effectively motivate behavioural change.

VII. Some early conclusions

The European Commission's proposal for an inter-institutional EU ethics body represents a missed opportunity to strengthen the application of ethics rules across EU institutions and to fill an important gap in the enforcement landscape of the Union. What is more, the success of the proposed ethics body as a standard-setting advisory forum remains uncertain, as it lacks the necessary attributes to leverage institutional change through the "softer" social pressure mechanism of naming and shaming. While such mechanisms might have proven more effective (although not entirely) for bodies such as the ECA or the European Ombudsman, we must not forget that the proposed ethics body lacks a key element that could facilitate this approach: status. From both legal and political perspectives, the ethics body simply does not have the necessary authority over the EU institutions to apply sufficient pressure and effectively motivate behavioural change.

48 European Commission, "ANNEX to the Communication from the Commission to the European Parliament, the European Council, the Council, the European Court of Justice, the European Central Bank, the European Court of Auditors, the European Economic and Social Committee and the Committee of the Regions: Proposal for an interinstitutional ethics body" (2023) ANNEX 1 COM(2023) 311 final, Art 7(7).
49 ibid, Art 21(1). 50 ibid, Art 9(8).
51 See A Năstase and C Neuhold, "The Court of Auditors and the European Ombudsman: The EU's 'Watchdogs'" in DHU Puetter, S Saurugger and J Peterson (eds), The Institutions of the European Union (Oxford, Oxford University Press 2021).
It is worth remembering that the project of an inter-institutional ethics body is not new. In 2000, the Commission proposed an Advisory Group on Standards in Public Life, also based on an IIA, with similar coverage in terms of institutions (minus the European Council and the European Central Bank) and with a similar mission "to provide advice on standards of professional ethics relating to the functioning of the Parties".52 The development of common standards was not foreseen back then, nor was the Advisory Group given any role in assessing the ethics frameworks of the participating institutions. Still, the proposal never took off, as the European Parliament had no appetite for it at the time.53 However, the situation is very different nowadays, when it is the Parliament asking for an inter-institutional ethics body, and not merely a "talking shop", but one with investigative powers. Surely this had the potential to embolden the Commission to come up with a more ambitious proposal rather than dusting off an old, ill-fated idea.

The political challenges of getting nine EU institutions to participate in a common ethics body (of any sort) are undeniably daunting and therefore require some expectation management from the side of critics. Still, it remains to be seen whether the body, within itself and in the context of the Commission's further efforts in this area, will have the resources and capacity to be a force for good, or whether it will simply be an act of window-dressing. Of course, no general conclusions can be drawn yet, as the IIA is at this stage merely a proposal that is destined to be criticised and amended before taking its final shape, if ever. Certainly, our aim here is not to provide any prediction about the overarching development of the EU ethics framework in light of the Commission's proposal. Instead, our intention is to open up a discussion, on the basis of this proposal, regarding the rules that govern the ethical behaviour of the Union's administration.
Beta-adrenergic blocker, 2-(2-hydroxy-3-tert-butylaminopropoxy) chlorobenzene hydrochloride (D-69-12).

The concept of two adrenergic receptor mechanisms, or α- and β-adrenergic receptor systems, originally proposed by Ahlquist (1), was based on the orders of potency of a series of catecholamines. Since dichloroisoprenaline (DCI) was introduced as a new category of drugs that selectively blocked the response due to β-adrenergic activation, several β-adrenergic blockers have been found. In this paper, 2-(2-hydroxy-3-tert-butylaminopropoxy) chlorobenzene hydrochloride (D-69-12) has been found to be a potent β-adrenergic blocker. However, D-69-12 in very high concentrations relaxed the smooth muscle preparations, so the mode of action of D-69-12 was examined. Furthermore, the dual action of D-69-12 on the smooth muscle preparations is discussed in this paper.

Responses of the taenia caecum and tracheal muscle to drugs were recorded through an isotonic lever. The movement of the atrium was isometrically recorded through a mechano-electrical transducer (RCA-5734). Locke-Ringer solution gassed with a mixture of 95% O2 and 5% CO2 and kept at 32°C was used as the bath fluid. The Locke-Ringer solution used contained 9.0 g of NaCl, 0.4 g of KCl, 0.2 g of CaCl2, 0.2 g of MgCl2, 0.5 g of NaHCO3 and 0.5 g of glucose in a litre. In experiments with the isolated atrium, the tension increase produced by isoprenaline was used as the response (3). The competitive antagonistic activities of drugs were expressed as pA2 values, which were calculated from the parallel shift of the concentration-action curve of isoprenaline (4,5). The results in this paper are presented as means of at least 7 experiments.

β-adrenergic blocking activity of D-69-12

All the concentration-action curves of l-isoprenaline tested on the taenia caecum, tracheal muscle and atrium were shifted in parallel towards higher concentrations by D-69-12, indicating a competitive antagonism between l-isoprenaline and D-69-12. D-69-12 (10^-6 to 10^-5 g/ml) also relaxed the taenia caecum. The maximum relaxation produced by D-69-12 was the same as that produced by l-isoprenaline. However, these relaxations of both smooth muscle preparations were unaffected by 10^-7 g/ml of propranolol, a concentration sufficient to block the action of l-isoprenaline (Fig. 1). Furthermore, relaxations of the taenia caecum produced by DCI and D-69-12 were little affected by pretreatment with dibenamine 10^-6 g/ml and propranolol 10^-6 g/ml (Fig. 2). D-69-12 in concentrations of 10^-5 g/ml or more depressed the maximum height of the concentration-action curve of BuTMA, an acetylcholine-like drug. This phenomenon is similar to the antagonism between BuTMA and papaverine.

DISCUSSION

The adrenergic blocking potency ratio of D-69-12 relative to propranolol is one or more in the organ preparations used. D-69-12 in very high concentrations relaxed the smooth muscles. It is known that some β-adrenergic blockers in high concentrations relax the smooth muscles. Takagi and Takayanagi (3) have presented evidence that the relaxation of the smooth muscle preparations produced by DCI is due to its papaverine-like action. It is indicated in this paper that D-69-12, like DCI, has the dual action, and that the relaxation of the smooth muscle by D-69-12 is unaffected by the adrenergic blockers. Furthermore, D-69-12, like papaverine, non-competitively inhibited the concentration-action curve of BuTMA, an acetylcholine-like drug.
These facts indicate the possibility that the inhibitory action of D-69-12 is due to its papaverine-like action. One possible explanation of the dual action is that D-69-12, being racemic, conceivably consists of a mixture in which the two opposite actions reside in the two optical isomers. Therefore, the concentration-action curve of a mixture of an agonist and its antagonist was theoretically analysed. If the agonist A and its antagonist B compete for the same receptor, the concentration-action curve of the agonist A in the presence of the antagonist B is given by Gaddum's equation (1). It is given as a conclusion that the inhibitory action of D-69-12 on the smooth muscle preparations is due to its non-specific, papaverine-like action.

SUMMARY

The β-adrenergic blocking potency ratio of D-69-12 relative to propranolol is one or more. D-69-12 in very high concentrations relaxed the smooth muscle preparations. This inhibitory action seems to be due to its papaverine-like action.

Fig. 3. Theoretical concentration-action curves of an agonist A (K_A = 10^-6) and of an equivalent mixture of the agonist A and its competitive antagonist B (K_B = 10^-8) in the presence and absence of a constant concentration (10^-7) of B. A = agonist A alone, B = agonist A with B (10^-7), C = equivalent mixture, D = equivalent mixture plus B (10^-7).
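Since Gaddum's equation underlies both the theoretical analysis and the pA2 estimates above, a small numerical sketch may be useful. The Python snippet below computes fractional receptor occupancy under the classical Gaddum relation, with K values taken from the Fig. 3 caption; the function names and the concentration grid are illustrative assumptions, not the paper's own computation.

import numpy as np

K_A, K_B = 1e-6, 1e-8  # dissociation constants as read from the Fig. 3 caption

def occupancy(A, B):
    # Gaddum's equation for competitive antagonism:
    # y = (A/K_A) / (1 + A/K_A + B/K_B)
    return (A / K_A) / (1.0 + A / K_A + B / K_B)

A = np.logspace(-9, -3, 7)   # agonist concentrations (illustrative grid)
for B in (0.0, 1e-7):        # curves without and with the antagonist
    print(B, np.round(occupancy(A, B), 3))

# The antagonist shifts the curve to the right in parallel; the dose
# ratio r at equal response gives pA2 via log10(r - 1) = log10(B) - log10(K_B).
B = 1e-7
r = 1.0 + B / K_B
print("pA2 =", np.log10(r - 1.0) - np.log10(B))  # equals -log10(K_B) = 8

The rightward parallel shift produced by B is exactly the behaviour used in the paper to extract pA2 values from the concentration-action curves.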
Digital ≠ paperless: novel interfaces needed to address global health challenges

Institute of Healthcare Management, Strathmore University Business School, Nairobi, Kenya
HealthENet Limited, Nairobi, Kenya
Health Care Management Department, University of Pennsylvania, Philadelphia, Pennsylvania, USA
University of Warwick, Warwick Medical School, Coventry, UK
Department of Pediatrics, Indiana University School of Medicine, Indianapolis, Indiana, USA
MARCH Centre, London School of Hygiene and Tropical Medicine, London, UK

INTRODUCTION

Health information systems (HISs) are considered a core component or building block of health systems. HISs are expected to support evidence-informed decision making at each level.1 However, there are two implicit and commonly held assumptions that are important to challenge: first, that information systems require information technology; and second, that information technology has no place for paper. While information systems are typically expected to involve the use of technology, the distinction between the 'information need' and the technology to support the need is important to consider, especially (but not only) in low/middle-income country (LMIC) health system contexts with diverse constraints to technology implementation and use. Not every 'information need' requires the use of information technology, and many goals like quality improvement (QI) of health services may be achievable without the additional complexity of technology implementation.2 3

GOING 'PAPERLESS' IS HARD

When the decision to use information technology is made, 'going digital' is commonly equated with 'going paperless'.4 5 Going paperless is challenging, especially for healthcare delivery in resource-limited settings.6 Two important hurdles to digital (including mobile) health in LMICs stand out: (a) costs and complexities around infrastructure (not only of devices like computers/tablets/phones, but also backup power systems, networking, support, maintenance and procurement), and (b) costs and complexities around training (of diverse health system actors, often repeatedly, on complex hardware, software and workflows). Beyond these challenges, the shift to direct digital data entry at the point of care has a negative impact on both patient and provider satisfaction with the interaction. These are undervalued aspects of quality beyond technical skill, as time taken for direct digital data entry replaces time for direct patient-provider conversation and aspects like eye contact and non-verbal observation.7

PAPER AS AN INTERFACE FOR INFORMATION ENTRY

Paper, however, continues to be a simple, versatile, accessible and commonly used 'interface' for clinical documentation. Paper has few of the infrastructural and training challenges linked to digital technologies, and has advantages including automatic hard copies that may allow sites to meet legal requirements for documentation retention and patient privacy more easily. It is also preferred by many clinicians for documenting direct patient consultations. However, use of paper-based information is time consuming and expensive, either requiring direct reading or transcription/extraction into computer systems that involves numerous steps and yields low data quality.8

Summary box
► Effective information systems do not always need to be accompanied by information technologies.
► Information technologies do not need to be 'paperless', and can benefit from the numerous advantages of paper-based information entry.
These have been the drivers for the move towards direct digital data entry, and the rapid proliferation and use of mobile devices have supported an assumption that paper-based documentation is incompatible with modern information systems. 9 10 However, in settings where infrastructure, resources and skills are constrained, low-tech approaches may still be best for certain tasks. We are making the case for more thoughtful, goal-directed use of paper in combination with available digital tools and user abilities. PAPER + DIGITAL One hybrid approach is to use the computational power of smartphones to automatically recognise information entered on paper. While handwriting recognition is still difficult to automate at accuracy levels needed for medical use, 11 the 'optical mark recognition' (OMR) approach is trusted by anyone familiar with shading circles to answer multiple-choice questions. Recent work demonstrates the combination of paper-based clinical documentation templates containing OMR fields with a computer vision algorithm on smartphones to automatically digitise patient records. 12 In this case, templates are printed using rubber stamps, a widely available and low-cost solution for printing on demand, and the algorithm generates digital data from a smartphone picture of the template (figure 1). The approach has demonstrated improvements in both clinical documentation 13 and care quality 12 with minimal infrastructure or training. While useful for capturing structured data, the approach does not support capture of narrative information, or continuous variables like heart rate. Further innovation could provide solutions as experience with the hybrid approach grows.
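To make the OMR step concrete, here is a minimal, illustrative Python sketch of fill-ratio bubble detection. It is not the published algorithm from the work cited above: the file name, bubble coordinates, radius and 0.5 threshold are all assumptions for illustration, and a real pipeline would first deskew and rectify the photographed template.

import cv2
import numpy as np

def read_marks(image_path, bubbles, radius=12, fill_threshold=0.5):
    # Return True for each (x, y) bubble centre whose filled-pixel
    # ratio exceeds the threshold in a binarised photo of the template.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Adaptive thresholding copes with the uneven lighting typical of phone photos.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)
    results = []
    for (x, y) in bubbles:
        mask = np.zeros(binary.shape, dtype=np.uint8)
        cv2.circle(mask, (x, y), radius, 255, thickness=-1)  # filled disc
        filled = cv2.countNonZero(cv2.bitwise_and(binary, mask))
        results.append(filled / cv2.countNonZero(mask) > fill_threshold)
    return results

# Hypothetical usage: bubble positions come from the known template layout.
# marks = read_marks("template_photo.jpg", bubbles=[(120, 340), (120, 380)])

The fill-ratio test is what makes OMR robust to pen type and handwriting variation: only the proportion of dark pixels inside each known circle matters, which is also why the approach needs little user training.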
FROM INFORMATION TO QUALITY To achieve high-quality care, however, a culture of quality must be fostered throughout the healthcare system. 'Information use' is an important element of such a culture, and is often a challenge in LMIC settings. 14 A culture of information acquisition and use is needed at all levels of the health system: in leaders (for setting goals and recognising performance), managers (for supervision and implementation) and individual health workers (in their own tasks of providing care). Underpinning such a culture shift will be the availability of high-quality, routine data. 15 The hybrid, paper-digital information ecosystem, as currently implemented in East Africa, 16 allows managers to routinely track and respond to individual provider performance, as well as examine trends across a facility or district in a learning or quality improvement network. This capitalises on the advantages of paper for rapid documentation of patient consultations and those of digital data for tasks like quality improvement or referral management. A routine HIS using low-cost, hybrid paper-digital approaches to information capture can improve equity of high-quality healthcare provision, ensuring that not only hospitals with the finances to afford expert clinical audits can support clinicians on the quality of services they provide, 17 and deliver opportunities for individual and system improvement. 18 Figure 1 'Paper-to-digital' electronic medical records. Smartphone screenshots show how paper-based templates for clinical documentation, in this example for hypertension (HTN) screening by a community health worker, are combined with a browser-based computer vision application that automatically recognises the filled circles seconds after taking an image of the template. The interface allows for visual confirmation of accuracy and quick editing (if needed), followed by syncing of data to a cloud server. THE ART AND SCIENCE OF MEASUREMENT IN INFORMATION SYSTEMS While a routine HIS using hybrid paper-digital approaches is likely to improve on what is currently being done around quality improvement in LMIC health systems, even such frugal innovations require investment. Therefore, evaluation of effectiveness and cost-effectiveness is required. Evaluation is challenging because of the inherent complexity of both the intervention and the health systems in which it is embedded. Any evaluation of the effectiveness or cost-effectiveness of a routine HIS would require combining relatively straightforward process improvements (eg, comparing individual provider behaviour with that expected from clinical guidelines) with health outcomes in target clinical areas. However, QI interventions like these often yield benefits in less tangible areas such as provider motivation, cohesion (team work), retention and resilience. 19 These are more difficult to quantify, and careful thought about study designs, as well as novel approaches, is likely needed to demonstrate the benefit of such interventions. 20 PAPER-FIRST, EVIDENCE-INFORMED DECISION MAKING Interventions generally work best when it is not too radical a leap for patients, healthcare providers and managers to make, and when systems are designed to accommodate specific contexts and constraints in LMIC health systems. The 'paper-first' approach, by relying on existing resources and practices, is one step in this direction. A further innovative leap in the paper-digital approach is the capture of digital data by simply taking a picture using commonly available, even personal, mobile devices. Taken together, information entry on paper and information capture by taking a picture mean that the paper-digital approach reduces several barriers to generating routine health information, such as infrastructure, training, power and stable internet. But ultimately, it is likely the familiarity with 'existing ways of doing things', like entering notes on paper or taking a picture on any mobile phone, that will see the paper-digital approach adopted at scale. The democratisation of information generation and use is likely to empower individuals and teams to drive change even in resource-constrained health systems. 21 These approaches should have appeal at the national health ministry level, especially when they can be designed and tracked to provide critical information to support universal health coverage (UHC) efforts, or linked with existing HIS infrastructure, such as DHIS2. 22 UHC planning and implementation are challenged by an underlying lack of actuarial data to support and assess system (re-)design. UHC Task Forces might be able to provide pilot funding to test novel, hybrid paper-digital approaches to meet their data needs. At the provider level, the appeal is a manageable level of staff training time and a seamless approach to activity that is the norm currently.
There is investor appeal by virtue of low entry and implementation costs, and a dramatic improvement in information to guide governance and further investment decisions. Further innovation along these lines needs support in the global health community; the investment needed is typically small, low risk and potentially high value. CONCLUSION It is imperative for the research, implementation and financing communities, both local and global, to embrace a concept of HISs that is not characterised as an inevitable, if slow, evolution from paper to digital, but as a thoughtful, context-sensitive integration of the two. Twitter Meghan Bruce Kumar @kumeghan Contributors PK was involved in the conception and drafting of the manuscript, and revising it critically for important intellectual content. SMS, JJM, SB and MBK were all involved in drafting the document and revising it critically for important intellectual content.
2021-04-22T06:18:57.777Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "f7a61039b1cd0a71950ab01acefaa89f837feb39", "oa_license": "CCBY", "oa_url": "https://gh.bmj.com/content/bmjgh/6/4/e005780.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5368448422f0eb0f748b3314ef17a22e5d85a4f8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
49482363
pes2o/s2orc
v3-fos-license
Overexpression of cytochrome P450s in a lambda-cyhalothrin resistant population of Apolygus lucorum (Meyer-Dür) The mirid bug, Apolygus lucorum Meyer-Dür, has been an important pest of cotton crops in China, and is primarily controlled with insecticides, such as pyrethroids. To elucidate the potential resistance mechanisms of A. lucorum to lambda-cyhalothrin, a series of biological, biochemical, and molecular assays were conducted in the reference (AL-S) and lambda-cyhalothrin-resistant (AL-R) populations. Comparison of the molecular target of pyrethroid insecticides, the voltage-gated sodium channel, revealed that there were no mutation sites in the resistant population, indicating that target insensitivity is not responsible for the increased resistance of AL-R to lambda-cyhalothrin. Furthermore, synergism assays were performed and the activities of detoxification enzymes were measured to determine the detoxification mechanisms conferring the lambda-cyhalothrin resistance. Among the tested synergists, piperonyl butoxide had the highest synergism ratio against lambda-cyhalothrin, up to five-fold in both populations. In addition, the results also showed that only cytochrome P450 had significantly higher O-deethylase activity with 7-ethoxycoumarin (1.78-fold) in the AL-R population compared with the AL-S population. Seven cytochrome P450 genes were found to be significantly overexpressed in the resistant AL-R population compared with the AL-S population. Taken together, these results suggest that multiple over-transcribed cytochrome P450 genes are likely involved in the development of lambda-cyhalothrin resistance in the AL-R population. Introduction The mirid bug Apolygus lucorum (Meyer-Dür) (Hemiptera: Miridae) had been a primary pest of cotton in northern China during the 1950s, but its population densities always remained low owing to the frequent application of synthetic insecticides against Lepidopteran pests [1]. However, since 1997, the widespread planting of transgenic Bacillus thuringiensis (Bt) crops has dramatically reduced insecticide use and thus spurred the emergence of mirid bugs as dominant pests in transgenic Bt cotton fields in China [2][3][4][5]. offspring for the next generation. The dose of lambda-cyhalothrin was increased from 0.7 ng/adult to 55 ng/adult during the selection process. Both populations were reared on sauteed green beans (Phaseolus vulgaris). The environmental conditions were set as 26 ± 1°C, 60 ± 5% relative humidity (RH) and a 16 h:8 h light:dark photoperiod. Toxicity bioassay and synergism assay The topical method was used to determine the level of resistance to lambda-cyhalothrin and the synergistic activity of detoxifying enzyme inhibitors [28]. Acetone was used as the solvent, and also as a control. Lambda-cyhalothrin was serially diluted to 4-7 different concentrations, with 3-4 replications of each concentration. Prior to pesticide application, more than 30 four-day-old A. lucorum adults were anaesthetized with carbon dioxide and placed on ice for each concentration group. A droplet (0.6 μL) of lambda-cyhalothrin was applied onto the dorsum (thorax) of each adult using a semi-automatic dropper (PB-600 PAT, 3161323, USA). After treatment, ten individuals per group were placed in a plastic box with a fresh green bean pod. Mortality was calculated after 24 h. For synergism assays, the synergists PBO, DEM, DEF, and TPP were dissolved in acetone and applied topically to the dorsal prothorax of adults of the AL-S and AL-R populations, as described above. The doses applied (30 ng of PBO, 60 ng of DEM, 60 ng of DEF or 60 ng of TPP per individual adult) caused no mortality in adults from either strain. After 1 h, the adults were treated with lambda-cyhalothrin as described for the topical bioassay. The LD50 values and slopes of the mortality/dose relationships were estimated by probit analysis with the computer program POLO-PC (LeOra Software, USA).
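As an illustration of the probit dose-mortality fit just described (the study itself used POLO-PC), a minimal Python sketch with statsmodels follows; the doses and mortality counts below are made-up numbers, not data from this study.

import numpy as np
import statsmodels.api as sm

dose = np.array([1.0, 3.0, 10.0, 30.0, 100.0])   # ng/adult, hypothetical
dead = np.array([2, 8, 15, 24, 29])
total = np.full(5, 30)

# Probit regression of mortality on log10(dose); y holds (dead, alive) counts.
X = sm.add_constant(np.log10(dose))
y = np.column_stack([dead, total - dead])
fit = sm.GLM(y, X, family=sm.families.Binomial(
    link=sm.families.links.Probit())).fit()

intercept, slope = fit.params
ld50 = 10 ** (-intercept / slope)  # dose where predicted mortality is 50%
print(f"slope = {slope:.2f}, LD50 = {ld50:.2f} ng/adult")

The LD50 follows from setting the linear predictor to zero, since the probit link maps zero to a mortality probability of exactly 0.5.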
The doses applied (30 ng of PBO, 60 ng of DEM, 60 ng of DEF or 60 ng of TPP per individual adult) caused no mortality in adults from both strains. After 1 h, the adults were treated with lambda-cyhalothrin as described for the topical bioassay. The LD 50 values and slopes of mortality/dose relationships were estimated by probit analysis with the computer program POLO-PC (LeOra Software, USA). Metabolic enzyme assays Protein content was measured with bovine serum albumin as the standard substrate using the method of Bradford [29]. Amplification and sequencing of sodium channel gene and cytochrome P450 genes Total RNA was isolated from the adults of A. lucorum (3-4 days old) using TRIzol reagent (Invitrogen, Carlsbad, CA) following the manufacturer's specifications. First strand cDNA was synthesized from total RNA using PrimeScript™ RT reagent Kit with gDNA Eraser (Perfect Real Time) (Takara, Dalian, China). To check for target mutation, a series of cDNA fragments of the para-sodium channel gene were amplified with the primers of the previous study [10]. At least 30 adult individuals were selected for sequencing in each population. Amplification of cytochrome P450 genes were performed with specific primers (S1 Table). The missing 3' and 5' ends of CYP genes were obtained from first strand cDNA with gene-specific primers (S1 Table) using a SMART™ RACE cDNA amplification kit (Clontech, USA). The full-length sequences of CYPs were then amplified using gene-specific primers (S1 Table). All PCR products were gel-purified, ligated into the pMD-18T vector (Takara, Dalian, China) and sequenced by Invitrogen (Shanghai, China). Real time quantitative PCR of A. lucorum P450 genes The clean reads and computationally assembled sequences about AL-S and AL-R populations were submitted to the Sequence Read Archive (SRA) database (Accession number: SRP149628). 101 cytochrome P450 genes with the mean length of 1259 nucleotides were found via transcriptome analysis. Differential expression data between AL-S and AL-R populations revealed that 8 P450 unigenes were significantly up-regulated and 41 unigenes downregulated. The transcription profiles of 11 selected P450 genes in the AL-S and AL-R populations were determined by real-time qPCR. Specific primers (supporting information S1 Table) were designed to amplify the A. lucorum P450 and β-actin gene (reference gene). Primer pairs were optimized and tested to ensure that they yielded unique amplification products and possessed similar amplification efficiencies. The amplification efficiency of each primer pair was estimated by using the equation E = 10 −1/slope , where the slope was derived from the plot of cycle threshold (C t value) versus amount of serially diluted template cDNA. QPCR was carried out using the ABI 7500 qPCR System with the Platinum SYBR Green qPCR SuperMix-UDG kit (Invitrogen, Carlsbad, CA). The optimized cycling conditions were 1 cycle of 2 min at 50˚C, 1 cycle of 2 min at 95˚C, and 40 cycles of 15 s at 95˚C and 30 s at 60˚C followed by a product dissociation stage (Applied Biosystems 7500). To check reproducibility, each qRT-PCR assay was performed in triplicate, and samples were repeated three times, each with a new preparation of total RNA. The relative transcript levels for each P450 gene in each population were calculated by the 2 −ΔΔCt method [30]. Phylogenetic analysis Phylogenetic analysis was conducted in order to investigate evolutionary relationships among the putative P450 proteins identified in A. 
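The two calculations just described, primer efficiency from a dilution-series standard curve and relative expression by 2^(−ΔΔCt), are short enough to sketch in a few lines of Python; every Ct value below is invented purely for illustration.

import numpy as np

# Standard curve: Ct versus log10 of serially diluted template amounts.
log_template = np.log10([100.0, 10.0, 1.0, 0.1])
ct = np.array([18.1, 21.5, 24.9, 28.3])          # hypothetical Ct values
slope = np.polyfit(log_template, ct, 1)[0]
efficiency = 10 ** (-1 / slope)                  # ~2.0 means 100% efficiency

# 2^-ΔΔCt: target P450 normalised to beta-actin, resistant vs reference.
dct_resistant = 22.0 - 18.5   # Ct(P450) - Ct(actin) in AL-R (hypothetical)
dct_reference = 25.1 - 18.6   # Ct(P450) - Ct(actin) in AL-S (hypothetical)
fold_change = 2 ** -(dct_resistant - dct_reference)
print(f"E = {efficiency:.2f}, relative expression = {fold_change:.1f}-fold")

With these invented numbers the standard-curve slope is close to the ideal −3.32, giving E ≈ 2, and the target gene comes out 8-fold overexpressed in the resistant population.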
Phylogenetic analysis Phylogenetic analysis was conducted in order to investigate evolutionary relationships among the putative P450 proteins identified in A. lucorum and selected proteins from other insects. Multiple sequence alignment was performed using the alignment program Clustal W in MEGA version 5.1 [31]. Tree construction was performed using the neighbor-joining method in MEGA version 5.1 [31]. The reliability of the trees was evaluated using the bootstrap procedure with 1000 replications.
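For readers who prefer a scriptable equivalent of the Clustal W plus MEGA workflow, a minimal neighbour-joining sketch with Biopython is given here; "cyp_alignment.fasta" is a placeholder name for an already-aligned protein FASTA file, and the identity distance is a simple stand-in for MEGA's distance models.

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("cyp_alignment.fasta", "fasta")  # pre-aligned sequences
calculator = DistanceCalculator("identity")               # pairwise identity distances
constructor = DistanceTreeConstructor(calculator, "nj")   # neighbour-joining method
tree = constructor.build_tree(alignment)
Phylo.draw_ascii(tree)

Bootstrap support of the kind reported in the study could be added on top of this with Bio.Phylo.Consensus.bootstrap_trees over resampled alignment columns.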
Statistical analysis Data were expressed as mean ± standard error (SE) from triplicate experiments. The difference in expression level of each CYP gene between the AL-S and AL-R populations was determined by Student's t-test, using SPSS for Windows (SPSS, Chicago, IL, USA). One-way ANOVA with Tukey's multiple comparison test was used to compare the relative expression of CYP6X2 with and without induction by lambda-cyhalothrin, using GraphPad Prism version 5.0 (GraphPad software, San Diego, CA, USA). Lambda-cyhalothrin resistance dynamics The dynamics of lambda-cyhalothrin toxicity against A. lucorum over successive lambda-cyhalothrin-selected generations were determined via bioassay (Table 1). The LD50 value changed from 0.74 ng/adult in the F0 generation to 54.09 ng/adult in the F11 generation. The resistance ratio (RR) of the AL-R population to topical application of lambda-cyhalothrin increased up to 74-fold after selection for 11 generations. However, the LD50 value of the AL-S population also increased, to 9.13 ng/adult, during long-term maintenance (data shown in Table 2). Hence, the net resistance ratio (RR) of the AL-R population was only 5.9-fold compared to the AL-S population. Synergistic effects on the toxicity of lambda-cyhalothrin The effect of synergists on lambda-cyhalothrin toxicity in the AL-S and AL-R populations was determined by bioassays (Table 2). TPP had no synergistic effect on lambda-cyhalothrin toxicity in the AL-S population but did show synergism in the AL-R population. Similar synergistic potential of PBO and DEM towards lambda-cyhalothrin was observed in the two populations. The synergistic ratio of DEF to lambda-cyhalothrin was 1.8 and 2.6 for the AL-S and AL-R populations, respectively. Detoxifying enzyme activity The activities of the detoxifying enzymes CarE, GST and P450 in the AL-S and AL-R populations were compared (Fig 1). The O-deethylase activity of P450 towards the formation of 7-hydroxycoumarin (ECOD) was significantly higher (1.78-fold) in the AL-R population than in the AL-S population, suggesting that lambda-cyhalothrin resistance in the AL-R population is potentially conferred by increased P450 activity. Comparison of the para-sodium channel gene The complete ORF sequence of the para-sodium channel was compared between the AL-S and AL-R populations. Through sequence comparison, no nucleotide mutation was found in the whole ORF. It was speculated that target site insensitivity did not account for the lambda-cyhalothrin resistance in the AL-R population. Relative expression of CYP genes in adult mirid bugs The relative expression of genes from the CYP4 and CYP3 clans in adults from the AL-R population was determined by qPCR and compared with the expression in the AL-S population (Table 3). Among the 11 tested CYP genes, CYP6HM1, CYP6HM2, CYP6JB1, CYP6JB2, CYP6JC1, CYP6X2 and CYP395H1 had significantly higher expression levels in the AL-R population as compared to the AL-S population. The induction of CYP6X2 expression was also analyzed by exposing adults of the AL-S and AL-R populations to a topical droplet containing 9 and 70 ng of lambda-cyhalothrin, respectively. The results also showed that CYP6X2 was similarly induced in both the reference (1.86-fold) and the resistant (1.54-fold) populations (Fig 2). Significant differences were again found in CYP6X2 expression levels when comparing non-treated AL-S and AL-R populations (20.6-fold), in accordance with the above-mentioned results. Characterization of full-length CYPs The characteristic parameters of the obtained full-length CYPs are listed in Table 4. As shown in S1 Fig, the translated proteins of the CYPs possess the characteristic conserved domains, including the oxygen-binding motif (helix I) ([A/G]GX[E/D]T[T/S]), the helix K motif (EXXRXXP), the heme-binding "signature" motif (PFXXGXXXCXG) and a sequence motif (PXXFXP) specific to CYP6 members. The results indicated that these CYPs belong to typical microsomal P450 clades. The phylogenetic tree, generated from aligned amino acid sequences of CYPs, revealed that these CYPs are closely related to those of families CYP4 and CYP6 of other invertebrate species (Fig 3). Discussion The control of mirid bugs in Bt-transgenic cotton crop fields is executed almost entirely by spraying chemical insecticides worldwide. Over-utilization of and long-term exposure to insecticides has induced resistance in mirid bugs. Insecticides currently approved for mirid bug control are pyrethroids and organophosphates. For A. lucorum, some cases of pyrethroid resistance have been reported in the Yellow River basin of China [10][11][12]. For the tarnished plant bug L. lineolaris, resistance to pyrethroid insecticides occurred in the mid-south cotton-growing areas of the USA [24]. Hence, it is necessary to elucidate the potential reasons for the development of pyrethroid resistance in mirid bugs. Pyrethroid resistance mechanisms are usually complex, depending mainly on the pest, the field environment, and insecticide application. A previous study of pyrethroid resistance in A. lucorum found an association between target site insensitivity due to a substitution (L1015F) and pyrethroid resistance [10]. Another study involving Lygus species found that resistance was correlated with increased activity of P450 detoxifying enzymes [16]. In the present study, no mutation was found in the para-sodium channel of the AL-R population, suggesting that target insensitivity is unlikely to be involved in lambda-cyhalothrin resistance development. It is common for one or multiple mechanisms to underlie resistance in different pyrethroid-resistant populations of the same insect species, owing to differing insecticide selection pressures. Xu et al. found that the synergism of PBO with lambda-cyhalothrin in A. lucorum was marked, with a synergism ratio of up to 7.2 compared with the other three types of insecticides [32], which was consistent with the significant synergistic effects of PBO on lambda-cyhalothrin that we observed in both the AL-S and AL-R populations. A likely explanation for the clear synergism of PBO in the AL-S population is that the susceptibility of the AL-S population to lambda-cyhalothrin decreased distinctly during long-term rearing on P. vulgaris food containing pesticide residues. Further biochemical assays confirmed that the resistant individuals indeed had higher levels of P450 activity compared with reference individuals. This evidence points to a P450-mediated metabolic resistance mechanism involved in the lambda-cyhalothrin resistance of the AL-R population.
Nevertheless, other metabolic mechanisms, such as glutathione S-transferase- and esterase-mediated metabolism, should not be excluded, because the synergists DEM, DEF, and TPP also increased the toxicity of lambda-cyhalothrin in the AL-R population. This phenomenon was similar to the enhanced-detoxification (rather than target insensitivity) mechanism found in deltamethrin-resistant L. striatellus [33]. In the Order Hemiptera, a variety of studies have documented pyrethroid resistance associated with P450s [24,25,33,34], esterases [33,[35][36][37] and glutathione S-transferases [38,39]. Based on the previous transcriptome analysis, 49 P450 unigenes were differentially expressed between the resistant and reference populations, including 8 P450 unigenes up-regulated and 41 unigenes down-regulated. The expression patterns of the 8 up-regulated P450 unigenes and 3 insecticide-resistance-related P450 genes were further analyzed via qPCR. Our results showed that CYP6HM1, CYP6HM2, CYP6JB1, CYP6JB2, CYP6JC1, CYP6X2, and CYP395H1 are expressed at markedly higher levels in the AL-R population than in the AL-S population (Table 3). All seven elevated P450s belong to the CYP6 family. The CYP6 family has been found to be involved in insecticide resistance more frequently than any other P450 family [40]. For example, CYP6X1 in L. lineolaris was associated with pyrethroid resistance [24], and our CYP6X2 is highly similar to CYP6X1 of L. lineolaris (up to 82% amino acid sequence identity). CYP6AY3v2 in Laodelphax striatellus (Fallén) was associated with deltamethrin resistance [33], while our CYP395H1 showed 34% similarity with CYP6AY3v2 of L. striatellus. CYP6F1 has been linked to pyrethroid resistance in C. quinquefasciatus [41], and CYP6A51 to lambda-cyhalothrin resistance in Ceratitis capitata [42]. Moreover, inducibility by insecticide is a typical characteristic of some P450 genes involved in insecticide resistance [43][44][45][46]. In our case, the expression of CYP6X2 was also induced in both the AL-R (1.54-fold) and AL-S (1.86-fold) populations when adults were treated with a dose of lambda-cyhalothrin equivalent to their corresponding LD50 values. Therefore, we hypothesize that the CYP6X2 gene may play a relevant role in the resistance of the AL-R population to lambda-cyhalothrin through over-expression of a lambda-cyhalothrin-inducible gene. However, overexpression of P450 genes does not necessarily correlate with insecticide resistance [47]. Further work is needed to demonstrate unequivocally the role of CYP6X2 in resistance to lambda-cyhalothrin, including the metabolism of lambda-cyhalothrin by the recombinant CYP protein. Elevated expression of P450 genes in resistant insects may be achieved through increased transcription caused by mutations/insertions/deletions in cis-acting promoter sequences [17]. There has been a report of the insertion of a 15 bp fragment close to the transcription start site (−15 to −29) in the 5'-flanking region of the CYP6D1 gene in permethrin-resistant strains of M. domestica, which was absent in susceptible strains [48]. Therefore, comparison of the 5'UTR and promoter sequences is necessary for identifying regions responsible for the up-regulation of CYP6X2. Conclusions The metabolic resistance mediated by P450 appears to be the main resistance mechanism in the resistant AL-R population.
Although our data cannot firmly establish that up-regulation of the seven identified detoxification genes is associated with the observed lambda-cyhalothrin resistance, they certainly provide a solid basis for future functional studies of the encoded proteins and for confirmation of the resistance mechanism. Supporting information S1 Fig. Full-length mRNA and amino acid sequences of A. lucorum CYPs. Conserved amino acid domains common to cytochrome P450s are highlighted as follows: the helix I, helix K, PERF and heme-binding motifs are shaded in yellow, blue, grey and purple, respectively. (A) CYP6X2, (B) CYP6JB1, (C) CYP6HM1, (D) CYP6JC1, (E) CYP6HM2, (F) CYP6JB2, (G) CYP395H1. (RAR) S1 Table. Sequences of primers used in this study. (DOCX)
2018-07-05T00:30:48.509Z
2018-06-27T00:00:00.000
{ "year": 2018, "sha1": "7681d0da36aa9c156e6d25b871084aa21765b73e", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0198671&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7681d0da36aa9c156e6d25b871084aa21765b73e", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
261704273
pes2o/s2orc
v3-fos-license
Lower mortality risk in APOE4 carriers with normal cognitive ageing Abnormal cognitive ageing, including dementia, poses serious challenges to health and social systems in ageing populations. As such, characterizing factors associated with abnormal cognitive ageing and developing needed preventive measures are of great importance. The ε4 allele of the Apolipoprotein E gene (APOE4) is a well-known genetic risk factor for late-onset Alzheimer's disease. APOE4 carriers are also at elevated risk of cardiovascular diseases, which are associated with increased risk of cognitive impairment. On the other hand, APOE4 is known to be associated with reduced risk of multiple common types of cancer, a major age-related disease and leading cause of mortality. We conducted the first-ever study of APOE4's opposing effects on cognitive decline and mortality using competing risk models considering two types of death: death with high versus low amounts of autopsy-assessed Alzheimer's neuropathology. We observed that APOE4 was associated with decreased mortality risk in people who died with low amounts of Alzheimer's-type neuropathology, but APOE4 was associated with increased mortality risk in people who died with high amounts of Alzheimer's-type neuropathology, a major risk factor of cognitive impairment. Possible preventive measures for abnormal cognitive ageing are also discussed. Results Sample characteristics. This study used the National Alzheimer's Coordinating Center (NACC) data repository, which is composed of data submitted from 45 NIA Alzheimer's Disease Research Centers (ADRCs) from across the US. As of March 2023, the NACC database included 45,998 individuals recruited since 2005. More details about sampling and inclusion/exclusion criteria are described in the Methods section. The demographic and clinical characteristics for subjects in the two competing risk groups and censored subjects are summarized in Table 1. As shown in Table 1, the competing risk group with low amounts of AD neuropathology at death (DeadLowADnp) had younger baseline ages, a lower proportion of females, less cognitive impairment at baseline, and shorter study follow-up time compared to the competing risk group with high amounts of AD neuropathology (DeadHighADnp). Censored subjects had younger baseline ages, higher proportions of females and of people who identified as non-White or Hispanic, as well as less cognitive impairment at baseline, compared to both the DeadLowADnp and DeadHighADnp groups. Censored subjects had longer study follow-up times compared to the DeadLowADnp group but had similar follow-up times to the DeadHighADnp group. The APOE allele distributions for subjects in the two competing risk groups and censored subjects are shown in Table 2. These APOE4 allele distributions are very similar to reported distributions from studies of the general population. In particular, the APOE4 prevalence in the DeadLowADnp group of 21% was similar to the estimated 23% in non-demented elderly populations 8, and the 56% prevalence in the DeadHighADnp group was similar to the average population estimate of 58% for people with autopsy-assessed AD 7. The APOE4 prevalence in the censored subjects was 40%, which was higher than in the DeadLowADnp group and lower than in the DeadHighADnp group. This was not surprising since this group contains a mix of subjects, some of whom will eventually die with high amounts of AD neuropathology and others with low amounts.
Autopsy propensity score model. As the first step in our primary analysis, a logistic regression model was used to derive autopsy propensity scores to reduce potential bias related to consent for brain autopsy donation. Five variables were selected to be included in the model using backward stepwise selection with cross-validation. Having a self-identified race of White (compared to non-White) was the strongest predictor of autopsy participation (OR = 2.73, 95% CI 2.38-3.12), followed by living in a senior community (independent, assisted living, or nursing facility vs. private residence, OR = 1.68, 95% CI 1.49-1.89), participation in a research study (OR = 1.48, 95% CI 1.35-1.63), higher Clinical Dementia Rating (CDR®) Dementia Staging Instrument sum of boxes (OR = 1.44, 95% CI 1.37-1.51), and years of education (OR = 1.26, 95% CI 1.20-1.31). APOE4 was not a significant predictor of autopsy propensity, either univariately or as a covariate added to the final propensity-score model. This logistic regression model was used to predict autopsy propensity scores for all subjects in our study. The predicted scores were then broken into discrete percentile categories. The second step in our primary analysis was to incorporate these autopsy propensity score percentile categories as strata in the stratified competing risk survival models.
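A minimal sketch of this two-step construction in Python follows; the file and column names are hypothetical stand-ins for the NACC variables, and the study itself fitted the model in R (package "rms").

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nacc_subjects.csv")  # placeholder file name

# Step 1: logistic autopsy-propensity model, fitted among deceased subjects,
# with the five predictors reported above (column names are assumptions).
model = smf.logit(
    "autopsy ~ white + senior_community + research_study"
    " + cdr_sum_boxes + educ_years",
    data=df[df["dead"] == 1],
).fit()

# Step 2: predict a score for everyone (living and dead) and cut it into
# J = 20 percentile categories, later used as strata in the survival models.
df["pscore"] = model.predict(df)
df["pstratum"] = pd.qcut(df["pscore"], 20, labels=False, duplicates="drop")

Stratifying on the propensity percentile, rather than adjusting on the raw score, lets each stratum carry its own baseline hazard in the stratified Cox-type models described below.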
2).The pattern of the cause-specific hazard ratio (CSHR) and the SHR from the www.nature.com/scientificreports/Significant baseline covariates in the competing risk survival models included sex, functional independence, cognitive impairment, and global CDR score.In both competing risk groups, reduced mortality risk was associated with females and being functionally independent, cognitively unimpaired, or having lower global CDR scores.There were no significant three-way or higher-order interactions in the multivariable competing risk models.The β coefficients and p-values for the multivariable models including all two-way interactions that had significant (p < 0.05) CSHR and SHR are shown in Table 3.In particular, in the DeadHighADnp competing risk group, there were significant two-way interactions of APOE4 with age (SHR = 0.977, 95% CI 0.966-0.989;CSHR = 0.986, 95% CI 0.979-0.993)and with female sex (SHR = 0.850, 95% CI 0.739-0.977;CSHR = 0.744, 95% CI 0.652-0.849).In the DeadLowADnp competing risk group, there was a significant interaction between APOE4 and cognitive impairment (SHR = 0.401, 95% CI 0.314-0.512;CSHR = 0.433, 95% CI 0.337-0.556).In sensitivity analysis, the direction of the main findings for APOE4 remained the same with or without including censored subjects, adjusting for covariates, or using the autopsy propensity scores as a stratifying variable. Competing risk survival models were conducted within each individual Alzheimer's Disease Research Centers (ADRC's) from the NACC data repository, to check for stability and consistency across study sites.Since the www.nature.com/scientificreports/sample sizes were smaller in the individual ADRC, we used univariate competing risk survival models, with only APOE4 as an independent variable, to keep the number of parameters low and statistical power high.Of the 45 ADRC in NACC, 42 had autopsy data on at least one person.Of those 42 ADRC, 17 ADRC's which had 165 or more study subjects with autopsy data (representing 81% of the total autopsied cases) had statistically significant SHR's that were all in the same direction as our overall SHR finding (Fig. 2).The rest of the ADRCs had fewer than 164 subjects with autopsy data and/or missing other key data thereby yielded insufficient power to reach statistical significance for SHR in the competing risk models.The adjusted SHR for APOE4 were further evaluated in multivariable competing risk survival models among each of the sub-groups or strata formed by several key variables: baseline age (split at the median), sex, baseline cognitive impairment, race, and ethnicity to check for stability of the APOE4 HRs and consistency among these sub-groups.These models included APOE4 as the main explanatory variable and all covariates apart from the stratifying variable.The fitted multivariable models among each of the sub-groups or strata split by baseline age, sex, and baseline cognitive impairment as well as the significant two-way interaction terms.The adjusted hazard ratios in the multivariable competing risk analysis within each stratum are consistent with the same general pattern.The 95% CI's of the SHR's were below one for the APOE4 carriers in the DeadLowADnp group and above one for the APOE4 carriers in the DeadHighADnp group across all sub-groups for each stratification variable (Fig. 
In particular, when broken down further by race/ethnicity, the SHRs were consistently below one in the DeadLowADnp group in the Black or African American, Asian, and non-White Hispanic sub-samples, although some had insufficient power to reach statistical significance. Discussion In this study, we observed that APOE4 carriers with low amounts of AD neuropathology at death had decreased mortality risk compared to non-carriers, but APOE4 carriers with high amounts of AD neuropathology at death had increased mortality risk. Although self-identified race was the strongest predictor of consent for a brain autopsy, our main HR findings for APOE4 were consistent in the non-White and Hispanic sub-samples. Similarly, all of the results from the univariate competing risk analyses in the individual Alzheimer's Disease Research Centers were in the same direction as our main findings on the HRs of APOE4. Importantly, the data from different centers were independent samples, so the consistency in the direction of the subdistribution hazard ratios across different centers indicates that our results were not swayed by strong effects from a few particular centers or sub-groups, or by an aberrant occurrence in the dataset. Previous studies have reported that APOE4 carriers as a whole have increased all-cause mortality risk [25][26][27][28]. While our results for the competing risk group with high amounts of AD neuropathology agree with the mortality risk results of these previous studies, our observed association of decreased mortality risk in APOE4 carriers with low amounts of AD neuropathology at death is contradictory to the reported findings of the existing studies. Note that these previous studies focused only on all-cause mortality and did not consider competing risks, which thereby led to severe bias in their findings on one of the two competing risk groups. Although many APOE4 carriers develop AD and its related neuropathology, about 23% of non-demented elderly people globally are APOE4 carriers 8. Thus, our observed result of decreased mortality risk in APOE4 carriers with low amounts of AD neuropathology could impact a sizeable portion of the general population. At the population level, cognitive ageing and mortality risk associated with APOE4 naturally depend on a trade-off with other leading causes of death in elderly people. Of particular importance are major age-related diseases such as cardiovascular diseases (e.g., heart attack, stroke, heart failure, coronary artery disease, ischemic heart disease) and cancers 10. Numerous studies have shown that, in addition to an elevated risk for AD, APOE4 carriers are at increased risk of cardiovascular diseases (CVD) [11][12][13][14][15] and decreased risk for major types of cancers and many other diseases [19][20][21]32. A number of studies have shown how these pleiotropic effects of APOE4 can be beneficial to some APOE4 carriers and detrimental to others 11,[32][33][34]. Notably, in a prospective cohort study of 3,924 participants of the Framingham Heart Study Offspring cohort, Kulminski et al. observed that APOE4 was antagonistically associated with onsets of CVD and cancer, where APOE4 carriers were predisposed to an earlier onset of CVD and a delayed onset of cancer compared to non-carriers 11. In a phenome-wide association study of the UK Biobank data, Lumsden et al.
found that APOE4 carriers were at increased odds for AD and ischemic heart disease, and decreased odds for gallbladder disease and liver disease, compared to non-carriers 32. Opposing effects of APOE4 on neurodegeneration were observed in a mouse model by Hudry et al., who hypothesized that APOE4 may be neurotoxic during early stages of amyloid deposition in the development of AD, but may be neuroprotective in later stages of ageing 33. These reported pleiotropic effects of APOE4 could have contributed in several ways to the differences in mortality risk we observed among APOE4 carriers with low versus high amounts of AD neuropathology at death. There are many studies which have found that APOE4 carriers have elevated risk for CVD (either early or late onset) [11][12][13][14][15], which has been the number-one leading cause of death globally for decades 9. In the NACC dataset, APOE4 carriers have an elevated risk of clinician-reported myocardial infarction among people with a concurrent AD diagnosis (n = 4,569) but not in those without an AD diagnosis (n = 13,009) at the baseline assessment of clinician-reported comorbidities in the past 12 months. These results indicate that among APOE4 carriers in the competing risk group with high amounts of AD neuropathology, there is higher risk of CVD-related death, which directly supports our finding of elevated mortality risk in this group. The two-way interaction between APOE4 and sex that we observed in subjects with high amounts of AD neuropathology may also be related to death with CVD. Since men and APOE4 carriers are at greater risk for CVD-related death than women and non-carriers of APOE4 9,15, the combined risk for men who are APOE4 carriers could make that sub-group particularly vulnerable to CVD-related death. The association of APOE4 with early-onset CVD as a cause of death 11 could also be a contributing factor to our findings on mortality risk for APOE4 carriers with low AD neuropathology. First, people with early-onset CVD would be more likely to die, or be too unhealthy, at a younger age to participate in an Alzheimer's disease study at NACC. This would decrease the observed hazard of death with early-onset CVD among NACC participants, particularly among APOE4 carriers, who are more susceptible to early-onset CVD. Second, people with a nonfatal CVD event, or those with a family history of CVD, would be more likely than not to be taking medication or preventative measures to treat or delay the onset of CVD. This may not only decrease the hazard of death with early-onset CVD among NACC participants, but these medications and preventative measures may also provide some protection from cognitive decline or accumulation of AD neuropathology, according to multiple published studies [35][36][37][38]. Thus, APOE4 carriers in the competing risk group with low amounts of AD neuropathology may have lower risk of CVD-related death than the general population, which could have contributed to the decreased mortality risk for people in this group. APOE4 carriers with low AD neuropathology may also benefit from potential protective effects against cancer, which is the second leading cause of death 10. In addition to the trade-off described by Kulminski et al.
with APOE4 postponing cancers to older ages 11, other studies have also reported that APOE4 carriers have lowered mortality risk for some common cancers, including melanoma 19, colon cancer 20, and colorectal neoplasm 21. These results are consistent with the NACC data, where APOE4 was associated with being cancer-free at the baseline assessment of clinician-reported diagnosis of cancer in the past 12 months. Considering these results, one could expect that among those APOE4 carriers with low amounts of AD neuropathology, the risk for cancer is also reduced compared with the general population, and therefore contributes to decreased mortality risk in this group. In addition to the trade-off of risk with age-related diseases, the theory of antagonistic pleiotropy might also be considered in relation to mortality and APOE4 39,40. In the theory of antagonistic pleiotropy, genes that are related to detrimental effects in ageing persist in the population because they contribute to fertility and/or have beneficial survival effects in early life. APOE4 is believed to promote inflammation as an innate immune response to infections in early life, but this beneficial inflammation process may become problematic later in life as people become more prone to age-related diseases at older ages. A number of studies have linked APOE4 to enhanced fertility 40,41 and better outcomes related to infectious diseases [42][43][44][45], particularly in early life. It is possible that these same protective mechanisms continue to protect APOE4 carriers if they do not become susceptible to developing age-related diseases and AD neuropathology. This would contribute to the lowered mortality risk among this group of APOE4 carriers with low AD neuropathology. Reported interactions between APOE4 and other genes which provide protection against shortened lifespans 46 or reduce susceptibility to AD neuropathology 47,48 present interesting opportunities for future explorations of potential mechanisms which could lead to decreased mortality risk among APOE4 carriers with low amounts of AD neuropathology and abnormal cognitive ageing. Lin et al. observed interactions between APOE4 and Wnt signaling genes, where there was a pro-longevity effect of rare coding variants in the Wnt signaling pathway for APOE4 carriers 46. A study by Belloy et al. reported that interactions between APOE4 and Klotho (a longevity gene) were associated with reduced AD risk and amyloid pathology burden in a subset of APOE4 carriers 47. However, Chen et al. indicated that the interaction between APOE4 and Klotho may not only confer reduced AD risk and slowed progression in the early stages of the disease for APOE4 carriers, but may also significantly slow cognitive decline for non-carriers in later disease stages 49. In a mouse model, Tachibana et al.
observed an interaction between APOE4 and LRP1 (a protein-coding gene) where mice with LRP1 and APOE4 had increased amyloid pathology, but LRP1-knockout mice with APOE4 did not have increased amyloid pathology 48. It is possible that interactions of APOE4 with some combination of these other genes could provide protection against cognitive decline and the accumulation of AD neuropathology, thereby reducing mortality risk among APOE4 carriers. Genetic data from the National Institute on Ageing Genetics of Alzheimer's Disease Data Storage Site for the NACC cohort is a valuable resource for future studies to provide insight into potential genes which interact with APOE4 and are associated with the results of our study. The process of abnormal cognitive ageing associated with the excess accumulation of AD neuropathology has devastating effects on the quality of life of patients and has a detrimental impact on mortality risk and lifespans. Considering that APOE4 is not rare, our findings on differential mortality risks in APOE4 carriers are relevant to both lifespans and quality of life for a sizable portion of the general population. Moreover, our findings open up interesting opportunities for future studies on factors which could protect APOE4 carriers and other vulnerable sub-groups against abnormal cognitive ageing and prolong their lives. Given that there is no cure nor effective treatment for abnormal cognitive ageing from the accumulation of AD neuropathology, it is of interest to investigate potential alterable lifestyle measures that might reduce the risk of late-onset AD among vulnerable subpopulations. Our findings have significant implications for the possibility of reducing mortality risk through preventing the accumulation of AD neuropathology among APOE4 carriers, and possibly other vulnerable subpopulations as well. Preventive lifestyle measures for CVD (e.g., smoking abstinence or cessation, regular exercise, healthy diet, improved sleep) have been shown to have beneficial effects on cognition and other functions of the brain [50][51][52][53]. Some commonly used dietary supplements such as omega-3 and fish oil products have been shown to have protective effects for CVD, as well as for AD and its related neuropathology 38. There are many studies of commonly used FDA-approved medications for hypertension 35, heart disease, and diabetes (e.g., metformin) 37 that have shown benefits for reduced risk of AD neuropathology and cognitive decline. We believe that the two-way interaction we observed between APOE4 and baseline cognitive impairment among people with low amounts of AD neuropathology may be influenced by these types of preventative measures. Specifically, APOE4 carriers are more likely to have a family history of AD and/or CVD, which could make them more proactive about changes in their cognition and overall health. Altogether, it is warranted to further study and optimize these preventative measures, dietary supplements, and medications intended to treat or prevent CVD or other age-related diseases, so that some combinations may become effective in protecting against cognitive decline, reducing mortality risk, and prolonging health spans and lifespans in vulnerable sub-groups.
A major limitation of our study is that the NACC data is not a random sample, with the majority of enrollees identifying as non-Hispanic White. Thus, our findings may not extend beyond non-Hispanic White populations. Autopsies were performed in a non-random sub-sample of all the dead subjects, and we account for this potential participation bias by using the autopsy propensity scores. However, there may also be further bias related to the high proportion of NACC enrollees with APOE4 or other perceived AD risk factors. Conversely, APOE4 carriers who may be less susceptible to developing AD neuropathology (e.g., those without a family history of AD) could be underrepresented in the NACC data, since they may not be aware of carrying APOE4. Further research on characterizing APOE4 and other factors associated with abnormal cognitive ageing, and on developing needed preventive measures, is warranted. Sample. The data for this study came from the National Alzheimer's Coordinating Center (NACC) data repository, which is composed of data submitted from 45 past and presently active NIA Alzheimer's Disease Research Centers (ADRCs) from across the US. All studies and all experimental protocols from every ADRC were approved by their respective Institutional Review Board prior to study initiation. Written informed consent was obtained for all study participants at each ADRC. All studies and methods at all participating ADRCs were conducted in accordance with the principles expressed in the Declaration of Helsinki. In short, local Institutional Review Boards (IRBs) approve ADRC research activities. NACC data are de-identified, and research involving the NACC database is approved by the University of Washington IRB. As of March 2023, the NACC database included 45,998 individuals recruited since 2005. All of our analyses excluded individuals who were missing APOE genotyping (n = 11,565). People who were dead but did not have a brain autopsy (n = 3,894) were compared to those that had a brain autopsy (n = 5,858) in developing the autopsy propensity model. People who were dead but did not have a brain autopsy (used only for developing the propensity score model), and 112 individuals who had a brain autopsy but were missing key data for group characterization, were excluded from the competing risk survival models. The remaining 5,746 individuals with the key brain autopsy data were divided into two competing risk groups: death with low amounts of AD neuropathology (DeadLowADnp, n = 1,889) and death with high amounts of AD neuropathology (DeadHighADnp, n = 3,857). Individuals without reported death (n = 24,681) were also included, censored at their last clinical visit, in the competing risk survival models.
Autopsy assessment. Alzheimer's disease neuropathology assessed at autopsy in the NACC data before 2014 included neurofibrillary tangles (Braak stages) and neuritic plaques (CERAD scores) 30,31. At that time, the NACC data also included a primary neuropathological diagnosis (e.g., AD, Lewy body dementia, vascular dementia). In 2014, the autopsy diagnosis was replaced by the ABC score, which was added to the NACC data along with the assessment of amyloid plaques (Thal phases) 30. The ABC score provides a standardized quantification based on the Thal phases (score A), Braak stages (score B) and CERAD scores (score C) 31. In this study, both the earlier method and the ABC scores were used to quantify the general amount of AD neuropathology at death as either low (DeadLowADnp) or high (DeadHighADnp). The DeadLowADnp competing risk group included individuals who died with lower amounts of Alzheimer's neuropathology, characterized by an autopsy-assessed ABC score of none or low, a non-AD primary autopsy diagnosis, a B score of 0 or 1 but missing A and/or C score, or an A score of 0 or 1 along with a C score of 0 or 1 but missing B score. The DeadHighADnp competing risk group included individuals who died with higher amounts of Alzheimer's neuropathology. Statistical analysis-autopsy propensity model. A logistic regression model predicting autopsy participation among all dead subjects was formed to derive an autopsy propensity score. Forward/backward selection with cross-validation (R package "rms" v6.3-0) was used for variable selection from the demographic and clinical variables. We tested APOE4 as a predictor both univariately and in the final propensity model with the selected covariates, to check if it appeared to be randomly distributed between people who did and did not have autopsies. The predicted autopsy propensity scores from the final model were generated for all individuals in the study (living and dead) and divided into J = 20 discrete percentile categories. The stratified competing risk survival models included these 20 autopsy propensity score categories as model strata. Statistical analysis-competing risks survival models. Age at death was used as the time to the outcome events (death with low vs. death with high AD neuropathology), and age at the last clinical evaluation was treated as the censoring time for living individuals. APOE4 (with non-carriers as the reference group) was the main explanatory variable in all competing risk survival models, as it was the main effect of interest. Baseline age was included as a covariate, and all other demographic and clinical covariates were selected with the forward/backward variable selection method. We further tested two-way and three-way interactions between all covariates. Backward selection was used in the model of the CSHR to select significant interactions between covariates. The selected interactions were then tested in the Fine and Gray model of the SHR. Only interactions that were significant in both the CSHR and SHR were retained in the analysis reported in Table 3.
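To make the grouping and censoring setup concrete, here is a small, hypothetical Python sketch of how the two competing risks might be encoded from autopsy fields, continuing the df from the earlier sketch; the column names and the simplified handling of the pre-2014 fallback rules are assumptions, not the study's actual code.

import numpy as np
import pandas as pd

def low_ad_neuropathology(row):
    # Simplified version of the DeadLowADnp rule described above;
    # the non-AD primary-diagnosis criterion is omitted for brevity.
    if row["abc_score"] in ("none", "low"):
        return True
    if pd.isna(row["abc_score"]):  # pre-2014 records without an ABC score
        if row["braak"] in (0, 1) and (pd.isna(row["thal"]) or pd.isna(row["cerad"])):
            return True
        if row["thal"] in (0, 1) and row["cerad"] in (0, 1) and pd.isna(row["braak"]):
            return True
    return False

# Event coding: 0 = censored, 1 = DeadLowADnp, 2 = DeadHighADnp.
dead = df["dead"] == 1
df["event"] = 0
df.loc[dead, "event"] = np.where(
    df.loc[dead].apply(low_ad_neuropathology, axis=1), 1, 2)
# Time scale is age: age at death for the dead, age at last visit otherwise.
df["time"] = np.where(dead, df["age_at_death"], df["age_last_visit"])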
We used the Fine and Gray competing risk survival model 22,23 to estimate the subdistribution hazard ratio (SHR) and to calculate the cumulative incidence function (CIF). We also fitted a cause-specific hazards (CSH) competing risk survival model 22,24 to estimate the cause-specific hazard ratio (CSHR). With the J = 20 autopsy propensity score percentile categories as model strata and the K = 2 competing risk groups (DeadLowADnp and DeadHighADnp), both models have the following proportional-hazard form:

h_jk(t | X_i) = h_0jk(t) exp(β′_k X_i), (1)

where, for each subject i in stratum j and competing risk group k, t is age at death (or age at the last clinical visit for censored subjects), h_0jk(t) is the baseline hazard, X_i represents a vector of covariates including APOE4 (presence or absence) and the selected covariates (including main effects and interaction terms), and β′_k represents the corresponding vector of estimated regression coefficients (which are fixed across the j strata, as in typical stratified Cox PH model settings) for each competing risk group k. 54,55 We used both the Fine and Gray and the CSH competing risk survival models because they estimate the hazard rates differently but both are commonly reported in the literature. The hazard rate equation for the cause-specific hazard function from the CSH model is

h_k^C(t) = lim(δt→0) Pr(t ≤ T < t + δt, K = k | T ≥ t) / δt, (2)

where h_k^C(t) is the instantaneous (i.e., as δt → 0) risk of dying if a subject is in the competing risk group k, given that the subject is still alive by age t. Thus, in the CSH competing risk survival model, all subjects not in a particular competing risk group are counted as censored observations when estimating the CSHR in that particular competing risk group. The hazard rate equation for the subdistribution hazard from the Fine and Gray model is

h_k^S(t) = lim(δt→0) Pr(t ≤ T < t + δt, K = k | T ≥ t or (T < t and K ≠ k)) / δt, (3)

where h_k^S(t) is the instantaneous risk of dying if a subject is in the competing risk group k, given that the subject is still alive by age t or is in a different competing risk group and has already died. Thus, in the Fine and Gray competing risk survival model, the SHR estimates the effect of the covariates in the presence of the other competing risks, rather than including them as censored observations as in the CSHR estimation. The Fine and Gray model was used to calculate the associated cumulative incidence function (CIF) and the corresponding 95% confidence intervals shown in Fig. 1. The R packages "survival" (v3.4-0), "cmprsk" (v2.2-11), "crrSC" (v1.1.2), and "timereg" (v2.0.2) were used to generate the CSHR (unstratified and stratified), unstratified SHR, stratified SHR, and CIF plots (Fig. 1), respectively. The models were also assessed without including censored subjects, propensity-score stratification, or adjusting for covariates to check for robustness of the results. Within each individual ADRC, a univariate Fine and Gray competing risk survival model was used to assess mortality risk for APOE4 carriers, to check for stability and consistency across all ADRCs, as shown in Fig. 2. Also, the adjusted SHRs for APOE4 were further evaluated in multivariable competing risk survival models among each of the sub-groups or strata formed by several key variables: baseline age (split at the median), sex, baseline cognitive impairment, race, and ethnicity, to check for stability and consistency of the APOE4 HRs among these sub-groups (Fig. 3).
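The cause-specific side of this setup maps directly onto a stratified Cox fit in which deaths from the other cause are censored; a minimal Python sketch with the lifelines package follows. The paper itself used the R packages listed above, and the covariate names here are hypothetical, continuing the df from the earlier sketches.

from lifelines import CoxPHFitter

# Cause-specific model for DeadHighADnp (event code 2): deaths with low
# AD neuropathology (code 1) are treated as censored, per the CSH definition.
df["event_high"] = (df["event"] == 2).astype(int)

cols = ["time", "event_high", "apoe4", "female", "cog_impaired", "pstratum"]
cph = CoxPHFitter()
cph.fit(df[cols], duration_col="time", event_col="event_high",
        strata=["pstratum"])   # propensity percentiles as strata
cph.print_summary()            # exp(coef) for apoe4 estimates the CSHR

Repeating the fit with event code 1 gives the DeadLowADnp cause-specific model; the Fine and Gray subdistribution fit additionally keeps other-cause deaths in the risk set, which is what R's cmprsk::crr implements.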
There is a small portion of study subjects with information available on some clinician-diagnosed comorbidity (e.g., CVD, cancer) in the NACC data. We informally reported the relative abundance of the comorbidity among the subgroup of APOE4 carriers.

Figure 1. The cumulative incidence function (CIF) plot for the competing risk groups is shown for APOE4 carriers in panel (a), and non-carriers in panel (b). The cumulative incidence (as a proportion) is on the y-axis and the age at death (in years) is on the x-axis. APOE4 carriers who died with high amounts of Alzheimer's lesions (DeadHighADnp APOE4 carriers, red solid line) had the highest incidence of death, followed by non-carriers (DeadHighADnp non-carriers and DeadLowADnp non-carriers, red and blue dashed lines, respectively). The lowest incidence of death was among APOE4 carriers who died with low amounts of AD neuropathology (DeadLowADnp APOE4 carriers, blue solid line). The 95% confidence intervals (shaded areas) of the CIF, calculated by the Fine and Gray method, show no overlap in the cumulative incidences of death for the competing risks of death among APOE4 carriers and non-carriers.

Figure 2. Forest plots of univariate Fine and Gray competing risk survival analyses are shown for the 17 Alzheimer's Disease Research Centers (ADRC) that had at least 165 autopsied people (representing > 80% of the autopsy sample). The colored horizontal bars represent the estimated 95% confidence interval of the subdistribution hazard ratio (SHR) for APOE4 on mortality risk, with the box in the middle at the estimated SHR scaled to the size of the sub-sample. The dashed vertical line is at exp(β = 0) = 1, with decreasing mortality risk on the left and increasing mortality risk on the right. The SHR estimates for the competing risk group of death with low amounts of AD neuropathology (DeadLowADnp, shown in blue) are all to the left of the dashed vertical line, indicating decreased mortality risk was associated with APOE4 for this competing risk group in all 17 ADRC. The SHR estimates for the competing risk group of death with high amounts of AD neuropathology (DeadHighADnp, shown in red) are all to the right of the dashed vertical line, indicating increased mortality risk was associated with APOE4 for this competing risk group in all 17 ADRC. The SHR for APOE4 on mortality risk from the univariate competing risk model including all ADRC is shown in bold at the bottom of the figure.

Figure 3.
Forest plots of adjusted hazard ratios from multivariable Fine and Gray competing risk survival analyses within each stratum, as stratified by baseline median age for autopsied subjects in panel (a), sex in panel (b), baseline cognitive impairment in panel (c), race in panel (d), and ethnicity in panel (e). Each multivariable model included the main explanatory variable APOE4 and all covariates apart from the stratification variable. The models stratified by baseline median age, sex, and baseline cognitive impairment also included significant two-way interactions. The colored horizontal bars represent the estimated 95% confidence interval of the subdistribution hazard ratio (SHR) for APOE4 on mortality risk, with the box in the middle at the estimated SHR scaled to the size of the sub-sample in each stratum. The dashed vertical line is at exp(β = 0) = 1, with decreasing mortality risk on the left and increasing mortality risk on the right. The SHR estimates for the competing risk group of death with low amounts of AD neuropathology (DeadLowADnp, shown in blue) are all to the left of the dashed vertical line, indicating decreased mortality risk was associated with APOE4 for this competing risk group across all strata. The SHR estimates for the competing risk group of death with high amounts of AD neuropathology (DeadHighADnp, shown in red) are all to the right of the dashed vertical line, indicating increased mortality risk was associated with APOE4 for this competing risk group across all strata.

Table 1. Demographics and clinical characteristics for study groups from the NACC cohort. DeadLowADnp = the competing risk group of individuals who died with low amounts of Alzheimer's neuropathology. DeadHighADnp = the competing risk group of individuals who died with high amounts of Alzheimer's neuropathology. Censored = all individuals who were not indicated to be dead. SD = standard deviation. IQR = interquartile range. Cog. imp = cognitive impairment. AD = Alzheimer's disease. CDR = Clinical Dementia Rating (CDR®) Dementia Staging Instrument score. a Statistically significant difference (at p < .017 with Bonferroni correction) from the DeadLowADnp group. b Statistically significant difference (at p < .017 with Bonferroni correction) from the DeadHighADnp group.

Table 2. APOE allele distributions for study groups from the NACC cohort. DeadLowADnp = the competing risk group of individuals who died with low amounts of Alzheimer's neuropathology. DeadHighADnp = the competing risk group of individuals who died with high amounts of Alzheimer's neuropathology. Censored = all individuals who were not indicated to be dead. APOE4 carriers = individuals with at least one ε4 allele (i.e., ε2/ε4, ε3/ε4, and ε4/ε4). a Statistically significant difference (at p < .017 with Bonferroni correction) from the DeadLowADnp group. b Statistically significant difference (at p < .017 with Bonferroni correction) from the DeadHighADnp group.

Table 3. Multivariable competing risk models with main effects and significant 2-way interaction terms. DeadLowADnp = the competing risk group of individuals who died with low amounts of Alzheimer's neuropathology. DeadHighADnp = the competing risk group of individuals who died with high amounts of Alzheimer's neuropathology. Cog. imp = cognitive impairment. Fun. indep = functional independence. CDR = Clinical Dementia Rating (CDR®) Dementia Staging Instrument score. APOE4 carriers = individuals with at least one ε4 allele (i.e., ε2/ε4, ε3/ε4, and ε4/ε4).
2023-09-14T06:17:24.549Z
2023-09-12T00:00:00.000
{ "year": 2023, "sha1": "154e3c820b8822a47e47e0d1289c7b2730e64faf", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-023-41078-5.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fae41dcfa44d06f604efd292e10147230df4a923", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2933952
pes2o/s2orc
v3-fos-license
Magnetic Resonance Imaging of Blood Brain/Nerve Barrier Dysfunction and Leukocyte Infiltration: Closely Related or Discordant? Unlike other organs the nervous system is secluded from the rest of the organism by the blood brain barrier (BBB) or blood nerve barrier (BNB) preventing passive influx of fluids from the circulation. Similarly, leukocyte entry to the nervous system is tightly controlled. Breakdown of these barriers and cellular inflammation are hallmarks of inflammatory as well as ischemic neurological diseases and thus represent potential therapeutic targets. The spatiotemporal relationship between BBB/BNB disruption and leukocyte infiltration has been a matter of debate. We here review contrast-enhanced magnetic resonance imaging (MRI) as a non-invasive tool to depict barrier dysfunction and its relation to macrophage infiltration in the central and peripheral nervous system under pathological conditions. Novel experimental contrast agents like Gadofluorine M (Gf) allow more sensitive assessment of BBB dysfunction than conventional Gadolinium (Gd)-DTPA enhanced MRI. In addition, Gf facilitates visualization of functional and transient alterations of the BBB remote from lesions. Cellular contrast agents such as superparamagnetic iron oxide particles (SPIO) and perfluorocarbons enable assessment of leukocyte (mainly macrophage) infiltration by MR technology. Combined use of these MR contrast agents disclosed that leukocytes can enter the nervous system independent from a disturbance of the BBB, and vice versa, a dysfunctional BBB/BNB by itself is not sufficient to attract inflammatory cells from the circulation. We will illustrate these basic imaging findings in animal models of multiple sclerosis, cerebral ischemia, and traumatic nerve injury and review corresponding findings in patients.
INTRODUCTION An important feature of the brain that sets it apart from other organs is the presence of the blood brain barrier (BBB), a selective barrier to the central nervous system (CNS) that impedes the influx of most compounds from blood to brain. The concept of a BBB dates to the late nineteenth century, when Ehrlich (1885) observed that water-soluble dyes injected into the circulation leak into all organs of rodents except for the brain. Since then, the image of the BBB has changed from a static physical wall into a dynamic interface between the blood and the CNS that controls the supply of nutrients while simultaneously shielding it from potentially harmful substances. The central anatomical substrate of the BBB is the cerebral endothelium, which is characterized by the presence of tight cell-cell junctions (Kniesel and Wolburg, 2000), lack of fenestrations (Fenstermacher et al., 1988), and low pinocytotic activity (Sedlakova et al., 1999). These features restrict paracellular diffusion of water-soluble substances from blood to brain (Hawkins and Davis, 2005). In addition, the BBB comprises large numbers of pericytes that are embedded into the vascular basement membrane and a layer of astrocytic end-feet ensheathing the vessels (Ballabh et al., 2004). They, together with the endothelial cells, form the so-called neurovascular unit, which ensures homeostasis of the CNS microenvironment. Similarly, in the peripheral nervous system (PNS) nerve fibers are protected from the circulation by a blood nerve barrier (BNB). Impairment of the BBB or the BNB is a critical step in the development and progression of neurological conditions such as ischemic stroke (Latour et al., 2004) and multiple sclerosis (MS; Minagar and Alexander, 2003), but also traumatic brain (Schwaninger et al., 1999) or nerve injury (Seitz et al., 1989). Inflammation is another common hallmark of these disorders. It involves a complex cascade of events in which both the activation of resident glial cells and the infiltration of bone marrow-derived leukocytes play an important role and might impact disease progression. While in ischemic stroke and trauma inflammation occurs as a response to brain or nerve tissue damage, in autoimmune disorders such as MS inflammation initiates the disease. Hence, in both cases, preservation of BBB/BNB integrity and the prevention of inflammatory responses constitute promising therapeutic targets. However, application of barrier-stabilizing, anti-edematous, or anti-inflammatory treatments requires precise knowledge of the timing and location of BBB/BNB disturbances as well as cellular inflammation. Conventional magnetic resonance imaging (MRI), which is routinely used to assess CNS and PNS pathologies, gives only a gross estimate of tissue damage. Moreover, signal changes are nonspecific and do not allow discrimination of areas with edema formation, inflammation, or glial scarring. Recently, novel MR contrast agents have helped to gain deeper insights into the pluriformity of neuroinflammation. The present review focuses on the visualization of BBB and BNB disturbances by Gadolinium (Gd)-DTPA and Gadofluorine (Gf) enhanced MRI and the depiction of cellular inflammation by iron particle and perfluorocarbon (PFC)-based cellular MRI. By combining these imaging techniques it became apparent that the spatiotemporal relationship between breakdown of the BBB/BNB and leukocyte infiltration is more complex than previously anticipated.
To illustrate these issues we will use three paradigmatic disorders of the nervous system as models: ischemic stroke, MS with its animal model, experimental autoimmune encephalomyelitis (EAE), and finally traumatic peripheral nerve injury. MR IMAGING OF BBB AND BNB DISTURBANCES CONVENTIONAL GD-DTPA ENHANCED MRI Gadolinium-based MR contrast agents are licensed for a broad range of clinical applications. In the nervous system Gd-DTPA is routinely applied to identify areas with BBB/BNB dysfunction. Upon systemic application it extravasates out of the intravascular compartment at sites with a leaky BBB and accumulates locally, leading to a substantial T1-shortening effect within that region. Importantly, signal changes rapidly decline due to a reversal of the diffusion gradient upon clearance of Gd-DTPA from the circulation. Multiple sclerosis In MS, a chronic inflammatory demyelinating disease of the CNS, Gd-DTPA enhanced MRI is the current gold standard for the evaluation of acute disease activity. However, whereas Gd-DTPA enhancement reliably predicts the occurrence of relapses, it does not correlate with cumulative impairment and disability (Filippi et al., 2011). Moreover, as triple-dose Gd-DTPA significantly increased the harvest of enhancing lesions (Silver et al., 2001), it became obvious that conventional Gd-DTPA enhanced MRI only captures a small portion of lesions with a dysfunctional BBB. Ischemic stroke Ischemic stroke regularly leads to disruption of the BBB, which starts a few hours after the onset of ischemia and lasts for several weeks (Strbian et al., 2008; Brouns and De Deyn, 2009). Interestingly, early parenchymal Gd-DTPA enhancement appears to be a predictor of hemorrhagic transformation in experimental stroke (Knight et al., 1998). Nonetheless, changes of the infusion protocol in rodent studies revealed that subtle changes in BBB permeability are missed by standard-dose Gd-DTPA enhanced MRI. Among other reasons, this might be due to the greatly lowered blood flow in the affected region, which leads to suboptimal delivery of the contrast agent (Nagaraja et al., 2007). Thus, especially in the acute ischemic phase, BBB disturbances might escape detection by conventional Gd-DTPA enhanced MRI (Merten et al., 1999). Nerve trauma As in the brain, nerve injury in the PNS is frequently accompanied by breakdown of the BNB. Nerves undergoing Wallerian degeneration (WD) after transection or crush injury characteristically show a prolongation of the T2 relaxation time at the lesion site and distally in conventional MRI (Does and Snyder, 1996). However, despite extravasation of albumin and Evans blue in experimental studies as clear evidence for BNB opening, Gd-DTPA enhancement is not consistently detected in injured nerves (Cudlip et al., 2002; Lacour-Petit et al., 2003). Hence, it cannot be considered a reliable measure of BNB integrity or dysfunction. GADOFLUORINE ENHANCED MRI: A NOVEL TOOL FOR EXPERIMENTAL RESEARCH Important proof-of-principle studies with Gf M, a novel experimental MR contrast agent developed by the former Schering AG (Berlin, Germany), revealed that BBB/BNB disturbances are more widespread than previously anticipated and can involve brain areas remote from focal lesions. Gf represents a highly fluorinated Gd-compound originally developed for MR lymphography and imaging of atherosclerotic plaques (Meding et al., 2007).
We and others directly compared Gd-DTPA and Gf enhancing lesions in EAE, stroke, and nerve injury and found that Gf detects BBB and BNB disturbances with much higher sensitivity than Gd-DTPA. In these experimental settings Gf enhancement exactly matched the leakage of systemically applied Evans blue, a common marker for albumin extravasation across the BBB. In comparison to Gd-DTPA, Gf has unique molecular binding properties that might explain its different elimination kinetics and its higher sensitivity. Upon intravenous injection Gf is largely bound to serum albumin. It passively diffuses into the nervous system at sites of a leaky BBB or BNB, but, unlike Gd-DTPA, gets trapped by local interactions. Thus, Gf persists in lesions long after clearance from the circulation, which usually occurs within 24 h after application. Although the binding partners at the molecular level in the CNS and PNS have not yet been elucidated, in vitro studies suggested that Gf avidly binds to components of the extracellular matrix (Meding et al., 2007). Experimental autoimmune encephalomyelitis, the animal model of MS Experimental autoimmune encephalomyelitis is the most widely used animal model for MS. EAE lesions show lymphocyte and macrophage infiltration, variable degrees of demyelination, and leakage of the BBB. EAE lesions can be detected on T2-weighted (w) MRI similar to MS lesions, and partly show Gd-DTPA enhancement on T1-w MRI. When Gd-DTPA- and Gf-enhancing lesions were counted, the number of Gf positive EAE lesions by far outnumbered Gd-DTPA positive lesions in individual EAE animals (Bendszus et al., 2008). Moreover, many Gf enhancing lesions were not yet visible on parallel T2-w MRI, the standard sequence to quantify lesion load in MS patients (Figure 1). Thereby, even spinal cord (Figure 1) and optic nerve lesions (Figures 2A-D) that often fail to be visualized despite unambiguous clinical involvement could be depicted by Gf on a standard 1.5 T MR scanner. In a subgroup of animals spinal cord specimens were analyzed for Gf uptake and parenchymal inflammation. Foci of Gf deposition could be detected macroscopically due to the coupling with a carbocyanine dye. Importantly, all tissue specimens devoid of inflammation were Gf negative, while Gf enhancing lesions always exhibited microglial activation or macrophage infiltration. However, a considerable number of lesions with low-grade inflammation failed to show Gf uptake. By contrast, severe inflammation was always accompanied by Gf accumulation. A possible explanation for this observation, besides sensitivity issues, could be that foci with mild microglia activation/macrophage infiltration, but without BBB leakage, represent initial stages of lesion development, whereas severe inflammation secondarily evokes BBB opening. The enhanced sensitivity of Gf for EAE lesion detection was confirmed independently at 7 T high-field MRI (Wuerfel et al., 2010). Among a total of 61 contrast-enhancing lesions, 26 were exclusively visible after Gf administration (nine of them in the optic nerve). The remarkable improvement in lesion detection was accompanied by early Gf uptake in the circumventricular organs (CVO) of diseased animals. The CVO are particular areas of the brain with an incomplete endothelial BBB that participate in immune cell recruitment to the CNS (Schulz and Engelhardt, 2005).
Interestingly, Gf mean intensity ratios in the subfornicular organ and the area postrema significantly correlated with disease severity and onset of symptoms, suggesting that early Gf enhancement in the CVO might be a predictor of the clinical EAE course. Ischemic stroke Similarly, in cerebral ischemia Gf is much more sensitive than Gd-DTPA enhanced MRI. Brain photothrombosis (PT) is a simple model of focal cerebral ischemia that uses local intravascular photo peroxidation to generate highly circumscribed ischemic cortical lesions with almost immediate breakdown of the BBB. Accordingly, PT lesions exhibit an early and strong Gd-DTPA and Gf uptake on T1-w MRI. However, within the first 12 h after the onset of ischemia Gf enhancement, in addition, gradually extended to the entire ipsilateral hemisphere (Stoll et al., 2009b), which undergoes functional alterations but lacks neuronal damage. These Gf enhancing areas remote from the lesions did not show Gd-DTPA enhancement. Importantly, the timing of Evans blue extravasation on macroscopic brain slices and histological sections exactly matched the evolution of Gf enhancement on in vivo MRI (Stoll et al., 2009b; Figure 3). Several conclusions can be drawn from these observations: (i) Leakage of the BBB is not restricted to structural brain lesions, but also occurs in intact brain areas. Thus, functional states without structural damage can lead to intermittent and fully reversible opening of the BBB. Elucidation of the underlying mechanisms of these transient BBB disturbances might help to develop physiological means to open the BBB for drug delivery. (ii) Lesion-associated BBB disturbances are more abundant in autoimmune disorders of the CNS and ischemic stroke than previously appreciated. Thus, further MR contrast development is warranted since the capabilities of routinely used Gd-DTPA are limited. Fortunately, derivatives of Gf are now commercially available for research purposes in animal models. Nerve trauma Nerve degeneration after traumatic injury is accompanied by breakdown of the BNB (Seitz et al., 1989; Bouldin et al., 1991), but, surprisingly, injured nerves do not regularly show Gd-DTPA enhancement (reviewed in Stoll et al., 2009a). By contrast, the process of nerve degeneration and subsequent recovery can be assessed by Gf enhanced MRI (Bendszus et al., 2005b) due to the fact that the BNB is disturbed during the degeneration process, but sealed again upon arrival of regenerating nerve sprouts from the proximal stump. Forty-eight hours after crush injury of sciatic nerves in male Wistar rats, Gf accumulated within the entire nerve and its lower leg branches undergoing WD and persisted until successful regeneration (Figures 4A,B). Likewise, intense nerve enhancement was present after chronic constriction injury leading to less severe axonal damage. Within several weeks, Gf uptake gradually declined from proximal to distal parts of the injured nerve in parallel to regrowth of nerve fibers. Thus, Gf enhanced MRI holds promise to bridge the current diagnostic gap between nerve injury and completed regeneration. Moreover, since non-regenerating, permanently transected nerves exhibit persistent Gf enhancement (Bendszus et al., 2005b), Gf enhanced MRI might help to discern the need for surgical nerve release or grafting if spontaneous regeneration fails. Consequently, Liao et al.
(2012) successfully applied Gf enhanced MRI to monitor nerve regeneration after implantation of chitosan nerve conduits with mesenchymal stem cells in a rat model of neurotmesis. CELLULAR MRI SPIO/USPIO ENHANCED MRI Gadolinium-DTPA and Gf enhanced MRI are crude indicators of BBB or BNB disturbances, but they do not specifically visualize cellular immune responses. Inflammation, however, plays a major role in disorders of the nervous system. While under healthy conditions immune cell access to the nervous system is restricted, leukocytes readily cross the BBB or BNB in the context of pathophysiological processes and enter the neural tissue guided by cell adhesion molecules and chemokines (Man et al., 2007). There is an ongoing controversy whether immune cell invasion is linked to BBB disruption or occurs independently (see below). A prerequisite for solving this issue are imaging tools that allow monitoring of cell migration in vivo. From a clinical perspective, tracking of inflammatory cells could moreover help to identify active phases of CNS and PNS inflammation and to monitor the efficacy of therapeutic interventions. Thus, it is not surprising that cellular neuroimaging has become a field of intense research effort. The preferentially applied contrast agents for cellular MRI are iron oxide nanoparticles and, lately, PFC (see Perfluorocarbon Enhanced 19F MRI). Depending on their hydrodynamic particle size, iron nanoparticles are classified into superparamagnetic iron oxide particles (SPIO; 50-200 nm diameter) and their ultra-small and very small variants (USPIO; ~35 nm diameter; VSOP < 10 nm diameter). They all possess relatively large negative magnetic susceptibilities, featuring a more extensive shortening of T1 and T2 relaxation times than Gd-DTPA. Hence, their sensitivity is much higher than that of Gd compounds. The easiest and safest method of iron labeling is spontaneous uptake by blood-borne cells after systemic contrast agent application (Bulte, 2009). If the labeled cells are thereafter attracted to a target organ, they can be visualized by MRI. Importantly, the property to phagocytose and to migrate to sites of inflammation turns macrophages into ideal targets for cellular MRI. Apart from macrophage phagocytosis, SPIO/USPIO are cleared from the circulation by cells of the reticuloendothelial system (RES) in liver and spleen.

FIGURE 3 | Animals were sacrificed at given time points. Total brain preparations (A) and coronal brain sections (B) of the animals show that Evans Blue extravasation starts at the lesion site, but extends to the remote ipsilateral cortex and the corpus callosum within the first day. Coronal brain slices at the level of the lesion are always shown on top; the other six in the row represent subsequent 1 mm slices located frontally to the lesion (B). At day 3, breakdown of the BBB indicated by Evans blue extravasation is restricted to the PT lesion. (E) shows Gd-DTPA enhancement of the photothrombotic lesion within the first 24 h. Note that, in contrast, Gf enhancement occurs within the lesion, but also affects the ipsilateral cortex and the corpus callosum spared by Gd-DTPA (F). Moreover, T1-w MR images anterior to the PT lesion exhibit no Gd-DTPA (C), but strong Gf enhancement (D), in accordance with the Evans blue extravasation shown in (A,B). Reproduced from Stoll et al. (2009b).
The extent of cellular labeling in relation to clearance by the RES depends on iron oxide particle size and the net charge of the polymer coating (Hoehn et al., 2007). Safety concerns for both SPIO and USPIO compounds are minor. A recent meta-analysis of 37 phase I to III clinical trials with ferumoxtran-10, a USPIO agent, revealed back pain, pruritus, headache, and urticaria as the most frequent adverse events (Bernd et al., 2009). They are usually mild and of short duration (Anzai et al., 2003). Similarly, SPIO have been associated with low human toxicity. In numerous clinical trials they were shown to cause neither cardiovascular side effects nor clinically relevant laboratory changes (Reimer and Balzer, 2003).

FIGURE 4 | Visualization of breakdown of the blood nerve barrier (BNB) (A,B) and inflammation in experimental nerve crush (C-F). Coronal images depict the pelvis and both thighs of a rat lying in prone position with both legs positioned in a round surface coil (CISS sequence; slice thickness 1 mm). Note that Gf accumulates in the degenerating distal stump on T1-w MRI [arrow in (A)] and binds to peripheral nerve structures as revealed by fluorescence of carbocyanine-labeled Gf (B). Gf enhancement does not cease until successful regeneration (not shown). In contrast to breakdown of the BNB, macrophage infiltration is restricted to the early phase of Wallerian degeneration. Five days after sciatic nerve crush, focal signal loss is present at the lesion site and distally due to the invasion of SPIO-labeled macrophages from the blood (C). The corresponding paraffin section stained for iron confirms the infiltration of numerous iron-laden macrophages in the degenerating nerve segment (D). At day 8, macrophage infiltration is restricted again to the lesion site and ceases thereafter (E). Correspondingly, distal nerve segments no longer show iron-positive cells after application of SPIO, as shown for day 10 in (F). The BNB, however, is still leaky at that time (not shown), indicating that macrophage infiltration occurs within a narrow time window and that persistent BNB disturbance does not per se induce cellular infiltration. Adapted from Bendszus et al. (2005b) (A,B) and Bendszus and Stoll (2003).

EAE and MS Several proof-of-principle studies have demonstrated that USPIO and SPIO enhanced MRI allow visualization of macrophage infiltration in EAE (Dousset et al., 1999; Rausch et al., 2003; Floris et al., 2004). Brochet et al. (2006) showed that rats with USPIO positive lesions at the first attack suffer from more severe clinical impairment and extensive axonal damage at the second EAE bout. This implies that the extent of macrophage infiltration in early EAE may predict the ensuing disease course. Interestingly, in a follow-up study by the same group, severely affected rats with USPIO enhancement at the onset of disease showed an imbalanced, proinflammatory macrophage activation profile, both in CNS lesions and the peripheral blood (Mikita et al., 2011). Moreover, macrophage invasion spread from the upper spinal cord and brainstem to the cerebellum and subcortical regions during disease progression. When USPIO were injected in the recovery phase, no signal abnormalities were observed, indicating that macrophage infiltration in EAE is a temporally restricted event that clearly depends on the disease stage. In addition, iron-enhanced MRI was applied as a non-invasive tool for preclinical treatment surveillance. Deloire et al.
(2004) demonstrated that macrophage recruitment to EAE lesions detected by USPIO enhanced MRI is not fully blocked under therapy with natalizumab, a VLA4 antagonist that is in frequent use in patients with relapsing-remitting MS. Moreover, the efficacy of fingolimod, an oral drug that was recently approved for MS treatment, was successfully monitored by USPIO enhanced MRI in EAE (Rausch et al., 2004). In MS it is a common conception that the poor clinicoradiological association may be explained in part by diffuse inflammatory activity in the so-called normal appearing white matter (NAWM), which is concealed behind an intact or repaired BBB (Barkhof, 2002; Kutzelnigg et al., 2005). Indeed, microscopic inflammation within the NAWM not amenable to conventional MRI seems to contribute more strongly to disability than the T2 lesion load (Parry et al., 2002; Traboulsee et al., 2003). Thus, it was not surprising that the application of iron-enhanced MRI in EAE and the transfer of this technology to MS patients in small trials have provided insights into CNS inflammation that exceed conventional Gd-DTPA enhanced MRI. Vellinga et al. (2008) detected 188 USPIO positive cerebral lesions in 14 patients with active relapsing-remitting MS. Interestingly, the vast majority of USPIO positive lesions (144/188) showed no concomitant Gd-DTPA enhancement. Furthermore, none of the three different types of USPIO enhancement, (i) focal lesions, (ii) "return to isointensity" lesions, and (iii) ring-enhancing lesions, was particularly related to Gd-DTPA uptake (Vellinga et al., 2008). A more recent clinical trial by the same group suggested USPIO as a potential marker for diffuse inflammation in the NAWM of relapsing-remitting and primary progressive MS patients (Vellinga et al., 2009). T1 histogram and region-of-interest analysis in the NAWM showed diffuse T1-shortening after USPIO injection in 16 MS patients, indicative of subtle inflammatory activity, but not in gender- and age-matched healthy controls. Ischemic stroke It has long been established that cerebral ischemia induces a profound inflammatory response involving neutrophils, T-cells, and macrophages (Stoll et al., 1998). In the simple model of photothrombotic infarction (see above), entry of phagocytic hematogenous macrophages is delayed by several days and peaks around days 6-9 after lesion induction (Schroeter et al., 1997). Accordingly, SPIO-induced signal alterations did not occur until day 5 despite persistent breakdown of the BBB in PT lesions. However, at day 6 a hypointense rim appears, followed by signal loss in more central areas reflecting the influx of hematogenous macrophages, as confirmed by histological analysis (Kleinschnitz et al., 2005). Macrophage inflammation could also be visualized by SPIO/USPIO enhanced MRI after transient or permanent middle cerebral artery occlusion (MCAO), but results are more heterogeneous and conflicting. Some groups found contrast-induced signal loss in the lesion boundary 24 h after permanent MCAO (Rausch et al., 2001). In a transient MCAO model, Denes et al. (2007) did not find USPIO-related signal loss within the first 3 days after stroke induction. Moreover, neither focal signal intensity changes nor iron-positive macrophages were detected in the ischemic hemisphere of Wistar rats when USPIO were applied at a subacute stage 6 days after MCAO (Farr et al., 2011). By contrast, Kim et al.
(2008) found areas of signal loss in SPIO enhanced MRI 3 days after reperfusion that corresponded to the accumulation of iron-laden macrophages. A possible explanation for these contradictory results is that the labeling efficacy of monocytes by USPIO, in comparison to larger nanoparticles such as SPIO, is rather low (Oude Engberink et al., 2007). Moreover, the timing of the contrast agent application and the subsequent MRI appears to have a critical impact on the processes depicted by iron-enhanced MRI. Hence, especially in the early infarct phase, focalized signal alterations might rather be caused by trapping of the iron particles in the vasculature than by phagocyte infiltration (Desestret et al., 2009). Several open-label pilot trials with USPIO enhanced MRI were conducted in patients with ischemic stroke. Saleh et al. (2004) applied USPIO in a series of 10 patients 5-6 days after stroke onset. MRI scans were performed 24 and 48 h after USPIO infusion. As a principal finding, two distinct USPIO-related signal changes were observed: blood pool effects that appeared as signal loss on T2/T2*-w images and decreased from the first to the second scan, as well as parenchymal contrast enhancement on T1-w images that increased over time and was attributed to macrophage infiltration. When applied early (24-36 h after stroke) in a subsequent study, USPIO enhancement was spatially heterogeneous and only present in a minority of patients (Saleh et al., 2007). Nighoghossian et al. (2007) recruited patients with anterior circulation stroke and administered USPIO 6 days after admission. Three days later, 9 of 10 patients showed USPIO enhancement in the brain parenchyma. Interestingly, whilst most patients featured mild Gd-DTPA uptake, the patient with the most severe BBB breakdown did not exhibit USPIO enhancement. Nerve trauma Traumatic injury to peripheral nerves similarly induces a profound inflammatory response with macrophage infiltration (Stoll et al., 1989) that leads to rapid removal of myelin debris and a growth-promoting cellular and molecular milieu. Thus, WD is the prototype of a tissue-protective M2 type macrophage response (Ydens et al., 2012). SPIO enhanced MRI helped to define the kinetics of macrophage entry into the degenerating nerve segments, which starts around day 1 or 2 at the lesion site and extends to the entire distal stump within the first 2 weeks after injury (Figures 4C,D). Accumulation of SPIO-laden macrophages was visible as focal signal loss on T2-w MRI. When SPIO particles were applied 10 days after crush or later, degenerating nerves no longer exhibited signal loss (Figure 4E) despite the presence of numerous myelin-laden macrophages in the endoneurium. There are two possible explanations for this finding: (i) SPIO-based MRI depicts active migration of macrophages from the circulation into nerves, and lack of signal at later stages of WD indicates no further macrophage recruitment (Figure 4F). (ii) Alternatively, myelin-loaded macrophages no longer phagocytose iron particles within the injured nerves. We favor the first possibility since the dynamics of macrophage infiltration shown by SPIO enhanced MRI closely resembles the local expression pattern of macrophage-attracting chemokines, which ceases around day 10 (Toews et al., 1998; Tofaris et al., 2002). Moreover, our data suggest that simple leakage of SPIO particles through a defective BNB does not significantly change the intrinsic nerve MR signal, at least in the PNS.
These results were recently confirmed in a rodent model of radicular pain. Seven days after transient dorsal root compression, T2*-w MRI showed significant iron-induced signal alterations in the nerve roots of the injured, but not of the sham-operated group after systemic application of SPIO (Thorek et al., 2011). PERFLUOROCARBON ENHANCED 19F MRI Despite its excellent sensitivity, iron particle based cellular MRI has several shortcomings. These include false-positive results caused by hemorrhages, blood pool effects and, in some instances (depending on the half-life of the compound), passive diffusion via a defective BBB or BNB. Additionally, the large 1H background signal from mobile water renders unambiguous detection of labeled cells in vivo difficult, especially if their biodistribution is unclear. Recently, fluorine (19F) MRI has emerged as an alternative approach for cellular imaging (Stoll et al., 2012; Temme et al., 2012). 19F markers exhibit favorable MR imaging characteristics, such as a magnetic sensitivity close to the proton nucleus and high natural abundance. Moreover, because of the lack of endogenous 19F-containing molecules in the body, signals originating from injected 19F compounds are specific. Another virtue of 19F MRI is that it can be performed in a quantitative manner. Ideal fluorine tracers should provide a high payload of 19F nuclei. PFC compounds fulfill this requirement and thus were established as tracers for 19F MRI in recent years. In animal studies PFC agents were generally well tolerated (Ebner et al., 2010). Several PFC compounds have also been elaborately studied as artificial blood substitutes (Spahn et al., 2002). In one of these clinical trials cerebral hemorrhages occurred after cardiopulmonary bypass surgery, but thorough analysis of the safety data revealed that the study conduct, and not the PFC emulsion itself, was responsible for the adverse events (Riess, 2006). Moreover, extensive analysis disclosed no perturbation of hemostasis or blood viscosity after i.v. treatment with PFC that could be related to the observed bleeding tendency (Riess, 2006). Flogel et al. (2008) were the first to show that systemically injected 19F emulsions are efficiently phagocytosed by circulating monocytic cells. Moreover, in mice with myocardial infarction they were able to monitor a time-dependent PFC accumulation in the infarct area. It is well known that myocardial ischemia induces an inflammatory response dominated by cells of the monocyte/macrophage system. Consistently, rhodamine-labeled PFC allowed the identification of 19F-positive cells within the infarction as macrophages. Subsequently, in vivo 19F MRI was successfully applied to monitor immune cell responses in mice with LPS-induced pneumonia (Ebner et al., 2010), abscess formation (Hertlein et al., 2011), and in models of acute allograft rejection (Flogel et al., 2011; Hitchens et al., 2011). However, studies using 19F MRI in the nervous system are sparse. Probably due to the low accumulation of PFC compounds in inflammatory lesions of the CNS/PNS compared to other organ systems, sensitivity is a major concern (Stoll et al., 2012). Lately, we established 19F MRI to depict macrophage infiltration in a rat model of focal inflammatory peripheral nerve injury (Weise et al., 2011). Focal injection of lysolecithin chemically dissolves myelin sheaths and thereby elicits a strong inflammatory response within the demyelinated nerve segment (Griffin et al., 1990).
In vivo MRI 5 days after sciatic nerve damage revealed massive migration of 19F labeled cells to the injured nerve section. Intraneural application of saline to the contralateral nerve provoked a slight inflammatory reaction restricted to the perineurium, which could also be visualized by 19F MRI. However, quantification of the signal strength by ex vivo 19F spectroscopy indicated a significantly higher number of fluorine-labeled cells in the lysolecithin-damaged nerve than in the contralateral control. Attempts to visualize macrophage responses in EAE by 19F MRI in vivo have failed so far, most likely due to insufficient sensitivity (reviewed in Stoll et al., 2012). Thus, despite its superiority in specificity over SPIO/USPIO-based MRI, better coils and more sensitive MRI sequences are needed for further in vivo application of 19F MRI in the nervous system. THE RELATION BETWEEN BREAKDOWN OF THE BBB/BNB AND CELLULAR INFILTRATION There is an ongoing debate whether leakage of the BBB for soluble factors simultaneously provides unrestricted access of inflammatory cells to the CNS. Histological data from chronic neurodegenerative diseases and EAE already challenged this notion by showing that leukocytes infiltrated the perivascular space without concomitant BBB leakage (Perry et al., 1997; Engelhardt and Wolburg, 2004). Using serial section electron microscopy, Wolburg et al. (2005) revealed that mononuclear cells traverse cerebral microvessels by a transcellular pathway, leaving the endothelial tight junctions intact. Moreover, diffusion of hydrophilic molecules and leukocyte recruitment into the CNS take place at distinct sites of the cerebral vascular tree. While the diffusion barrier for solutes is formed by specialized endothelial cells at the level of capillaries, leukocyte extravasation usually occurs in the post-capillary segments (Bechmann et al., 2007; Engelhardt and Sorokin, 2009). The advent of novel MR imaging techniques contributed to the partial solution of this controversy. In the first paragraphs we described contrast agents allowing assessment of BBB/BNB dysfunction (Gd-DTPA, Gf M) as well as SPIO/USPIO enhanced MRI to monitor macrophage infiltration. Combined use of these MR contrast agents in individual animals and selected patients disclosed that breakdown of the BBB/BNB and macrophage infiltration can occur independently: in other words, inflammatory cells can cross the BBB without acutely disturbing the BBB, and, vice versa, long-lasting disruption of the BBB/BNB does not necessarily entertain permanent cellular infiltration. EAE AND MS First MR evidence that migration of inflammatory cells to the nervous system is not compulsorily associated with BBB/BNB opening arose from experiments in EAE animals. Rausch et al. (2003) described EAE lesions exhibiting either USPIO or Gd-DTPA enhancement. This mismatch was most prominent during the first relapse, when large numbers of USPIO enhancing lesions did not show any Gd-DTPA uptake. Floris et al. (2004) claimed that impairment of the BBB (as shown by Gd-DTPA enhancement) preceded monocyte infiltration (as assessed by USPIO enhancement) in EAE, thereby implicating a firm sequence of events in lesion formation. This view was later on challenged by others (Bendszus et al., 2005a; Berger et al., 2006). These studies described USPIO/SPIO positive lesions in areas with an intact BBB, suggestive of a different pattern of lesion evolution.
Interestingly, in the latter study USPIO enhancing lesions disappeared after the acute inflammatory attack, while areas with BBB damage recovered more slowly (Berger et al., 2006). As discussed above, Gd-DTPA enhancement underestimates the number of lesions with breakdown of the BBB. We therefore took advantage of the more sensitive MR contrast agent Gf and compared the number and location of Gf and SPIO enhancing lesions in individual rats with EAE (Ladewig et al., 2009). Numerous Gf positive lesions appeared in the spinal cord, brain stem, and optic nerves, and roughly a similar number of lesions showed signal loss after SPIO application, indicative of macrophage infiltration. However, the spatial distribution of the lesions was completely different, with almost no overlap (Figures 2E,F). These findings provide further evidence that macrophages can enter the CNS leaving the BBB intact (SPIO positive, Gf negative lesions) and that breakdown of the BBB (Gf positive lesions) is not necessarily associated with cellular inflammation (SPIO positive lesions). At present it is unclear how these processes interact, e.g., whether cellular infiltration evokes BBB leakage with a delay or, vice versa, transient disturbances of the BBB predetermine sites of later cellular invasion in EAE. Though Gf has not yet been developed for human application, clinical studies in MS patients using Gd-DTPA strongly support the results obtained in EAE animals. Dousset et al. (2006) performed a Gd-DTPA/USPIO enhanced MRI study in a cohort of 10 relapsing-remitting MS patients and found that 31 out of 57 lesions were collectively enhanced with both contrast agents. Importantly, 24 Gd-DTPA enhancing lesions were USPIO negative, while two USPIO lesions did not show Gd-DTPA uptake. Vellinga et al. (2008) compared USPIO enhanced MRI to the longitudinal pattern of Gd-DTPA enhancement in 19 relapsing-remitting MS patients. They found that 77% of USPIO positive lesions were located in areas with an intact BBB. Moreover, USPIO enhancing lesions were more abundant than Gd-DTPA enhancing lesions and remained visible for longer time periods than Gd-DTPA. In 4% of USPIO positive lesions, USPIO enhancement preceded Gd-DTPA uptake by several weeks. Just recently, Tourdias et al. (2012) longitudinally assessed disease activity with combined Gd-DTPA and USPIO enhanced MRI in 10 relapsing patients and 14 patients with progressive MS over a 6 month period. In this study, the use of both contrast agents considerably increased the diagnostic yield, enabling the detection of 51% more lesions than with Gd-DTPA alone. USPIO enhancement was also observed in patients with a progressive disease course lacking Gd-DTPA enhancement (Tourdias et al., 2012). Thus, these studies unanimously support the assumption that SPIO/USPIO and Gd-DTPA enhanced MRI cover different aspects of MS pathophysiology and activity. ISCHEMIC STROKE Several MRI studies conducted in experimental stroke also indicate that BBB opening and cellular inflammation are not necessarily linked. In the PT model it was shown that SPIO-laden macrophages enter the lesion at a subacute stage (around day 6), while BBB breakdown occurs immediately after lesion evolution and persists for weeks (Kleinschnitz et al., 2003, 2005). Beyond that, macrophages were still abundant in the infarction at later time points but did not show any iron uptake (Kleinschnitz et al., 2003). In MCAO models the available data are more controversial.
While some studies failed to show iron-induced signal loss in the infarction despite prolonged opening of the BBB and infiltration of neutrophils (Denes et al., 2007), others suggest that USPIO penetrate into the CNS as free particles over a disrupted barrier (Desestret et al., 2009). Thus, there is an ongoing controversy to what extent USPIO enhancement corresponds to leukocyte infiltration or passive leakage through a defective BBB. Most importantly, diverse signatures of conventional Gd-DTPA and iron-enhanced MRI persisted when the application of USPIO was transferred to patients with ischemic stroke. Saleh et al. (2004) found USPIO-related signal changes in the subacute infarct stage of ten stroke patients, while Gd-DTPA enhancement occurred in only six of them. In another study, Nighoghossian et al. (2007) confirmed spatial discrepancies between USPIO-related signal alterations and BBB breakdown assessed by Gd-DTPA in patients with anterior circulation stroke. NERVE TRAUMA The conception that breakdown of barriers in the nervous system is not necessarily congruent with cellular infiltration is further reinforced by studies in nerve trauma. Axotomy or crush of a peripheral nerve leads to degeneration of the distal nerve segment (WD), accompanied by a rapid breakdown of the BNB as shown by extravasation of albumin in histological studies (Seitz et al., 1989; Bouldin et al., 1991). Accordingly, degenerating nerves show continuous Gf enhancement throughout WD, which does not terminate until successful regeneration is accomplished (Bendszus et al., 2005b). Upon SPIO enhanced MRI a different picture emerges. SPIO-induced signal loss indicative of macrophage infiltration starts at the lesion site within 2 days, extends distally within the first week after injury, and suddenly ceases after 10 days (Figure 4). Thus, SPIO application later than 10 days after nerve injury is not further accompanied by signal alterations on T2-w MRI, despite the fact that the BNB is still defective. Functionally, these findings indicate that, despite persistent breakdown of the BNB for up to 4 weeks after injury (when nerve regeneration is completed), there is no continuous macrophage invasion at the late phase of WD and regeneration. The pattern of macrophage invasion revealed by SPIO enhanced MRI very well corresponded to the local expression of chemokines in the degenerating nerve, supporting the notion that the local molecular environment, and not the simple breakdown of the BNB, is responsible for the attraction of inflammatory cells. CONCLUSION The ability to visualize nervous tissue by MRI has revolutionized clinical neurology during the last three decades, but the signal alterations seen in diseases are mainly non-specific. The use of Gd-DTPA as an MR contrast agent allows detection of tumors and areas with a disturbed BBB; however, sensitivity is limited. In an attempt to depict molecular and cellular processes more precisely, novel contrast agents have been developed. In experimental studies, Gf allows more sensitive assessment of disturbances of the BBB and BNB than Gd-DTPA. Thereby, the diagnostic yield is highly increased in animal models of MS, functional and transient changes of BBB properties hitherto undisclosed can be assessed and, finally, the process of nerve regeneration, which is linked to sealing of the BNB, can be accurately followed.
Cellular contrast agents such as iron-containing SPIO/USPIO or PFC allow tracking of inflammatory responses by MRI, mainly macrophage infiltration into the NS. By combining both imaging technologies, it became increasingly clear that breakdown of the BBB/BNB and leukocyte infiltration are distinct processes showing much less overlap than previously anticipated. Further clinical development of these MR contrast agents is warranted, since they hold promise to provide unique insights into the pathophysiology and dynamics of inflammatory disorders of the NS and could improve treatment surveillance.
2016-06-17T14:13:58.365Z
2012-11-26T00:00:00.000
{ "year": 2012, "sha1": "bd4c2eac3ae25e5ed8f22e5ba19a9b2652d373b3", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2012.00178/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bd4c2eac3ae25e5ed8f22e5ba19a9b2652d373b3", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237334940
pes2o/s2orc
v3-fos-license
The Research of BDPCA Identifying Emotion by EEG Electroencephalogram (EEG) data contain a wealth of information about the brain's and body's pathological and physiological state. It is not easy to identify the information that the EEG contains. Unsupervised learning methods do not need labels assigned by humans; without subjective judgment, they can greatly improve training accuracy. In this current paper, an improved Density Peak Clustering Algorithm (DPCA) was adopted to train EEG data. To solve the problem that the number of cluster centers is difficult to determine, the Bayesian Information Criterion (BIC) was introduced. The feasibility of the algorithm for EEG processing was verified by a laboratory experiment dividing fatigue state levels, and the SJTU Emotion EEG Dataset (SEED) was used to identify different emotions. Compared with other clustering algorithms, the accuracy of BDPCA was raised by about 5% overall, and the behavior of BDPCA was steadier across different emotion types. Introduction Electroencephalogram (EEG) is a weak electrical signal generated by the spontaneous, rhythmic ion exchange between neurons in the brain. It is representative of the comprehensive physiological electrical activity of the brain [1]. The EEG contains a wealth of information about the brain's and even the body's pathological and physiological state [2]. Bioelectric signals have good real-time properties and are difficult to disguise [3], and a Brain Computer Interface (BCI) [4] can translate EEG signals into user intentions and implement them as outputs. These properties provide the conditions for EEG to be widely used. With the rapid development of artificial intelligence technology, methods based on artificial intelligence have attracted more attention in data processing. Jianfeng Hu [5] applied K-Nearest Neighbor (K-NN) and Support Vector Machine (SVM) to EEG data set analysis. Saeedi Maryam [6] applied SVM and Multilayer Perceptron (MLP) to EEG data set processing and analysis. T Waili [7] applied an Artificial Neural Network (ANN) to EEG data sets to verify the personal identity of different people. Bouallegue Ghaith [8] used Artificial Neural Networks (ANN) to analyze EEG data sets and achieved identification of Alzheimer's disease (AD). Shen Lei [9] used convolutional neural networks (CNN) to identify epilepsy from EEG data sets. However, knowledge of the states that can be represented by EEG signals is still not sufficient. How to fully excavate useful information from EEG signals is a major problem to consider. Classification algorithms usually divide states that are already known [10]. Compared with supervised learning methods, unsupervised learning methods such as clustering algorithms do not need labels assigned by humans and can automatically divide different EEG signals into different categories. DPCA divides clusters based on the local density of data points and the distances between them, but DPCA cannot determine the number k of cluster centers. Therefore, the Bayesian Information Criterion (BIC) was introduced to improve DPCA. In the current paper, the feasibility was verified by an experiment dividing fatigue states. An emotion identification experiment on SEED was used to analyze the performance of BDPCA and other clustering algorithms. Emotion identifying mechanism The EEG consists of abundant rhythmic waves [10]. When a person's physical state changes, the rhythmic waves change accordingly [11].
The relationship between the EEG generated by the brain and the physical state shows that it is possible to divide different physical states by EEG. The steps of analyzing different moods based on the EEG are as follows: collecting EEG, determining the clustering analysis algorithm, EEG preprocessing, extracting EEG features, and dividing different moods. The methods of EEG feature extraction usually work in the frequency domain or the time domain. Time domain analysis methods are simple and intuitive; in general they extract parameters such as amplitude, mean value, variance, regularity, and synchronization. Simple time domain feature extraction methods are not suitable for complex EEG analysis. Frequency domain analysis methods map the signal into the frequency domain for analysis. However, EEG is random, and most of its useful information is transient. According to the "uncertainty principle", signal processing cannot have good time resolution and frequency resolution at the same time, so it is necessary to combine the time domain and the frequency domain. The wavelet packet transform not only provides frequency resolution but also retains time resolution, and it can divide the low frequency signal more finely than the wavelet transform. Therefore, the wavelet packet transform method was chosen to extract the eigenvalues from the EEG data in this current paper. Then, the improved BDPCA was used to train the extracted feature vectors, and the division of different moods was carried out. The complete fatigue state division mechanism flow chart is shown in Figure 1. EEG data preprocessing The raw EEG data collected in the experiment contain a lot of interference, such as electromyography (EMG), electrocardiography (ECG), and power frequency noise [12]. Therefore, it is necessary to preprocess the raw EEG data to eliminate the above noise before extracting features from the EEG data [13]. Firstly, the raw EEG data are resampled at 128 Hz, which not only reduces the memory that the EEG data occupy, but also makes the extraction of energy feature parameters by wavelet packet transform easier. Then, a notch filter is used to remove the 50 Hz power-line interference, and a 0.45-35 Hz band-pass filter is used to remove low frequency noise and high frequency interference. Finally, independent component analysis (ICA) from the EEGLAB toolbox of MATLAB is adopted to remove obvious artifact components. After the above steps, the EEG data are used for subsequent feature extraction. Extracting features from EEG The purpose of feature extraction is to extract information from the preprocessed EEG data for subsequent analysis. EEG data mainly contain five wave components with different frequency band ranges: the delta band (0.5-4 Hz), theta band (4-7.5 Hz), alpha band (8-13 Hz), beta band (14-30 Hz), and gamma band (30-45 Hz) [14]. The wavelet packet transform was adopted to decompose the EEG data in this current paper, and energy parameters in different frequency bands were extracted as in Table 1. Table 1. Wavelet packet nodes corresponding to the band ranges. Here P_j^i represents the decomposition coefficients of node i in layer j of the wavelet packet decomposition, and the energy of node i in layer j is obtained by summing its squared coefficients, E_j^i = Σ_k |P_j^i(k)|². The energies of the delta band, theta band, alpha band, and beta band can be calculated by this formula. According to Wei-Long Zheng [15], the accuracy of mood identification was higher for the beta band (14-30 Hz) and gamma band (30-45 Hz). In this current paper, the beta band and gamma band energy parameters were selected as the feature vector in the experiment of dividing different moods.
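As an illustration of the band-energy feature extraction just described, here is a minimal Python sketch using PyWavelets. The wavelet family ('db4'), the decomposition depth of 4 levels, the node-to-band mapping, and the relative-energy normalization are assumptions made for illustration; the paper's exact Table 1 mapping is not reproduced.

```python
# Minimal sketch, assuming 'db4', level-4 decomposition, and an approximate
# node-to-band mapping; not the paper's exact configuration.
import numpy as np
import pywt

FS = 128  # resampling rate used for the EEG (Hz); Nyquist = 64 Hz

def band_energies(x, level=4, wavelet="db4"):
    """Return per-band wavelet-packet energies for one EEG segment `x`."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    # At level 4 each of the 16 frequency-ordered nodes spans 64/16 = 4 Hz.
    nodes = wp.get_level(level, order="freq")
    node_energy = np.array([np.sum(n.data ** 2) for n in nodes])
    # Approximate node-to-band mapping (4 Hz per node), an assumption here:
    bands = {
        "delta": node_energy[0:1].sum(),   # ~0-4 Hz
        "theta": node_energy[1:2].sum(),   # ~4-8 Hz
        "alpha": node_energy[2:3].sum(),   # ~8-12 Hz
        "beta":  node_energy[3:8].sum(),   # ~12-32 Hz
        "gamma": node_energy[8:12].sum(),  # ~32-48 Hz
    }
    total = node_energy.sum()
    # Relative band energies are a common normalization choice (assumption).
    return {k: v / total for k, v in bands.items()}

segment = np.random.randn(10 * FS)  # placeholder for a 10 s EEG segment
features = band_energies(segment)   # e.g. keep beta and gamma as features
```

In this sketch the beta and gamma values of the returned dictionary would form the two-component feature vector used for clustering.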
In this paper, the beta-band and gamma-band energy parameters were selected as the feature vector for the experiments on dividing different moods.

The improvement of BDPCA

The DPCA procedure is as follows. (1) Compute the distances between all pairs of data points (Equation (1)). (2) Obtain the local density ρ_i of each point according to Equations (2) and (3). (3) Calculate each point's distance δ_i to the nearest point of higher density according to Equation (4). (4) Plot the decision graph of local density against δ_i and determine the type of each data point from the graph. A cluster center generally has a large local density, and its distance to the other cluster centers is larger than a normal point's distance to its center, while noise points lie far from any cluster and are isolated. Thus normal points lie near the horizontal axis, noise points lie near the vertical axis, and cluster centers lie far from both axes. (5) Cluster assignment: each non-center point is assigned to the same cluster as the nearest point whose local density is greater than its own. After all points have been judged and assigned, the algorithm finishes.

DPCA proposes selecting the cluster centers from the decision graph. In practice, however, the distribution of the data set is unknown, the number of cluster centers is unknown, and the points in the decision graph are often not clearly separated, so a normal point that merely resembles a cluster center is easily chosen as one. Since it is difficult to determine the type of such suspect points accurately from the decision graph generated by DPCA, BIC is used to assist in determining the cluster centers. The core idea of BIC is to maximize the probability of the selected model given the current data set. In this study the data set consists of the EEG samples, and R is the number of samples. Under the initial conditions every candidate model has the same prior probability of being selected, so finding the model with the highest posterior probability is equivalent to finding the model with the highest marginal probability; P_i denotes the number of parameters of model i. The BIC value represents the loss between the model and the real data: the smaller its absolute value, the better the selected model.
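Equations (1)-(4) and the BIC formula are not reproduced in the text above, so the following is a minimal sketch under standard assumptions: the Rodriguez-Laio Gaussian-kernel definitions for the local density ρ_i and the distance δ_i, and a spherical-Gaussian BIC-style score for comparing candidate numbers of clusters (sign conventions differ across texts; the paper compares absolute values). Names and exact forms are ours, not the paper's.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dpca_quantities(X, dc):
    """Local density rho_i and distance delta_i to the nearest denser point."""
    d = cdist(X, X)                               # pairwise distances
    rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1  # Gaussian-kernel density
    delta = np.empty(len(X))
    for i in range(len(X)):
        denser = np.where(rho > rho[i])[0]
        # The densest point(s) take the maximum distance by convention,
        # exactly the tie ambiguity the similarity refinement below removes.
        delta[i] = d[i].max() if denser.size == 0 else d[i, denser].min()
    return rho, delta

def bic_score(X, labels, centers):
    """Spherical-Gaussian BIC-style score; higher is better in this convention."""
    R, dim = X.shape
    k = len(centers)
    sigma2 = max(np.sum((X - centers[labels]) ** 2) / (R * dim), 1e-12)
    loglik = -0.5 * R * dim * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * (k * dim + k) * np.log(R)  # penalize parameter count
```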
The BDPCA described in Section 4.1 has the following disadvantage: ρ_i and δ_i take discrete values, so different data points may end up with equal ρ_i or δ_i. This can leave too many suspected cluster centers among the points of highest local density: if several points share the maximum local density, their δ values are difficult to compute, because it is unclear for which of them δ should be set to the maximum distance, and applying the maximum-distance rule to all of them produces avoidable mistakes and obstructs the determination of cluster centers. Although the optimal number of clusters can still be determined by computing BIC values, doing so greatly increases computation time and makes the subsequent assignment of data points difficult. To improve the situation, this paper uses similarity to redefine the values of ρ_i and δ_i.

The optimized local density and distance are calculated according to Equations (8) and (9). Replacing the original ρ_i and δ_i by quantities computed from the similarity between points greatly reduces the cases in which different data points share the same ρ_i or δ_i. However, if the distance between two data points is computed from similarity alone, misjudgement is easy when the edge points of one cluster lie close to the edge points of another. To avoid this, the common-nearest-neighbor parameter (CNN) is introduced to optimize the similarity calculation [18]; the optimized formula is given as Equation (10). Unlike the traditional similarity calculation, the type of a data point after this optimization is determined jointly by the distance and the local density between points: each point is assigned to the cluster of the nearest point in the data set whose local density is greater than its own. In this way, misjudgements caused by small distances between the edge points of two different clusters are avoided.
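Equations (8)-(10) are likewise not shown in the text, so the following is an illustrative shared-neighbor similarity in the spirit described: a raw distance-based similarity scaled by the overlap of the two points' k-nearest-neighbor sets, which suppresses spuriously high similarity between edge points of different clusters. The exact form is our assumption, not the paper's formula.

```python
import numpy as np
from scipy.spatial.distance import cdist

def cnn_similarity(X, k=10):
    d = cdist(X, X)
    sim = 1.0 / (1.0 + d)                     # raw similarity from distance
    knn = np.argsort(d, axis=1)[:, 1:k + 1]   # k nearest neighbors of each point
    out = np.zeros_like(sim)
    for i in range(len(X)):
        for j in range(len(X)):
            shared = len(set(knn[i]) & set(knn[j]))
            # Edge points of different clusters share few neighbors, so their
            # similarity is damped even when the raw distance is small.
            out[i, j] = sim[i, j] * shared / k
    return out
```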
Applying BDPCA to EEG

BDPCA has been used in other engineering data processing research, such as the labelling of aero-engine fault data, where simulation results showed that the algorithm achieves excellent accuracy [19]. EEG data have random, non-stationary characteristics like other engineering signals, so in this paper BDPCA is applied to the analysis of EEG data.

Feasibility analysis of BDPCA on EEG

To verify feasibility, BDPCA was used to divide fatigue levels from EEG collected with the EMOTIV headset. The experiment proceeded roughly as follows. It was carried out in the afternoon (13:30-16:00), when drowsiness is generated more easily. Before the formal recording, each experimenter took a 10-minute practice session, operating the driving simulator and avoiding obstacles, to become familiar with the system. After the formal experiment began, the experimenter was required to drive in a straight line at 90 km/h; while driving, the experimenter controlled the steering wheel to avoid obstacles that appeared randomly on the screen in front of him. The experiment lasted 150 minutes. The experimenter could stop at any time if feeling unwell, in which case the experiment ended; if no special situation arose, the driver continued driving for the full 150 minutes without rest. During driving, the portable EMOTIV device collected the driver's EEG at 256 Hz. Two staff members were also present: one recorded the driver's blink frequency every 2 minutes, and the other recorded the times at which obstacles appeared and the results of the obstacle avoidance. These records served as the criteria for classifying the fatigue levels. First, the raw EEG was divided into 20 s segments and 10 s of each segment was intercepted for feature extraction; then the wavelet packet transform was used to extract the energy parameters, which, together with the sample entropy of the signal, formed the input vector; the input vectors were then used to train the BDPCA model. The experiment yielded a method of dividing fatigue levels by the ratio of the alpha- and theta-band energies to the delta-band energy: a feature value in 0-0.9 represents the awake state; 0.9-1.4, mild fatigue; 1.4-1.75, moderate fatigue; 1.75-2.3, severe fatigue; and 2.3-2.8, extreme fatigue. These divisions are consistent with the experimenters' behavior. Analysis of the BDPCA model trained on the EEG collected by EMOTIV while the experimenters operated the virtual driving equipment shows that the results agree with the behavioral data and with the experimenters' subjective feelings, which verifies the feasibility of analyzing EEG data with the BDPCA model. Because fatigue states must be labelled from subjective feelings, it is hard to compare BDPCA's accuracy with that of other clustering algorithms on this task; to solve this problem, we used SEED to observe the performance of the different clustering algorithms.

BDPCA for identifying different moods

In this paper SEED was selected to compare the performance of different clustering algorithms. The data set was collected from 15 subjects (7 males and 8 females). Fifteen movie clips that elicit positive, neutral and negative moods were chosen as the stimuli used in the experiments; the 15 Chinese film clips were cut from 6 movies, whose names and induced emotions are shown in Table 2. In each session there was a 5 s hint before each clip, 45 s of self-assessment and 15 s of rest after each clip, and the order of presentation was arranged so that two clips targeting the same emotion were never shown consecutively. The EEG data were divided into trials, and the trials recorded while the movies were at their climax were selected. These trials were preprocessed as described in Section 3.1 and feature vectors were extracted as in Section 3.2; finally, the beta- and gamma-band energy parameters of channels Fp1 and Fp2 were extracted. The feature vectors were used to train the BDPCA model, and the resulting decision graph is shown in Figure 3. According to the decision graph the number of suspected cluster centers was 2-6, and the BIC values were calculated for each candidate number in this range; the results are shown in Table 3. The BIC values determined that the optimal number of clusters was 3, consistent with the data set containing three mood types: sad, happy and neutral. The results of the BDPCA model on SEED are shown in Table 4, and, to compare the clustering algorithms more intuitively, the accuracies are plotted as a histogram in Figure 4. From Table 5 and Figure 4 it is easy to see that in positive-emotion identification BDPCA's accuracy is more than 5% higher overall than that of the other algorithms; in neutral-emotion identification BDPCA is close to the highest accuracy among the other algorithms; and in negative-emotion identification BDPCA is about 10% higher than the others. We conclude that BDPCA is more accurate, and steadier across different emotions, than the other clustering algorithms.

Conclusion

To address the limited ability to interpret EEG, this paper proposed an improved clustering algorithm based on DPCA for EEG analysis. The laboratory experiment on dividing fatigue levels was successful: the BDPCA model distinguished the different fatigue levels, and the result verified that analyzing EEG data with BDPCA is feasible.
Because transitions between fatigue levels are not sharp and are easily influenced by subjective feeling, the fatigue experiment alone could hardly verify that BDPCA performs better than other clustering algorithms. We therefore adopted SEED to identify three emotions, and found that BDPCA's accuracy improved by about 5% overall compared with the other clustering algorithms. Across the different emotions BDPCA's performance was steady, whereas the accuracies of the other algorithms showed relatively large differences; in this sense the BDPCA model is more robust than the other clustering algorithms. The experiments verify that BDPCA needs no subjective labelling of the EEG data in advance and can automatically divide the data with high accuracy and robustness. BDPCA therefore has good application prospects for using EEG data to detect diseases and other bodily abnormalities.
Faster Algorithm of String Comparison

In many applications it is necessary to determine string similarity. The edit distance [WF74] approach is a classic method of determining field similarity: a well-known dynamic programming algorithm [GUS97] calculates edit distance with time complexity O(nm) in the worst, average and even best cases. Instead of continuing to improve the edit distance approach, [LL+99] adopted a brand-new, token-based approach. Its new concept of token base, which retains the original semantic information, its good time complexity of O(nm) (for the worst, average and best cases) and its good experimental performance make it a milestone paper in this area. Further study indicates that there is still room for improvement in its field similarity algorithm. This paper introduces a package of substring-based new algorithms to determine field similarity. Combined, our new algorithms not only achieve higher accuracy but also attain time complexity O(knm) with k < 0.75 in the worst case, O(β·n) with β < 6 in the average case, and O(1) in the best case. Throughout the paper we use comparative examples to show the higher accuracy of our algorithms relative to the one proposed in [LL+99]. Theoretical analysis, concrete examples and experimental results show that our algorithms can significantly improve both the accuracy and the time complexity of the calculation of field similarity.

[GUS97] D. Gusfield. Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology. [LL+99] Mong Li Lee et al. Cleansing data for mining and warehousing. In Proceedings of the 10th International Conference on Database and Expert Systems Applications (DEXA99), pages 751-760, August 1999. [WF74] R. Wagner and M. Fischer. The String-to-String Correction Problem. JACM 21, pages 168-173, 1974.

Introduction

In many applications it is necessary to determine string similarity. Text comparison [SV94, MSU97, CPSV00, ABR00, MS00, KR87, KMR72, GAL85, ME96] now appears in many disciplines, such as compression, pattern recognition, computational biology, Web searching and data cleaning [HS95, BD93]. The edit distance [WF74] approach is a classic method of determining field similarity, and a well-known dynamic programming algorithm [GUS97] calculates it with time complexity O(nm) for the worst, average and even best cases. Since then, progress has been made in time complexity, e.g. O(n) [Kar93], Ω(nm) [SV96], O(kn) [LV86, MYE86], O(n·poly(k)/m + n) [CH98], O(nm) [ABR87] and O(nk) [APL00]; however, all of these improvements are obtained by relaxing the problem in a number of ways, so when subsequent comparisons with respect to time complexity are made in this paper, we still refer to the O(nm) algorithm of [GUS97]. As noted above, [LL+99] instead adopted the token-based approach, a milestone in this area, and this paper introduces a package of substring-based algorithms that improve on it in both accuracy and time complexity.
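For reference, the O(nm) dynamic programming algorithm for edit distance mentioned above can be sketched as follows (a minimal Python illustration; the function name is ours, and the unit-cost recurrence is the standard one of [WF74, GUS97]).

```python
def edit_distance(s: str, t: str) -> int:
    """Classic O(nm) DP: D[i][j] = cost to turn s[:i] into t[:j] with
    unit-cost insertions, deletions and substitutions."""
    n, m = len(s), len(t)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i                        # delete all of s[:i]
    for j in range(m + 1):
        D[0][j] = j                        # insert all of t[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if s[i - 1] == t[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,        # deletion
                          D[i][j - 1] + 1,        # insertion
                          D[i - 1][j - 1] + sub)  # substitution / match
    return D[n][m]

assert edit_distance("kit", "quit") == 2   # insert 'u', substitute 'k' -> 'q'
```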
Throughout the paper, comparative examples are used to show the higher accuracy of our algorithms. The rest of the paper is organized as follows. Section 2 gives a background description of the algorithm for calculating field similarity presented in [LL+99]. Section 3 proposes our algorithms for calculating field similarity and exhaustively compares the new algorithms with the previous one. Section 4 provides experiments demonstrating the performance improvement brought by the new algorithms.

2. Preliminary Background

This section gives a brief description of the algorithm for calculating field similarity presented in [LL+99].
• If there is a total of x characters in a word, then we deduct 1/x from the maximum degree of similarity of 1 for each character that is not found in the other word. For example, if we compare "kit" and "quit", then DoS_kit = 1 - 1/3 ≈ 0.67, since the character k in "kit" is not found in "quit", and DoS_quit = 1 - 2/4 = 0.5, since the characters q and u in "quit" are not found in "kit". Exercise: compute the field similarity of the field "address" of records 1 and 2 in Table 1. In the same way, we obtain the following results.

3. Proposed new Field Similarity algorithm

This section proposes a new algorithm, the Moving Contracting Window Pattern Algorithm (MCWPA), to calculate field similarity. Firstly, we define the window pattern: all the characters within the window, taken as a whole, constitute a window pattern. Take the string "abcde" as an example: when a window of size 3 slides from left to right, the window patterns obtained are "abc", "bcd" and "cde". Let field X have n characters (including blank spaces and commas; the same applies in what follows) and let the corresponding field Y have m characters; let w denote the window size, and let Fx and Fy denote fields X and Y. The field similarity for Fx and Fy is computed from SSNC, the Sum of the Square of the Number of the same Characters between Fx and Fy; SIMF(X,Y) approximately reflects the ratio of the total number of common characters in the two fields to the total number of characters in the two fields. Imagine we have two windows, one for each field. The basic idea is to begin with a large window size: if a window pattern in field 1 is the same as one in field 2, we record the contribution of this match in SSNC and mark these window patterns as inaccessible, to avoid revisiting them in the following rounds. In every subsequent round the window size decreases by 1, and within one round the windows move from left to right as the search for identical window patterns goes on. The complete algorithm (MCWPA) for calculating SSNC, and the calculation process in detail, are shown in Figure 2. Exercise: what is the field similarity between field 1 "abcd" and field 2 "abcd"? (The answer is 100%.)

Analysis and Comparison of the Two Algorithms of Field Similarity

This section gives some examples showing that MCWPA can overcome drawbacks that exist in the previous field similarity algorithm, and presents the logic behind the design of MCWPA.
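A minimal sketch of MCWPA as we read the description above. Two details are our assumptions rather than statements from the text, since Figure 2 is not reproduced here: a match of window size w contributes (2w)² to SSNC, and SIMF(X,Y) = sqrt(SSNC/(n+m)²). These choices reproduce the stated answers: 100% for two identical fields, and the sub-0.55 value in the LBWS example later in the paper.

```python
from math import sqrt

def simf(x: str, y: str) -> float:
    n, m = len(x), len(y)
    fx, fy = list(x), list(y)            # None marks a character "inaccessible"
    ssnc = 0.0
    for w in range(min(n, m), 0, -1):    # contracting window size
        i = 0
        while i + w <= n:
            pat = fx[i:i + w]
            if None not in pat:
                for j in range(m - w + 1):     # slide over field y, left to right
                    if fy[j:j + w] == pat:
                        ssnc += (2 * w) ** 2   # assumed contribution of a match
                        fx[i:i + w] = [None] * w   # mark both patterns
                        fy[j:j + w] = [None] * w   # inaccessible
                        break
            i += 1
    return sqrt(ssnc) / (n + m)          # assumed SIMF = sqrt(SSNC/(n+m)^2)

assert simf("abcd", "abcd") == 1.0                      # the exercise above
assert abs(simf("abcdefghij", "ghidefabcj") - sqrt(3 * 36 + 4) / 20) < 1e-12
```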
Example 2 (continued): the result of the previous algorithm shows that these two fields are 50% similar, whereas the result of MCWPA is about 10%, which is quite close to the expectation. Analysis: this example reveals a drawback of the previous algorithm. In it, DoS_x1, DoS_x2, ..., DoS_xn and DoS_y1, DoS_y2, ..., DoS_ym are the maximum degrees of similarity for the words O_x1, ..., O_xn and O_y1, ..., O_ym respectively. If quite a number of words in one field are similar to only one word in the other field and dissimilar to the other words, the previous algorithm gives an inaccurate result. MCWPA overcomes this problem by marking the same characters in the two fields as inaccessible so as to avoid revisiting them.

Example 3: calculate the following field similarity for two cases with the two algorithms. Analysis: clearly, the similarity in case 1 should be higher than in case 2. However, the identical results of the previous algorithm show that it considers "abc1de" and "de1abc" in case 2 the same, which disagrees with common sense; in the experiment section we show that this is fatally erroneous on some datasets containing Chinese names. Further study shows that the previous algorithm's adoption of the word as its basic unit makes it unable to distinguish two exactly identical fields from two fields containing the same words in different sequences. To improve accuracy, MCWPA is based on substrings and uses the character as its unit. In this example, if the unit is the word, both case 1 and case 2 have two identical words; if the unit is the character, case 1 has 6 identical characters and case 2 has 5. As expected, SIMF(X,Y) in case 1 is larger than in case 2 when MCWPA is employed. (Note: for case 1, the two algorithms produce the same result.)

Example 4 analysis: intuitively, in case 1, "Fu Hui" and "Mr Fu Hui" should be the same person. In case 2 there is some likelihood that, through a transposition error, "Fu Mr Hui" was originally "Mr Fu Hui"; but it is more likely that, through typographical errors, "Fu Mr Hui" was originally "Fu Mi Hui" or "Fu Ma Hui", etc. Factually, the two common words "Fu Hui" in field 2 of case 1 are contiguous, whereas in field 2 of case 2 they are interpolated by the word "Mr", which severely reduces the similarity between the two fields. Thus, intuitively and factually, the two fields in case 1 should be more similar than those in case 2. The previous algorithm nevertheless gives the same result for both cases, whereas the results based on MCWPA show that the similarity for case 1 is reasonably higher than that for case 2. With respect to characters, both case 1 and case 2 have 6 common characters ("Fu" and "1Hui"); following example 3, even MCWPA could not distinguish case 1 from case 2 by count alone. Further examination of the two cases reveals that in field 2 of case 1 these 6 characters are contiguous, while in field 2 of case 2 they are not. In order to reflect this difference in continuity despite the same number of common characters, MCWPA introduces the square into the calculation of SIMF(X,Y). In the calculation of SIMF(X,Y) in example 4 with MCWPA, the fundamental reason that the result of case 1 is larger than that of case 2 is that 6² > 2² + 4². Mathematically, the square of a sum is larger than the sum of the squares, i.e. (a + b + ... + n)² > a² + b² + ... + n², whenever the terms are positive and there are at least two of them.
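As a quick illustration of the continuity weighting with the simf sketch above (the fields here are illustrative stand-ins for example 4's):

```python
# Equal common-character counts, different contiguity (illustrative fields).
print(simf("Fu Hui", "Mr Fu Hui"))  # one contiguous 6-character match -> 0.8
print(simf("Fu Hui", "Fu Mr Hui"))  # the match is split by "Mr" -> ~0.60
```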
In this way, the introduction of the square into the calculation of SIMF(X,Y) overcomes the continuity problem that led to the inaccurate results of the previous algorithm.

The Comparison of Time Complexity between the Two Algorithms

For pedagogical reasons, suppose we have two fields with the same number of words (W) and the same number of characters (N). For the previous algorithm: every word in one field needs to be compared with every word in the other field to find the maximum degree of similarity. For MCWPA, some preparatory knowledge is provided as follows: when the window size is N, the complexity is O(1²); when the window size is N-1, the complexity is O(2²); ...; when the window size is 1, the complexity is O(N²). We will discuss the following two situations: 1) with a user-specified SIMF(X,Y) threshold (ST), and 2) without a user-specified ST. Since situation 1 is more common and therefore of more practical and theoretical value, it deserves, and receives, more space in this paper.

UBWS and LBWS

From Figure 1 we know that MCWPA begins with window size N and carries on with N-1, N-2, .... By formula 2 and the above analysis, the total number of operations is 2·2²·(16/2)² = 512 (2 words in each field). With the revised version of MCWPA, formula 4 gives UBWS = N·ST = 17 × 0.8 ≈ 14. As mentioned before, the revised MCWPA algorithm skips the larger windows and uses only window size 14 to detect whether there are matching strings. Since the matching strings "abcdefgh ijklmn" are 15 characters long, the algorithm can find the matching strings "abcdefgh ijklm" in the first step and conclude that the two fields are duplicates; the total number of comparisons is therefore 1. What if there are no matching strings longer than UBWS? Then we need to continue with smaller windows, as described in Figure 1. In the spirit of UBWS, we can also look for a window size named the Lower-Bound Window Size (LBWS): with LBWS, even if a field were full of the maximum possible number of matching strings, all of length LBWS, together with the remaining matching strings, SIMF(X,Y) still could not meet ST. For example, for the two strings A = "abcdefghij" and B = "ghidefabcj", even though there are three 3-character-long matching strings and one 1-character-long remaining matching string, namely "abc", "def", "ghi" and "j", the SIMF(X,Y) between these two fields still cannot meet ST = 0.55.

Expandable Region Match Algorithm (ERMA)

The core of the UBWS, LBWS and MCWPA technology is to find the matching strings efficiently. In this subsection we propose the Expandable Region Match Algorithm (ERMA), which can collect information on all matching strings in O(3N) in the best case, O(k·N²) with k < 0.75 in the worst case, and O(β·N) with β < 6 in the average case. First, we present an introductory example to demonstrate the rough idea of ERMA. How do we find the matching substring "ab" for field 1 "xxxabxx" and field 2 "yabyyyy"? Suppose we already have character information about field 2: "y" is in position 1 of field 2, "a" is in position 2, ..., the last character "y" is in position 7, and there is no "c", "d", ..., or "x" in field 2. Searching field 1 for matching strings character by character, we can easily see that the first three "x"s have no counterparts in field 2. When we reach the fourth character, "a", we know there is an "a" in the second position of field 2; we then compare the fifth position of field 1 with the third position of field 2 and find another common character, "b", and when we compare the sixth position of field 1 with the fourth position of field 2, we find they are not the same. Thus we have found the matching substring "ab". The crucial point for ERMA is that we must have position information for every character in field 2 in advance.
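A minimal sketch of this introductory idea (function and variable names are ours; positions are 0-indexed here, while the text counts from 1):

```python
from collections import defaultdict

def find_match(f1: str, f2: str):
    positions = defaultdict(list)          # step 1: index field 2 by character
    for pos, ch in enumerate(f2):
        positions[ch].append(pos)
    best = ("", -1, -1)                    # (substring, start in f1, start in f2)
    for i, ch in enumerate(f1):
        for j in positions[ch]:            # each occurrence of ch in field 2
            k = 0
            while i + k < len(f1) and j + k < len(f2) and f1[i + k] == f2[j + k]:
                k += 1                     # extend the match to the right
            if k > len(best[0]):
                best = (f1[i:i + k], i, j)
    return best

print(find_match("xxxabxx", "yabyyyy"))    # -> ('ab', 3, 1)
```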
Next, we introduce ERMA in detail through several examples; for illustrative reasons, both fields consist only of ordinary characters (a-z).

Example 6: locate all matching strings by ERMA. Imagine a character-region with 26 sub-regions: an "a" sub-region, a "b" sub-region, and so on. Starting with positions 1, 2, 3, ... of field 2 (excluding blank spaces), we put each character "x" into the "x" sub-region together with that character's position information. For example 6, the result after step 1 is shown in Figure 3. Since "b" is in position 3, b(3) is put into the "b" sub-region. In the "a" sub-region there are 3 elements, "aa", "ab" and "ax", since there are three occurrences of "a" in field 2; note that ax(6) records the position of the "a" (6), not that of the "x". The Capacity Limit of a character-region is the upper limit on the number of elements in a sub-region. If the Capacity Limit for Figure 3 is 1, we need to partition the "a" sub-region further, that is, to expand it; the result after expansion is shown in Figure 4.

In step 2, for every character in field 1 we: 1) obtain the longest matching string starting from that character, based on the character-region built in step 1; and 2) record the length of that longest matching string and the corresponding starting position in field 2. For example 6, we begin with the string starting at the first character "a", namely "akabc...". Based on the character-region shown in Figure 4, the first character "a" has 3 common characters, while the second character "k" meets a "null" at level 2 of the character-region; this means that the string starting at the first character "a" has only a 1-character-long longest matching string. Since the longest matching string "a" has 3 occurrences in field 2, we randomly choose any one of them (the reason for the random choice is given in Section 3.2.2.1.3; in practice, to guarantee that the occurrences are chosen with equal probability, machine-generated random numbers are used to make the decision). A record is then generated with the information that the length is 1 and the position is any one of the three choices "1", "2" and "6", say "2", and this record is linked to the first character "a" (see Figure 5). It is easily seen that the string starting at the second character, "k", has no matching string. For the string starting at the third character "a", namely "abc axyz mo", the character-region of Figure 4 again shows 3 common characters for "a", while the second character "b" meets a "b" at level 2 of the character-region with a pointer to position 2 of field 2. Based on this information, the string "abc axyz m..." in field 1 is next compared with the string "abc axyz m..." starting from position 2 of field 2, and this comparison yields a 10-character-long matching string, "abc axyz m". Similarly, a record is generated with the information that the length is 10 and the position is 2; it represents that a substring starting with this "a" in field 1 has a 10-character-long matching string in field 2 starting from position 2.
This record is linked to the third character "a". Since we have processed the first character of the 10-character-long matching string, we can skip the comparisons for the next 9 consecutive characters ("bc axyz m") by the following technique, Expectation: if a character is in its expected position, we do not need to compare it. Take the fourth character, "b", as an example. Its expected position in field 2 is 3, since "b" belongs to an existing matching string that starts at "a", the position of that "a" is 2, and "b" is next to it. From the character-region shown in Figure 4 we find that the character "b" has only one occurrence, position 3, which is the same as its expected position, so we can obviously skip its comparison. If there are several occurrences of "b" (a phenomenon called Conflict Type 1), we can still skip the comparison for the "b" in the expected position, but the other occurrences of "b" are processed in the same way as the third character "a" above. The result after step 2 is shown in Figure 5: the top level is a group of pairs representing, for each character, the length of the longest matching string starting with it and the corresponding position in field 2.

In Figure 5, since the length value (10) of the third character "a" is the largest, this character is processed first. The sorting result is shown as a train of numbers at the bottom of Figure 5 that indicates the processing sequence. The process starts with this character "a", since the first number in the train points to it; it is also the starting character of the longest matching string of this processing round. We mark the 10 consecutive characters starting from this "a" in both fields as inaccessible, and correspondingly mark as inaccessible all numbers in the train that link to these characters. The result is shown in Figure 6 ("The result after 10 characters are marked in step 3"), whose top level reads (1,2) (0) (10,2) (9,3) (8,4) (6,6) (5,7) (4,8) (3,9) (1,11) (1,13) over field 1 "akabc axyz mo" and field 2 "aabc axyz muo".

We continue the current round with the leftmost accessible number in the train; for Figure 6 it is "8", which points to "a". The top-level information in Figure 6 about this character "a" indicates that it has a matching string starting from position 2 of field 2. Unfortunately, the character in that position has been marked as inaccessible, which means it already belongs to another matching string. This phenomenon is called Conflict Type 2. The solution to Conflict Type 2 is: if we find that a character "x" with length l has been marked inaccessible, we skip processing "x" and continue to process the other characters with the same length l; after all characters of length l are finished, we go to a new round by repeating steps 2 and 3, but inaccessible characters are not processed any more. In Figure 6 we continue with the next accessible number in the train, "10"; it points to "o", whose length is also 1, so we find another matching string and mark it in both fields. Since the length of the character "k" linked from the next accessible number in the train is 0, which is less than 1, the current round ends.

Implementation of ERMA and Time Complexity

For step 1, there are two types of implementation: 1) a fixed-size (26) array representing the character-region with Capacity Limit equal to 1;
2) a tree whose nodes have no more than 26 children. The disadvantage of the array-based implementation is more storage: in Figure 3, for example, it needs to store "k" even though k's value is "null", while the tree-based implementation does not. The advantage coupled with this space disadvantage is faster search: to find "c", we simply check whether array[3] is "null", because "c" is the third letter alphabetically; in the tree-based implementation, comparisons must be made at non-leaf nodes along the path to the leaf, even though they are negligibly cheap. With either of these two data structures the character-region can be built in O(N) time. In addition, another choice is a fixed-size (26) array representing the character-region with Capacity Limit greater than 1; it is a compromise between the array and tree implementations with regard to time and space.

For step 2, if there is no Conflict Type 1, we can collect the information for all characters in field 1 in O(N). In the worst case of heavy conflict, the time complexity is O(k·N²) with k < 0.5 (for example, field 1 = "abababab" and field 2 = "aaaaaaaa"). In the average case, empirically and experimentally, Conflict Type 1 occurs only within a small scope, so the time complexity is O(β·N) with β < 2.

For step 3, when we sort the characters according to the length of the longest matching string starting from each, we can use the radix sort approach [CP01], whose time complexity is O(N). If there is no Conflict Type 2, one round is enough to find all matching strings. In the worst case of heavy conflict, because the positions are chosen at random as mentioned before, the time complexity is O(k·N²) (empirically and experimentally, k < 0.25; for example, field 1 = "abababab" and field 2 = "aaaaaaaa"). In the average case we can find all matching strings within 2 rounds, so the time complexity of step 3 is correspondingly O(β·N) with β < 2.

Summary of the Situation with a Given ST

• For MCWPA, if we find matching strings at least as long as UBWS, we conclude that the two fields are duplicates. If we cannot, we need to make a choice once again: one option is to continue MCWPA at cost LBWS·(N-LBWS)², the other is ERMA at cost 6N, and we choose whichever of LBWS·(N-LBWS)² and 6N is smaller. If MCWPA is our choice, we use window size LBWS to search for matching strings; if we cannot find any, we conclude that the two fields are NOT duplicates; if, unfortunately, we can, the situation becomes quite complicated and we switch to ERMA.
• For ERMA (we discuss the average case in terms of conflict type, up to the third round), the same process carries on until either we can conclude whether the fields are duplicates or all characters are marked inaccessible; every further round costs less and less, because more and more characters are marked inaccessible. As discussed before: if there is no Conflict Type 2, we can safely reach the conclusion within < 5N; if there is Conflict Type 2 and we reach the conclusion within the first round, the time complexity is < 5N, and within the second round, < 5N + N. Empirically, on average the whole process ends within 3 rounds, which corresponds to about 6N. In summary, we reach the conclusion within 2 rounds and the time complexity is 6N = 60; in this example ST = 0.48 is quite low, so MCWPA cannot be used. Empirically, if ST is greater than 90%, MCWPA will be used in the majority of cases, which means the time complexity will be less than O(6N).
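Turning to the case without a user-specified threshold (next), all matching strings must be collected for formula 2. A straightforward quadratic sketch of that task, greedy longest-first matching with inaccessibility marking (not the indexed O(β·N) ERMA implementation described above), might look like this:

```python
def all_matching_strings(f1: str, f2: str):
    a, b = list(f1), list(f2)
    matches = []
    while True:
        best_len, best = 0, None
        for i in range(len(a)):
            for j in range(len(b)):
                k = 0
                while (i + k < len(a) and j + k < len(b)
                       and a[i + k] is not None and a[i + k] == b[j + k]):
                    k += 1
                if k > best_len:
                    best_len, best = k, (i, j)
        if best_len == 0:
            return matches
        i, j = best
        matches.append("".join(a[i:i + best_len]))
        a[i:i + best_len] = [None] * best_len    # mark both sides inaccessible
        b[j:j + best_len] = [None] * best_len

print(all_matching_strings("abcdefghij", "ghidefabcj"))  # ['abc', 'def', 'ghi', 'j']
```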
Without a User-Specified ST

In this situation, because no ST is available, all matching strings need to be found so that formula 2 can be used to calculate SIMF(X,Y). ERMA is employed to perform this task, so part of the above conclusions apply here. If there is no Conflict Type 2 (we discuss the average case for Conflict Type 1), all matching strings can be found within one round, at time complexity O(5N). In the worst case of heavy Conflict Type 2 the time complexity is O(k·N²) (empirically and experimentally, k < 0.75); in the average case it is O(β·N) with β < 6.

Experiment Results

We conducted four sets of experiments with both algorithms. The first dataset is a merger of two datasets from two campus surveys conducted through an electronic form within a mass-sent email; it has 782 records. The second dataset is from the 1990 US Census, a freely downloadable dataset from http://www.cs.toronto.edu/~delve/data/census-house/desc.html; it has 22784 records. The third and fourth datasets are synthetically generated, each with more than 200,000 records. We compare the two algorithms by two criteria: 1) Miss Detection (duplicate records are not detected) and 2) False Detection (similar non-duplicate records are treated as duplicates). The results are presented in Figures 8-11.

Analysis: the experimental results on the four datasets consistently indicate that, with regard to Miss Detection, the two algorithms perform roughly the same; in terms of False Detection, however, MCWPA performs much better than the previous algorithm. Further study of the test datasets shows that the name field contains similar non-duplicate names such as "Gao Hua Ming" and "Gao Ming Hua". As analyzed in example 3, the previous algorithm treats two fields with the same words in different sequences as matching fields, so its high False Detection rate begins to make sense. There are also similar cases in which the previous algorithm treats names such as "zeng hong" and "zeng zeng" as the same; as analyzed in example 2, MCWPA identifies a large difference in the calculated field similarity between such fields. Generally, as the examples presented above show, the previous algorithm tends to over-evaluate SIMF(X,Y), while MCWPA does not. We observe from both experiments that MCWPA is roughly equally effective across the entire range of SIMF(X,Y) thresholds; as opposed to this, the False Detection rate of the previous algorithm increases significantly as the SIMF(X,Y) threshold becomes lower and lower. The Miss Detection diagrams show that both algorithms perform well only in the low-threshold region, but the False Detection diagrams indicate that in that region the False Detection rate of the previous algorithm is very high. This means that, with the previous algorithm, choosing a low SIMF(X,Y) threshold to satisfy the Miss Detection requirement inevitably yields poor False Detection performance; this conflict does not show itself in MCWPA.

Conclusion

This paper has presented a new algorithm (MCWPA) for the calculation of field similarity.
In essence, MCWPA improves on the previous algorithm in the following aspects: 1) the introduction of marking the common characters as inaccessible to avoid revisiting them, presented in example 2; 2) the adoption of the character, instead of the word, as the unit for the calculation of field similarity, to improve accuracy, presented in example 3; 3) the introduction of the square into the calculation of field similarity, to reflect the difference in continuity despite the same number of common characters, presented in example 4; and 4) the introduction of UBWS, LBWS and ERMA to achieve higher efficiency, presented in example 5.
An Exploration of the Mechanism of Educational Scientific Decision-making in View of Bounded Rationality

The scientific decision-making of education policies is not an absolutely rational process. Bounded rationality, which takes rationality as its standard of judgement, is the essential connotation of the scientific decision-making of education policies. Based on this view, and through shaping reasonable education policy values, this research gives full play to the driving force of educational scientific research on education policies and designs effective education decision-making information content and running agendas, thereby optimizing the weighing principle for education policy proposals, advocating risk assessment and piloting of education decisions, and constructing the institutional rationality of educational scientific decision-making, so as finally to realize the rationality and scientificity of education policy decision-making.

Introduction

Educational scientific decision-making is a key factor in whether education policy activities can run duly and achieve the desired results. Improving the scientificity of education policy decision-making is therefore not just the initial premise and basis for making all important education policies, but also an important source of decision support and driving force for completing the tasks of education plans. How to improve the scientificity, effectiveness and pertinence of education policy decision-making has thus become a major issue in the current education development strategy. It is generally acknowledged that "the ultimate goal of education decision-making comes down to the rational pursuit of education order", or in other words, that the scientific decision-making of education policies must strictly follow the principle of absolute rationality. However, educational scientific decision-making activities are not absolutely rational activities, and whether a decision lies within a reasonable range for weighing education interests is the intrinsic standard and basis for testing its scientificity. Generally speaking, rationality refers to the rational characteristics, subject to the law of social development, that appear in the ideological behavior, cognitive level and social practice of people who live within a social institution and are adapted to socially recognized ideals, value pursuits and normative principles; these characteristics are objective and also carry subjective initiative (Yang, 1999, pp. 34-38). After years of research and investigation, H. A. Simon, the founder of modern decision theory, creatively proposed the theory of bounded rationality, laying a foundation for modern decision theory. Bounded rationality theory holds that the absolutely "rational man" does not exist in the decision-making process; only the "administrative man" of bounded rationality has practical existence and significance (Meng, 1999, pp. 69-75). Thus the entire course of education policy activity displays the main content of bounded rationality.

(Acknowledgements: This paper is funded by the youth project of education of the National Social Science Foundation, "Research on the value foundation of educational policy from the perspective of political philosophy" (CAA150123). ZHU Mei-xia, Master of Education, Chinese Teaching, School of Education, Henan University.)
The requirements of scientific decision-making are mainly manifested in two aspects: the first is to "understand the various factors of decision-making as fully as possible and estimate the possible outcomes and probabilities of every course of action as accurately as possible"; the second is to "master decision-making principles and methods as proficiently as possible, up to the point of discovering new principles and methods" (Meng, 1999, pp. 69-75). Similarly, whether education policy decision-making, which is filled with education interests, can rest on the relative satisfaction and effectiveness of the many interested parties determines its scientificity to the greatest extent; that is to say, educational scientific decision-making will not always realize all of its intents, but it can ensure the relative optimality of the decision. The study of educational scientific decision-making from the viewpoint of reasonable bounded rationality therefore becomes particularly important. The scientization of education decision-making involves the scientization of both its ideas and its techniques (Zhang, Hu, & Qu, 2009, pp. 138-158), so the mechanism of educational scientific decision-making in view of bounded rationality should include the following running principles and mechanisms.

I. Shaping Reasonable Education Decision-making Values

The essence of reasonable education decision-making is the concentrated embodiment of intentionality, regularity and scientificity. Educational scientific decision-making cannot exhaust all laws; the principles it follows are as follows. First, the fundamental goal of education activities is to cultivate human beings and promote human development and perfection. The human scale and the human factor are thus the primary premise and objective need to be attended to; the law of human somatopsychic development is an important basis and guide for evaluating the scientificity of education decision-making and is the proper expression of the perfection of personality, constituting the value premise and logical starting point of the scientificity of education decision-making. Second, deeply understanding and complying with the laws of education development and promoting the balanced development of education activities are the intrinsic requirement and necessary choice of educational scientific decision-making; reflecting education laws and pursuing a reasonable and fair education order, so as to standardize the direction of education development, guide education practice and promote the perfection of education behavior, form the main content of educational scientific decision-making. Third, education, as an important part of the social system, shoulders the great responsibility of realizing social development; how to effectively promote the progress of social civilization and achieve the values of fairness and justice in the social structure is therefore a vital function of educational scientific decision-making. From the perspective of public policy decision-making, educational scientific decision-making needs consciously to comply with the law of social development, settling current social issues, envisioning the blueprint of education development, and realizing the social values of scientific decision-making on the basis of summarizing the experience of previous decision-making activities.
II. Giving Full Play to the Driving Force of Educational Scientific Research on Education Policies

Scientific research is the theoretical foundation of all scientific decision-making activities. The Program for the Reform and Development of Education in China points out that, on the basis of fully understanding the laws of education development and combining present-stage education practice with the future developing trend, vigorously developing educational scientific research is a strong guarantee for improving the scientific level of education decision-making and promoting education reform and development. The National Education Plan further advocates constructing an educational consulting think tank with Chinese characteristics and striving to conduct pioneering and strategic education policy consulting studies, placing studies concerning national overall development strategy and planning high on the agenda. Philosophy holds that understanding the world and changing the world are the two major themes of human activity, and science holds that the purpose of scientific research is scientific practice and application. Researchers and research teams should therefore thoroughly study education laws, sum up education experience and answer the significant theoretical and practical questions related to education reform and development, all of which converge on the problem consciousness and practical content of educational scientific decision-making. In nature, "education research is not essentially the self-fulfilment and self-realization of a knowledge system; it is a kind of practical rationality based on the nature and foundation of education practice" (Yang, 2010, p. B1). Owing to the limits of researchers' knowledge and strength, this kind of practical rationality is substantially a kind of "bounded rationality", and this validly scientific bounded rationality is one of the driving factors for realizing the scientificity of education decision-making.

III. Designing Effective Education Decision-making Information Content and Running Agenda

System theory and information theory hold that the process of any activity is one in which information circulates, is fed back and is enriched. In the environment of education decision-making, the input, output, re-input and re-output of information make the decision-making process increasingly complex, so how to control this dynamic information reasonably, maintain the ecological balance of education decision-making and maximize its impact is an important topic. Educational scientific decision-making can make decision-making information flow reasonably and keep information transparent, open and timely, effectively avoiding unscientific problems such as information asymmetry, malfunction and even fault, and improving the effectiveness and scientificity of education decision-making. Decision-making subjects need constantly to improve their abilities to perceive, obtain and process information, and on this basis to discover current educational contradictions, look into potential education policy issues and propose scientific countermeasures, particularly for those contradictions closely related to the education interests of the general public; they should always maintain high alertness and sharpness, make scientific decisions in time and defuse educational crises.
Meanwhile, importance should be attached to the value and function of consultants and executors in the decision-making process. Education policy consulting is an important path and mechanism of the scientific decision-making of education policy. As its subjects, consultants are usually the group who attend to the running laws and trends of education activities, and the contents they attend to and consult on are always closely associated with their own education interests. Starting from the expression of their own education interests, and on the basis of consulting, questioning and even criticizing, these consultants put forward opinions, suggestions and research achievements related to education policy decision-making, thereby influencing the entire decision-making process. Once brought into the scope of government education decision-making, education policy consulting pushes forward the scientific advancement of education decision-making through its unique attention effect and pressure function. Attention to the education decision-making consulting mechanism therefore plays an important supporting and securing role in improving the scientificity of education decision-making.

IV. Optimizing the Weighing Principle for Education Policy Proposals

Studies of science and of public policy consider a reasonable adjudication procedure and principle to be a significant tool and method for ensuring scientific decision-making. Reference to modern scientific methods and technology, reliance on policy adjudication principles and the formation of a reasonable procedure for adjudicating policy proposals will largely ensure the realization of educational scientific decision-making. Aiming at the problems of excessive external cost and decision-making cost resulting from "strategic behavior", public choice theory has developed two kinds of adjudication principles for policy proposals: the "demand-revealing process" and "voting against" (Mueller, 1992, pp. 70-81). The former is a process of selecting public resources in line with individual preference; its essence is to promote individual choice over public resources efficiently through tax collection, thereby avoiding high decision-making costs and improving decision-making efficiency. The latter is a comparing and weighing procedure: the suggested policy proposals are weighed and balanced against a given principle, the proposals with a low level of reasonableness are screened out in turn, and the relatively best proposal is finally chosen by voting against the others. The "voting against" adjudication principle can effectively reduce self-interest and self-concern in the decision-making process and improve the scientificity of education policy decision-making.

V. Advocating Risk Assessment and Piloting of Education Decisions

Owing to the constantly changing social environment and the complexity of education activities, the education decision-making process involves many risk factors, so risk assessment and piloting of education decisions have become an important part of educational scientific decision-making. Risk analysis of education decisions belongs to the macro process of education decision-making behavior. In line with the requirements and standards of system theory and complexity science, such analysis makes specific examinations and investigations of all the factors and causes in the complex system of education policy and identifies possible risk assumptions and trends.
Early evaluation of the implementation of education policies is the core issue of education policy risk analysis. This evaluation is based on the requirements of education activities and the practice of education policies, and its essence is to reduce adverse reactions in the process and thereby optimize the efficiency of education policy implementation. In other words, analysis and evaluation of education decision-making risk can prevent policy crises in time, defuse potential policy risks and advance the scientificity of education decision-making. At the same time, risk analysis alone cannot completely resolve the risk factors of education decision-making, which is why an education decision-making risk prevention and error-correcting mechanism, with decision-making experiments at its core, must be established and perfected. Selecting pilot sites for education policy experiments is now one of the universal forms of decision-making experiment, and this method can correct education decision-making errors as early as possible. Besides, there are no successful precedents to follow for policy decision-making plans, especially new education decision-making plans, so the experience, methods and efficiency of policy implementation can be obtained and displayed only in the process of policy experiments. The policy experiment of education decision-making is therefore not merely a critical path for maintaining the sustainable and healthy implementation of education policies, but also a necessary requirement of educational scientific decision-making characterized by bounded rationality.

VI. Building the Institutional Rationality of Educational Scientific Decision-making

Institutional rationality is the systematization, institutionalization and organization of rational values and principles. Educational scientific decision-making is a public policy decision-making behavior that takes rationality and availability as its goal, and institutional rationality is its important basis and guarantee. Generally, the subject and procedure of education decision-making must "correctly balance the relations between the central and the local, between centralized and decentralized management, and between centralized and decentralized decision-making" (Yang, 2010, p. B1); the essence is to handle well the relation between centralized and decentralized power in education decision-making, a handling that derives from the grasp of rationality in the education decision-making pattern and the demarcation of the scope of bounded rationality. One of the core issues of scientific decision-making theory is precisely this relation between centralized and decentralized power, which is particularly important in the field of education policy decision-making. Specifically, the reasonable distribution of decision-making content and information must be clarified and demarcated: which issues require centralized power, which require decentralization, and which require the combined effort of both. The essence of the centralization-decentralization issue in the process of education decision-making is the problem of efficiency and applicability, whose resolution greatly benefits the scientificity of education decision-making.
The education policy problem is, however, an unusually complex problem domain, affected by factors such as changes in the external social environment and shifts in the internal environment. In fact, neither the centralized nor the decentralized pattern of decision-making can exhaust all education policy issues. Hence, given an education decision-making environment that is constantly changing now and in the future, whether to centralize or decentralize depends on the nature and features of the issue at hand, and the core question throughout remains the applicability of the decision-making method. In a sense, the organic combination of the two patterns provides an effective guarantee of institutional rationality for developing the scientific character of education decision-making and for further advancing the reasonable operation of education policy activities.
Book review of: "Clinical aspects of electroporation" by Stephen T Kee, Julie Gehl, Edward W Lee

This article is a review of the book Clinical Aspects of Electroporation, by Stephen T. Kee, Julie Gehl, and Edward W. Lee, published by Springer Press. Basic information that should be helpful in deciding whether to read the book and whether to use it as a reference work is presented, including an introduction, a description of all the sections of the book, and a comparison with recently published books on the topic.

Introduction

Electroporation is a technique for driving molecules into intracellular targets that is gaining momentum in basic science and clinical practice. This book is focused on providing the reader with a concise yet thorough summary of recent breakthroughs in the field of electroporation. It comprises 21 chapters divided into four parts: introduction, electrochemotherapy, gene electrotransfer, and irreversible electroporation. The book is overall well structured and easy to follow, and can serve in part as an excellent reference work for biomedical engineers, scientists, and clinicians.

Part I. Introduction

The introduction provides the reader with a comprehensive review of the physics and physiology of electroporation. In particular, there is a clear description of the phenomena that occur at the cellular level after the application of electric pulses. The transient and permanent effects of different types of electrical stimulation are discussed in depth, and are described and illustrated so as to be clearly understood by readers who are not familiar with the topic. The difference between reversible electroporation (transiently increased cell permeability) and irreversible electroporation (non-reversible cell permeability resulting in altered cross-membrane ion flow and ultimately in cell death) is clearly described. The mathematical and physical aspects of electroporation are presented with a practical approach, so as to be helpful to both the basic scientist and the clinical investigator. The chapter on the different equipment at the end of the introduction, however, reads more like a company catalogue than a book chapter; a broader discussion of the advantages and disadvantages of the different waveforms and electrical protocols would perhaps have been more useful.

Part II. Electrochemotherapy

The second part of the book is focused on electrochemotherapy, and the first two chapters describe the basics of this technique at the cellular and the vascular level. More precisely, the increased uptake of chemotherapy drugs and the electroporation-induced vascular disruption are explained very well. The clinical experience in cancer patients is limited to two chapters. The first summarizes the experience of the ESOPE Group with small tumors (especially melanoma); it is well structured, with a section devoted to guiding clinicians on techniques for treating dermatological malignancies, and the photographic support is more than adequate. Unfortunately, the chapter does not report the experience of groups outside the ESOPE study; considering the small number of groups working worldwide on this therapy, this is a significant limitation for the reader. The second chapter is devoted to the clinical approach to large malignancies, especially the palliation of large cutaneous metastases. Again, the chapter is well written and exhaustive, with good photographic support and a good algorithm to guide clinicians.
This section could have been strengthened by adding a short report on the experience in veterinary patients, where large tumors have been successfully treated over the past decades. The last three chapters, despite being well written, are somewhat disappointing: they focus on electroporation in the bone, the brain, and organ lumens, but the discussion is limited to experimental results obtained in laboratory animals and in one dog with a spontaneous neoplasm.

Part III. Gene electrotransfer

The third part is focused on gene electrotransfer and thoroughly describes the different strategies for using this technique for curative and prophylactic purposes (gene therapy and DNA vaccines). The chapters point out the many challenges posed by gene delivery to different biological targets such as tumors, skin, muscle, and lungs. While most of the presented data were obtained in laboratory animals, the authors strive to translate their experience into a patient-oriented setting, providing several insights potentially useful to clinicians willing to start clinical trials of this therapy.

Part IV. Irreversible electroporation

The fourth and last part is focused on a recently developed electroporation-based strategy: direct tumor ablation by irreversible electroporation (IRE). The three chapters of this section are thorough, well written, and sequentially constructed. The first describes the basics of irreversible electroporation and the first preclinical experience in rabbit and swine liver; the second reports the results of IRE in a rabbit model of head and neck cancer; and the third is focused on the translation of these results to human patients with liver metastases. The imaging support of the chapters is very comprehensive and allows an easy grasp of the fundamentals of the technique, helping the reader to understand the state of the art of IRE.

Comparison with the current literature

There is currently little available for readers except for a few other books:

1) Irreversible Electroporation. By: Boris Rubinsky; New York: Springer; 2010. Price: $169. ISBN 978-3-642-05419-8. 328 pages, Binding: Hardcover, 13 chapters. This book contains the deepest analysis of irreversible electroporation, from basic studies to clinical applications, and its authors are well-known opinion leaders in the field. Because the book is exclusively focused on irreversible electroporation, it reaches a depth of analysis that books with a broader view of electroporation cannot match. On the other hand, it completely lacks a discussion of all the other aspects of electroporation.

2) Electroporation in Laboratory and Clinical Investigations. By: Enrico Spugnini and Alfonso Baldi; New York: Nova Science; 2011. Price: $145. ISBN 978-1-61668-327-6, ISBN 978-1-61668-383-2. Binding: Hardcover, 18 chapters. This book offers a comprehensive and up-to-date overview of electroporation in mathematical modeling, bioengineering, molecular biology, plant biology, pathology, and veterinary and human oncology. It provides the reader with an in-depth analysis of the mathematical and physical basis of electroporation, describes the different types of electroporation currently adopted, and has a very good clinical section, strengthened by two chapters on electroporation in veterinary oncology that give the book an edge.
3) Advanced Electroporation Techniques in Biology and Medicine. By: Andrei G. Pakhomov, Damijan Miklavcic, and Marko S. Markov; Boca Raton, FL: CRC Press; 2010. Price: €112. ISBN 1439819068. Binding: Hardcover, 29 chapters. This book summarizes the most recent experimental findings and theories related to the permeabilization of biomembranes by pulsed electric fields. It focuses on the biophysical mechanisms of electroporation and on the applications of this phenomenon in biomedical research and medicine, providing a broad examination of the discoveries and clinical orientations in this field.

Final considerations on "Clinical aspects of electroporation"

Despite some limitations, there is a clear intent to provide as much information as possible to researchers interested in adopting electroporation techniques. Unfortunately, the title is misleading, since the clinical part is somewhat limited. As a result, this book will be more helpful to basic scientists than to clinicians; it is nevertheless definitely a book worth having as a reference.
Scaling limits for equivariant Szego kernels Suppose that the compact and connected Lie group G acts holomorphically on the irreducible complex projective manifold M, and that the action linearizes to the Hermitian ample line bundle L on M. Assume that 0 is a regular value of the associated moment map. The spaces of global holomorphic sections of powers of L may be decomposed over the finite dimensional irreducible representations of G. In this paper, we study how the holomorphic sections in each equivariant piece asymptotically concentrate along the zero locus of the moment map. In the special case where G acts freely on the zero locus of the moment map, this relates the scaling limits of the Szego kernel of the quotient to the scaling limits of the invariant part of the Szego kernel of (M,L). Introduction Let (M, J) be an n-dimensional complex projective manifold, and let (L, h) be an Hermitian ample line bundle on M. Suppose that the unique compatible connection on L has curvature Θ = −2i ω, where ω is a Hodge form on M. The pair (ω, J) puts an Hermitian structure H = g −iω on the (holomorphic) tangent bundle T M, hence a Riemannian structure g on M. Let G be a compact connected g-dimensional Lie group, and suppose given a Hamiltonian holomorphic action of G on (M, ω, J) unitarily linearizing to (L, h). For every k = 1, 2, . . ., there is a natural Hermitian structure on each space of holomorphic global sections H 0 (M, L ⊗k ), and a naturally induced unitary representation of G on H 0 (M, L ⊗k ). Let {V ̟ } ̟∈Θ be the finite dimensional irreducible representations of G, and for every ̟ ∈ Θ let H 0 (M, L ⊗k ) ̟ ⊆ H 0 M, L ⊗k be the maximal subspace equivariantly isomorphic to a direct sum of copies of V ̟ . There are unitary equivariant isomorphisms The action of G on L dualizes to an action on the dual line bundle L * in a natural manner; on the other hand, the G-invariant Hermitian metric h on L naturally induces an Hermitian metric on L * , still denoted by h, which is also G-invariant. Let X ⊆ L * is the unit circle bundle, with projection π : X → M. Then by the above the action of G on L * leaves X invariant. Furthermore, X is a contact manifold, with contact form given by the connection 1-form α. Since G preserves both the Hermitian metric and the holomorphic structure, it preserves the unique compatible connection, and therefore it acts on X as a group of contactomorphisms; given this, X has a standard G-invariant Riemannian metric. By these underlying structures, in the following we shall tacitly identify functions, densities and half-densities on X. In the following, to avoid cumbersome notation, we shall use the same symbol µ g for the symplectomorphism of M and the contactomorphism of X induced by g ∈ G. As is well-known, the spaces of smooth sections C ∞ M, L ⊗k may be unitarily and equivariantly identified with the spaces C ∞ (X) k of smooth functions on X of the k-th isotype for the S 1 -action, that is, obeying the covariance law f (e iϑ · x) = e ikϑ f (x) for x ∈ X and e iϑ ∈ S 1 . Let H(X) k ⊆ C ∞ (X) k be the subspace of functions corresponding to H 0 (M, L ⊗k ) under this isomorphism, so that H(X) =: +∞ k=0 H(X) k is the Hardy space of X. Thus (1) translates into (2) In this paper, we are concerned with certain C ∞ functions Π ̟,k on X naturally associated to each pair (̟, k) ∈ Θ × N. 
Namely, let us choose for any (̟, k) ∈ Θ × N an orthonormal basis s Then Π ̟,k is well-defined, that is, independent of the choice of the orthonormal basis, and in fact it can be intrinsically described as the distributional kernel of the orthogonal projection P ̟,k : L 2 (X) → H(X) ̟,k . We shall study here the asymptotic properties of the functions Π ̟,k , as ̟ is fixed and k → +∞. Let g be the Lie algebra of G, and denote by Φ : M → g * the moment map of the action of G on (M, 2ω). In [P1], it has been shown that for fixed ̟ one has Π ̟,k (x, x) = O(k −∞ ) as k → +∞, unless Φ π(x) = 0. On the other hand, if Φ π(x) = 0, and G acts freely on Φ −1 (0) ⊆ M, then by Corollary 1 of [P2] (working with a different normalization convention for the total volume) there is an asymptotic expansion where V eff : (Φ • π) −1 (0) → R is the effective potential of the action [BG]; its value on x ∈ (Φ • π) −1 (0) is the volume of the G-orbit in M through π(x). Thus the effective potential of the action controls the asymptotics of the restriction of Π ̟,k to the diagonal of X × X. In the particular case of the trivial representation ̟ = 0, V eff relates the asymptotics of Π 0,k and of the Szegö kernel of the symplectic reduction (M 0 , ω 0 , L 0 ) of (M, L, ω), expressing an obstruction to the conformal unitarity of the . Further developments on this problem are due to Charles [Ch], Hall and Kirwin [HK], Hui Li [L], Ma and Zhang [MZ]. Turning momentarily to the action free case, the fast decay of Szegö kernels away from the diagonal has stimulated interest in the asymptotics of their scaling limits near the diagonal. More precisely, suppose x ∈ X, and let ρ(z, θ) be a Heisenberg local chart for X centered at x, as in (18) below; in particular, if m =: π(x) this unitarily identifies (T m M, H m ) and C n with its standard Hermitian structure. As shown in Theorem 3.1 of [SZ], for any w, v ∈ C n the following asymptotic expansion holds as k → +∞ for the level-k Szegö kernel Π k : where and the a j are polynomials in w and v (see also [BSZ] for the leading term). Recall that here H m denotes the Hermitian structure of T M induced by ω =: i 2 Θ; this normalization convention accounts for the factor 1 π n in (3), unlike the earlier work [Z]. We shall conform here to [SZ]; thus the total volume of M is vol(M) = π n n! M c 1 (L) n . In this article, we shall study the scaling limits of the equivariant Szegö kernels Π ̟,k , and show that to leading order they are still simply related to the effective volume, certain data associated to the representation ̟, and (in the special case where G acts freely on Φ −1 (0) ⊆ M) the scaling limits of the Szegö kernel of the symplectic reduction. Furthermore, we shall see that equivariant scaling limits can also be expressed by the product of an exponentially decaying factor in v, w times an asymptotic expansion whose coefficients are polynomials in v and w. We remark that in the toric case equivariant asymptotics have been studied in [STZ]. To express our results, we need some basic facts about the local geometry of M along M ′ =: Φ −1 (0) [GS], [GGK]. Recall that if 0 ∈ g * is a regular value of Φ, then M ′ is a g-codimensional connected coisotropic submanifold of M, whose null-fibration is given by the orbits of the G-action. At any m ∈ M, let us denote by g M (m) ⊆ T m M the tangent space to the orbit through m, and by J m : . Therefore, we have orthogonal direct sum decompositions Given (4), if m ∈ M ′ and w ∈ T m M, we shall decompose w as . 
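The decomposition referred to here, consistent with the 'vertical, horizontal, transverse' labels explained in the next sentence and with the components w^h ∈ Q and w^t used later in the paper, presumably reads
\[
w \;=\; w^v + w^h + w^t, \qquad w^v \in \mathfrak{g}_M(m), \quad w^h \in Q(m), \quad w^t \in J_m\big(\mathfrak{g}_M(m)\big),
\]
with Q(m) the remaining horizontal summand of the orthogonal decompositions (4); this is a reconstruction, not a quotation of the source.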
The labels stand for vertical, horizontal, and transverse. This hints to the fact that in the special case where G acts freely on M ′ , the latter is a principal G-bundle on the symplectic reduction M 0 = M ′ /G; thus g M (m) is the vertical tangent fibre, while Q is a connection projecting unitarily to the tangent bundle of M 0 . Before stating our Theorem, another definition is in order. To this end, recall that if 0 ∈ g * is a regular value of the moment map then the action of G on Φ −1 (0) ⊆ M is locally free. Therefore, any m ∈ Φ −1 (0) has finite stabilizer subgroup G m ⊆ G. Suppose x ∈ X, Φ π(x) = 0. If G π(x) ⊆ G is the (finite) stabilizer subgroup of π(x), for every g ∈ G π(x) there exists a unique h g ∈ S 1 such that µ g (x) = h g · x, where µ g : X → X is the contactomorphism induced by g. We shall then let where χ ̟ : G → C is the character of the irreducible representation ̟. As above, ω =: i 2 Θ, where Θ is the curvature, and h is the Hermitian metric on T M associated to ω. Furthermore, as in (3) we shall express the asymptotic expansion for Π ̟,k in a Heisenberg local chart ρ centered at x. However, given that the dependence of Π ̟,k on θ and θ ′ is given by the factor e ik(θ−θ ′ ) and carries no geometric information, in the following we shall generally take θ = θ ′ = 0; with the identification T m M ∼ = C d induced by ρ understood, we shall set x + w/ √ k =: ρ w/ √ k, 0 . We then have: Theorem 1. Suppose that 0 ∈ g * is a regular value of Φ, and x ∈ X, Φ π(x) = 0. Let us choose a system of Heisenberg local coordinates centered at x. For every ̟ ∈ Θ and w, v ∈ T π(x) M, the following asymptotic expansion holds as k → +∞: and the a ̟j 's are polynomials in v, w with coefficients depending on x and ̟. We integrate the statement by the following remarks. • The remainder term can be given a 'large ball estimate' (that is, for u , v k 1/6 ), similar to the ones in [SZ]. More precisely, let R N (x, v, w) be the remainder term following the first N summands in (6). Given the description of Π ̟,k as an oscillatory integral (cfr (51 below), we may adapt the arguments in §5 of [SZ] to obtain that for u , v k 1/6 we have The bound also holds in C j -norm. • In the special case where G acts freely on Φ −1 (0), denote by X 0 ⊆ L * 0 the circle bundle of the reduced pair (M 0 , L 0 ), and by Π k the level k Szegö kernel of X 0 . If Φ π(x) = 0, let us denote by x its image in X 0 , and if w h ∈ Q π(x) let w h be its isometric image in the tangent space to M 0 . By (3) and Theorem 1, we obtain • Arguing as in §2.3 of [DP], one can see that Π ̟,k = O (k −∞ ) uniformly on compact subsets of the complement in X × X of the locus Thus it is natural to consider scaling limits at any (x, y) ∈ I(Φ). Given , a minor modification of the arguments in the proof of Theorem 1 leads to an asymptotic expansion • We are primarily interested in the case of complex projective manifolds. In view of the microlocal description of almost complex Szegö kernels appearing in [SZ], the results of this paper can however be extended to the context of almost complex symplectic manifolds. After this paper was completed, I learned of the rich paper [MZ] alluded to above. Using analytic localization techniques of Bismut and Lebeau for spin c Dirac operators, Ma and Zhang obtain among other things an asymptotic expansion for the trivial representation. 
Examples In the non-equivariant case, a key feature of scaling asymptotics of Szegö kernels expressed by (3) is the universal nature of the leading term, essentially the level-one Szegö kernel of the reduced Heisenberg group H n red . To express this more precisely, recall that the latter may be viewed as the unit circle bundle of the trivial line bundle L = C n × C over C n , endowed with the Hermitian metric The unit circle bundle is thus given by A Heisenberg chart for X centered at (0, 1) is As shown in [BSZ], for every k = 1, 2, . . . the level-k Szegö kernel is In the linear case, we shall derive from (7) an asymptotic expansion in the spirit of Theorem 1, at any x = z 1 , e −z 1 /2 for which the map γ z 1 : g ∈ G → µ g (z 1 ) ∈ C d is an embedding (that is, z 1 has trivial stabilizer in G); with minor changes, the arguments below apply when γ z 1 is an immersion (that is, z 1 has finite stabilizer in G). Example 2.1. Let A : G → U(n), g → A g , be a unitary representation, so that the underlying action on (C n , ω 0 ) is µ g (z) =: A g z (z ∈ C n ); here ω 0 =: i 2 n j=1 dz j ∧ dz j is the standard symplectic structure on C n . The standard Hermitian structure on C n is then A linearization to L is given by For any z 1 ∈ C n , a Heisenberg chart for X centered at z 1 , e − z 1 2 /2 is Thus, given x = z 1 , e − z 1 2 /2 ∈ H n red and v ∈ C n , in our notation Given an irreducible representation ̟ and w, v ∈ C n , by a straightforward computation using (7) we obtain where dg is the density on G associated to an invariant Riemannian metric of total volume one, and Given the simplifying assumption that z 1 has trivial stabilizer in G, there ), where e ∈ G is the unit and dist G is the Riemannian metric on G. Thus, it follows from (7) that the integrand of (8) on the other hand, we can transfer the integration to the Lie algebra g by the exponential map exp G : g → G, η → e η , and apply the rescaling η = 1 √ k ξ. Let A : g → u(n), η → A η , be the differential of the morphism of Lie groups A : G → U(n). Thus On the upshot, after some computations we obtain where now here Φ : V → g * is the moment map, and Φ ξ =: Φ, ξ . Suppose, to begin with, that Φ z 1 = 0. Then the linear phase ξ → Φ ξ z 1 has no stationary point in ξ, and since by (11) the integrand in (10) is absolutely convergent, the stationary phase Lemma applies to show that where in the latter equality integration has been shifted from g to the tangent space g(z 1 ) ⊆ C n at z 1 to the G-orbit of z 1 by the change of variables The Gaussian integral in (12) is (2π) g/2 e iω 0 (vt+wt,wv−vv)− 1 2 vt+wt 2 , and from this one computes Before considering the next example, let us recall from [BSZ] that for k = 1, 2, . . . an orthonormal basis of here J! =: d l=0 j l !, z J =: d l=0 z j l l . Example 2.2. The unitary representation of S 1 on C 2 given by t · (z 0 , z 1 ) =: (t −1 z 0 , tz 1 ) descends to a symplectic action on P 1 , with a built-in linearization to the hyperplane line bundle. The associated moment map is Clearly, any [z 0 : z 1 ] ∈ Φ −1 (0) has stabilizer subgroup {±1}. Since any S 1 -orbit in S 3 has length 2π and doubly covers its image in P 1 , the effective volume is identically equal to π = 2π 2 on Φ −1 (0). Therefore, Given that we have for ̟ ∈ Z and k ∈ N: By the Stirling formula, if b is fixed and a → +∞ we have Suppose then k = ̟ + 2s, s ∈ N, and choose (z 0 , z 1 ) ∈ S 3 lying over [z 0 : z 1 ]; in view of (13), (14) and (15), which fits with the asymptotic expansion of Theorem 1. 
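For the reader's convenience, the level-k Szegö kernel of the reduced Heisenberg group invoked as (7) throughout the examples above is, in the notation of [BSZ], presumably the Bargmann-type kernel
\[
\Pi_k\big(z,\theta;\, w,\varphi\big) \;=\; \frac{k^n}{\pi^n}\; e^{ik(\theta-\varphi)}\; e^{k\left(z\cdot \bar w \,-\, \frac{1}{2}\|z\|^2 \,-\, \frac{1}{2}\|w\|^2\right)},
\]
restated here following [BSZ] rather than the source's own display.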
Preliminaries In this section we shall collect some preliminaries and set some notation. If (M, J) is a complex manifold, any Kähler form ω on it determines an Hermitian metric h on the tangent bundle of M, and ω = −ℑ(h). The Riemannian metric g =: Since Heisenberg local coordinates centered at a given x ∈ X will be a key tool in the following, we shall briefly recall their definition [SZ]. Thus we now assume that L → M is an Hermitian ample line bundle, and ω = i 2 Θ, where Θ is the curvature of the unique compatible covariant derivative. Let us choose an adapted holomorphic coordinate system (z 1 , · · · , z n ) for M centered at π(x). This means that, when expressed in the z i 's, ω evaluated at π(x) is the standard symplectic structure on C n , that is, ω π(x) = i 2 n j=1 dz j ∧ dz j . Thus the choice of the z i 's determines a unitary isomorphism T π(x) M ∼ = C n . Let us next choose a preferred local frame e L for L at π(x), in the sense of [SZ]. Thus e L is a holomorphic local section for L in the neighborhood of π(x), satisfying (17) where ∇ is the covariant derivative of the connection, and h = g + iω. The local holomorphic frame for L uniquely determines a holomorphic dual local frame e * L for L * , determined by the condition (e * L , e L ) = 1, For δ > 0, let B 2n (0; δ) ⊆ C n ∼ = R 2n be the ball of radius δ centered at the origin. For an appropriate δ > 0, a system of Heisenberg local coordinates for X centered at x is then given by the map where a(z) =: e * L 2 = e L −2 . If w ∈ T π(x) M ∼ = C n , we shall denote by x + w the point in X with Heisenberg local coordinates (w, 0). It will simplify our exposition to make a little equivariant adjustment to the previous construction. Suppose that m ∈ M has finite stabilizer subgroup G m ⊆ G (this will always be the case when Φ(m) = 0 if 0 ∈ g * is a regular value of the moment map). Let U ⊆ M be a G m -invariant open neighborhhod of the identity, and suppose that a local holomorphic frame σ = e * L satisfying (17) has been chosen on U. Clearly, for every g ∈ G m we have g * (σ)(m) = h g · σ(m) (recall that g * (σ) = µ g • σ • µ g −1 ). We may then consider the new frame Then σ(m) = e L (m), and since the metric and the connection are G-invariant σ also satisfies (17). Moreover, we now have In the following, the underlying preferred local holomorphic frame in the definition of Heisenberg local coordinates will be assumed to satisfy (19). For ξ ∈ g, we shall denote by ξ M and ξ X the vector fields on M and X, respectively, associated to ξ by the infinitesimal actions of g. The moment map Φ : M → g * for the action on (M, 2ω) is related to the G-invariant connection form α on X by the relation Φ ξ = −ι(ξ X ) α, where Φ ξ = Φ, ξ . Proof of Theorem 1. To begin with, let us fix an invariant Haar metric on G, and let dg denote the associated measure; by Haar metric we mean that G dg = 1. Now if ρ : G → GL(W ) is linear representation on a complex vector space, for any ̟ ∈ Θ the projection P ̟ of W onto the the ̟-isotypical component W ̟ is given by [Di]. On the other hand, the unitary representation of G on H k (X) ⊆ L 2 (X) induced by the action on X is given by (g · f )(y) =: f (µ g −1 (y)) (f ∈ L 2 (X), y ∈ X). Therefore, the equivariant Szegö kernel Π ̟,k is given by where µ g : X → X is the contactomorphism associated to g ∈ G. Suppose x ∈ X, Φ(x) = 0, and set m =: π(x). We assume given a system of Heisenberg local coordinates for X centered at x. This choice gives a meaning to the expression x + w, for any w ∈ T m M ∼ = C n . 
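In the notation just introduced, the two displays referred to here presumably read
\[
P_\varpi \;=\; \dim(V_\varpi) \int_G \overline{\chi_\varpi(g)}\,\rho(g)\, dg,
\qquad
\Pi_{\varpi,k}(x,y) \;=\; \dim(V_\varpi) \int_G \overline{\chi_\varpi(g)}\; \Pi_k\big(\mu_{g^{-1}}(x),\, y\big)\, dg:
\]
the first is the isotypical projector of [Di], and the second is its application to the first variable of the level-k Szegö kernel.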
Then for every ̟ ∈ Θ and k ∈ N we have where χ ̟ : G → C is the character of the irreducible representation V ̟ [Di]. We shall now split the integration in dµ as the sum of two terms, one which is rapidly decaying as k → +∞, and another where integration is over a suitably shrinking neighborhood of the (finite) stabilizer subgroup G m ⊆ G. To this end, let us define for every k ∈ N an open cover {A k , B k } of G by setting A k =: g ∈ G : dist G g, G m > k −1/3 , (towards application of the stationary phase Lemma later in the proof, the exponent −1/3 used in the definition of A k and B k , could be replaced by −a, for any a ∈ (0, 1/2)). Here dist G : We may then split (22) as the first (respectively, second) summand in (23) is (22) with the integrand multiplied by a k (respectively, b k ). Proof of Proposition 4.1. Let dist M : M × M → M be the Riemannian distance function. We have: Lemma 4.1. There exists a positive constant C (dependent on w and v, but independent of k) such that for all k ≫ 0 and g ∈ A k we have Proof of Lemma 4.1. If not, we can find N ∋ k j ↑ +∞ and g j ∈ A k j such that ∀j = 1, 2, . . . we have Since dist G (g j , G m ) is bounded above by the diameter of the compact Lie hence also dist M µ g j (m) , m → 0. Therefore, g j → G m ; after passing to a subsequence, therefore, we may assume that g j → g 0 for some g 0 ∈ G m . Let us write g j = g 0 h j , where h j → e, and dist G (h j , e) = dist G (g j , G m ) ≥ k −1/3 j . Using the exponential map exp G : g → G, for all j ≫ 0 we can write h j = e ν j , for unique ν j ∈ g such that ν j = dist G (h j , e). Since G acts locally freely on Φ −1 (0), there exists c > 0 such that ν M (m) ≥ c ν , ∀ m ∈ Φ −1 (0), ν ∈ g (the former norm is in T m M, the latter in g). Hence, Working in preferred local coordinates centered at m, we have By definition of preferred local coordinates, it follows from (26) and (27) that On the other hand, we can rewrite (25) as a contradiction. Q.E.D. Returning to the proof of Proposition 4.1, by Lemma 4.1 and the offdiagonal estimate on the Szegö kernel in (6.1) of [C], we conclude whenever k ≫ 0 and g ∈ A k . The statement follows easily from (29). Q.E.D. Proof. If k ≫ 0, g ∈ B jk and |ϑ| > ǫ/2, then (w and v are held fixed). Since the singular support of the Szegö kernel Π is the diagonal diag(X) ⊆ X × X, we conclude that is a bounded family of smooth functions on S 1 when k ≥ k 0 , g ∈ B jk and |ϑ| > ǫ/2; here γ 0 is interpreted as γ 0 (e iϑ ), a cut-off function supported on a small open neighborhood of 1 ∈ S 1 . In the same range, therefore, for every l ∈ N we can find a constant C l > 0 such that Ψ (s) k,g < C l s −l for every s ∈ N, where Ψ (s) k,g denotes the s-th Fourier coefficient of Ψ k,g . In particular, this is true for s = k, hence Ψ (k) k,g < C l k −l . The same estimate then holds after integrating over B k , and this implies the statement. We are reduced to studying the asymptotics of Π To proceed, let us introduce the parametrix for the Szegö kernel contructed in [BS]. Thus, up to a smoothing term which does not contribute to the asymptotic expansion, we can represent Π as a Fourier integral operator of the form Π(y, y ′ ) = +∞ 0 e itψ(y,y ′ ) s(y, y ′ , t) dt (y, y ′ ∈ X), where the phase satisfies ℑ(ψ) ≥ 0, and the amplitude is a semiclassical symbol admitting an asymptotic expansion s(y, y ′ , t) ∼ +∞ t=0 t n−j s j (y, y ′ ). 
In view of Lemma 4.2, inserting (33) into (32), and multiplying the integrand by γ 0 , we obtain in the last equality we have performed the coordinate change t kt, and set A ̟kj (g, t, ϑ) Let exp G : g → G be the exponential map, and let E ⊆ g be a suitably small open neighborhood of the origin 0 ∈ g, over which exp G restricts to a diffeomorphism E → E ′ =: exp G (E). Since the shrinking open neighborhood E k ⊆ G of the identity e ∈ G is definitely contained in E ′ , we may express the integration in dg using the exponential chart. To this end, let us fix an orthonormal basis of g, so as to unitarily identify g with R g , and let us write ξ for the correspondig linear coordinates on g. We shall denote by H G (ξ) dξ the local coordinate expression of the Haar measure dg under the exponential cart; the orthonormality of the chosen basis of g implies that H G (0) = 1. With some abuse of language, we shall write b k for the composition b k • exp G , and assume that b k (ξ) = b 3 √ k ξ for a fixed function b = b 1 on E. We shall also leave exp G implicit in the expression for Ψ kj and A ̟kj , which shall be viewed in the following as functions of ξ ∈ E. Thus, replacing g by ξ ad dg by H G (ξ) dξ in (34), and then performing the change of variable ξ = ν/ √ k, we obtain Our next step will be to Taylor expand Ψ kj in descending powers of k 1/2 , by relying on (64) and (65) of [SZ]; to this end, we shall need the Heisenberg local coordinates of µ g −1 • r e i(ϑ+ϑ j ) µ g −1 Recalling that m = π(x) and G m ⊆ G is the stabilizer subgroup, let us consider the isotropy representation G m → GL T m M , g → d m µ g ; for every j = 1, . . . , N x and w ∈ T m M, let us set w j =: d m µ g −1 j (w) ∈ T m M. In view of our choice of ω = i 2 Θ as the reference Kähler form in our construction of Heisenberg local coordinates, we then have: Lemma 4.3. Suppose x ∈ X, Φ • π(x) = 0, and fix a system of Heisenberg local coordinates centered at x. Then there exist C ∞ functions Q, T : C n × R g → C n , vanishing at the origin to third and second order, respectively, such that the following holds. For every w ∈ T π(x) M, −π < ϑ < π, ν ∈ g, as k → +∞ the Heisenberg local coordinates of where Q j , T j : C n × R g → C n vanish at the origin to third and second order, respectively. Therefore, the Heisenberg local coordinates of X j,k (x, w) have the form θ k −1/2 , k −1/2 w j − ν M (m) + T k −1/2 w j , k −1/2 ν , for an appropriate smooth function θ : (−δ, δ) → R. Proof of Claim 4.1. Recall that Heisenberg local coordinates depend on the choice of a preferred local holomorphic frame e L of L an open neighborhood U ⊆ M of m; as discussed in §3, without loss of generality we may assume that U is G m -invariant and g * (e * L ) = h g · e * L , ∀ g ∈ G m . Let us write σ = e * L . We have x + sw = σ(m + sw)/ σ(m + sw) , where m + sw ∈ U is the point with local preferred holomorphic coordinates w ∈ C n . Therefore, has Heisenberg local coordinates −ϑ j , z j (w, s) , where z j (w, s) are the local preferred holomorphic coordinates of µ −1 g j (m + sw). Therefore, θ(s, 0) = −ϑ j for all s. We conclude that θ(s, t) = −ϑ j + t R(s, t), for some smooth function R. On the other hand, X ′ is G-invariant, and G acts horizontally on it (in other words, for every x ∈ X ′ and ξ ∈ g we have ξ X (x) = ξ ♯ M π(x) , where ξ ♯ M denotes the horizontal lifting of ξ M ). Lemmata 2.4 and 3.3 of [DP] then imply that θ(0, t) = t 3 S(t) for a smooth function S(t). Thus, R(s, t) = t 2 R 1 (t) + s d(s, t) for smooth functions R 1 (t), d(s, t). 
We conclude that θ(s, t) = t 3 R 1 (t) + st d(s, t), and the statement follows by setting Returning to the proof of Lemma 4.3, in order to determine d 0 we recall that the expression for α in Heisenberg local coordinate is α = dθ + p dq − q dp + β( z 2 ). Inserting the local expression for γ s (t) that we obtain from (38) and Claim 4.1, we obtain where G 1 vanishes to second order for (s, t) = (0, 0). On the other hand, we have d t (γ s )(1) = −ν X γ s (t) . Therefore having used that Φ ν = −ι(ν X )α. Because of the G-equivariance of Φ, Φ • π γ 0 (t) = Φ • π µ g j e tν (x) = 0 for every sufficiently small t; therefore, ∂Φ•γ ∂t (0,t) = 0 identically, where with abuse of language we have written Φ for Φ • π : X → g. This implies where m = π(x), and G 3 vanishes to second order for (s, t) = (0, 0). Comparing (40) and (41) with (39), we obtain d 0 = ω m (ν M , w), G 2 = G 3 . To complete the proof of Lemma 4.3, we need only take s = t = 1/ √ k, and remark that in Heisenberg local coordinates r e iϑ j is simply translation by ϑ j . Q.E.D. Let us set ψ 2 (u, v) = u·v− 1 2 ( u 2 + v 2 ) (u, v ∈ C n ). Invoking (63)-(65) of [SZ], in view of Lemma 4.3 we obtain that Ψ kj in (35) has the form: where R j : (C n ) 3 → C is a smooth function vanishing to third order at the origin. Let us now insert (42) in (37). We obtain A straightforward computation shows that (notice that the map w → w j induced by the isotropy action of g −1 j ∈ G m ⊆ G is an isometry of T m M, since G preserves the metric of M). We may insert in (36) the asymptotic expansion for the classical symbol s(x, y, t) appearing in the parametrix for Π, and use Taylor expansion in g = ν/ √ k, w/ √ k and v/ √ k in descending powers of k 1/2 , to deduce that where every coefficient has the form a ̟jl (ν, w, v, t, ϑ) = e te iϑ (T h +Tt+Tv+Tvt) p ̟jl (ν, w, v, t, ϑ) and each p ̟jl (ν, w, v, t, ϑ) is a polynomial in ν, w and v with coefficients depending on x, t, ϑ and ̟. In particular, the leading coefficient is Thus, To determine the leading asymptotics of (48), let us first integrate (47) in dν. By our choice of Heisenberg local coordinates, we may unitarily identify (T m M, ω m ) with (C n , ω 0 ), where ω 0 is the standard symplectic structure on C n ; let g 0 be the standard scalar product on C n , so that ω 0 a, b = −g 0 a, J 0 (b) , ∀ a, b ∈ C n , where J 0 is multiplication by i. We shall view S m : ν → ν M (m) as a map g → C n . Let us set λ = t e iϑ . Up to a multiplicative factor, we are led to integrating in dν. Recall that the ν coordinates are induced by the choice of an orthonormal basis of g; we can shift the integration to the tangent space of the G-obit through m, g M (m) ⊆ T m M. Let us then choose an orthonormal basis of g M (m), and let β be the corresponding linear coordinates. We can use β as integration variable, by the relation β = S m (ν). By Lemma 3.9 of [DP], after performing the change of variables β → β − (w jv − v jv ) we are left with t e iϑ/2 e − 1 2 t e iϑ w tj +v tj 2 ; in fact, since t > 0 (50) is valid when ϑ = 0 because − 1 2 β 2 equals its own Fourier transform, and consequently by analytic continuation it holds for all ϑ ∈ (−π/2, π/2). Let us next consider the case a general a ̟jl (ν, w, v, t, ϑ). Up to multiplicative factors polynomial in w and v, we are led to integrate the product of (49) times a monomial in ν. 
Again up to an appropriate scalar factor, this amounts to multiplying the integrand in (50) by a monomial in β, hence to evaluating an appropriate higher derivative of e − β 2 /2 in J 0 (w jt + v jt ) ∈ g M (m). We are thus left with the product of the right hand side in (50) times a polynomial in w t and v t . Thus we are left with an oscillatory integral whose phase Ψ, given by (44), is the same phase appearing in the discussion of the scaling asymptotics of non-equivariant Szegö kernels in §3 of [SZ]. In particular, Ψ has nonnegative imaginary part, and a unique stationary point for t = 1 and ϑ = 0; furthermore, at this point the Hessian of Ψ is Ψ ′′ (1, 0) = 0 1 1 i . Hence (1, 0) is a non-degenerate stationary point of Ψ. Arguing as in loc. cit., the contribution coming from |t| ≥ 2, say, is rapidly decreasing, and by the stationary phase method for complex oscillatory integrals (Theorem 7.7.5 of [H]) there is an asymptotic expansion: where L 0 is the identity, and L s is a suitable differential operator of degree 2s in (t, θ) for any s = 0, 1, 2 . . .. The statement then follows from the previous description of the phase; in particular, each coefficient in the asymptotic expansion is the product of e Γ(w,v) and a polynomial in w, v. The statement of the Theorem follows by summing over j.
The Influence of Pressure Die Casting Parameters on Distribution of Reinforcing Particles in the AlSi11/10% SiC Composite

The method of pressure die casting of composites with an AlSi11 alloy matrix reinforced with 10 vol.% of SiC particles, and the analysis of the distribution of the particles within the matrix, are presented. The composite castings were produced at various values of the piston velocity in the second stage of injection, at diverse intensification pressures, and at various injection gate widths. The distribution of particles over the entire cross-section of the tensile specimen is shown. The index of distribution was determined on the basis of particle counts in elementary measuring fields. A regression equation describing the change of this index was found as a function of the pressure die casting parameters. The conclusion presents an analysis of the obtained results and their interpretation.

Introduction

Composite suspensions are characterised by much greater viscosity than liquid metals, so their castability and their capability of filling the mould cavity are significantly lower. As a result, castings can be produced from such slurries only by casting technologies that force-fill the mould cavity. High-pressure die casting appears to be the most suitable technology for the production of metal composite castings [1][2][3][4][5]. The pressure exerted on the metal while a casting forms in the pressure die can be modified at will during the subsequent stages of the process, taking values from 0.2 to 300 MPa. The high injection velocity, the high pressure during die filling, and the quick crystallization of the casting in the die give the pressure die casting technology its advantages: high productivity, from 30 to 3600 shots per hour; high accuracy and dimensional stability of castings, as well as precise replication; high surface smoothness, which makes further machining unnecessary; the possibility of obtaining thin-walled castings with wall thicknesses even below 0.5 mm; and better mechanical, physical, and chemical properties owing to the fine-grained structure of the castings [1,2,[6][7][8]. The factors limiting the application range of the die casting technology include the high costs of tooling (pressure die and consumable parts) and of the production machines (pressure die casting machine, manipulators), the limited size and weight of pressure castings, and the limited number of foundry alloys that can be processed in this way. The manner in which the die cavity fills with molten metal depends on the die cavity shape, the type of pressing unit, and the assumed casting parameters, and is decisive for the quality of the castings. In modern pressure die casting machines the piston velocity in the sleeve is varied during the injection cycle in order to reduce or eliminate gas entrapment in the system and to decrease the porosity of castings. Three stages of piston action are employed as a standard, but there are also systems allowing a continuous change of its velocity. The basic parameters of pressure die casting with regard to metal matrix composites, i.e. the injection speed, the filling time, and the injection pressure, are calculated according to the appropriate formulae generally applied in metal casting [2,9,10].
As far as cast composites are concerned, the properties of castings are influenced most strongly by the type, size, and percentage of the reinforcing phase particles, as well as by their distribution within the matrix. The reinforcing particles can be distributed uniformly or non-uniformly, in the latter case occupying the intergranular regions in a quite disadvantageous way. The distribution pattern depends on the quality of the produced suspension, as well as on the casting technology and the conditions under which the casting solidifies in the die. The quantitative determination of the reinforcing phase distribution within the matrix makes it possible to derive functional, analytical relationships between the structural parameters and the properties of a casting [11][12][13][14][15][16].

The material and the method of investigations

The commonly used AlSi11 (EN AC-44000) foundry alloy of aluminium and silicon was selected for the composite matrix. Its composition provides good wettability of the particles, enabling the introduction of silicon carbide into the matrix without additional treatment or modification of the alloy. The 98C silicon carbide with a particle size of 71-100 μm was applied in the experiment. The prepared slurry contained 10 vol.% of the reinforcing phase. The composite suspension was prepared by mechanical mixing. The laboratory stand at which it was prepared was equipped with a resistance heating furnace with a crucible of about 25 kg capacity, and a turbomixer of 0.25 m diameter with four blades inclined at 45 degrees. The turbomixer rotor was placed axially in the crucible, at a distance of one third of the melt height from the bottom of the crucible. The rotor, made of WNLV steel, was covered with a protective coating; this arrangement ensured thorough mixing of the whole liquid phase volume and a relatively long lifespan of the mixer itself. The complete mixing system was constructed in such a way that the furnace could be closed after all components had been added to the crucible. The mixing time was 15 min, and the angular velocity of the rotor was fixed at 500 rpm. The suspension was injected into a test die on a cold-chamber horizontal pressure die casting machine of 1.6 MN clamping force. The examination was performed according to a 2³ factorial design of experiment, in which the variable factors were: the piston velocity in the second stage of injection (v_II), taking the values 1.2 or 3.6 m/s; the intensification pressure (p_III), 20 or 40 MPa; and the gate width (d_w), 1.5 or 3 mm. A casting with specimens for castability and impact strength tests, as well as for measuring the mechanical properties, is shown in Fig. 1. The assessment of the distribution uniformity of the reinforcing phase was carried out on non-etched metallographic microsections taken from the tensile specimen shoulders. The examined area was a circle of 10 mm diameter. Panoramic digital images of the whole microsection surfaces were taken at a magnification of 50×, and the images were then merged into an integral image of the entire microsection. A square grid of 1×1 mm was superimposed on the area to be measured, dividing it into 79 separate measuring fields. The reinforcing phase particles were counted for each of these unit fields in such a way that particles crossing the right or the bottom edge of a field were excluded.
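The quadrat-counting procedure just described lends itself to a short computational illustration. In the Python sketch below the helper names are hypothetical; the spherical-particle assumption behind W_V and the variance-to-mean quantity standing in for the ν index are my own simplifications, since the paper defines ν by an equation given in its reference [14] that is not reproduced here. The mean particle size of 0.085 mm is taken from the stated 71-100 μm range.

```python
import numpy as np

def theoretical_na(phi, h_mm, wv_mm3):
    """Expected particle count per mm^2: N_A = phi * H / W_V
    (phi as a volume fraction, H the mean particle size in mm,
    W_V the volume of a single particle in mm^3)."""
    return phi * h_mm / wv_mm3

def dispersion_index(counts):
    """Illustrative clustering measure from per-field particle counts.

    NOTE: the paper's nu index is defined in its reference [14]; the
    exact formula is not reproduced there. This variance-to-mean based
    quantity is only a hypothetical stand-in with the same qualitative
    behaviour (0 ~ uniform/CSR, approaching 1 ~ strongly clustered).
    """
    counts = np.asarray(counts, dtype=float)
    vmr = counts.var(ddof=1) / counts.mean()  # = 1 for a Poisson (CSR) pattern
    return max(0.0, 1.0 - 1.0 / vmr) if vmr > 1 else 0.0

# Example: 79 measuring fields of 1 mm^2, SiC particles ~0.085 mm across,
# 10 vol.% reinforcement (numbers taken from the paper's set-up).
h = 0.085                                  # mean particle size [mm]
wv = (4.0 / 3.0) * np.pi * (h / 2) ** 3    # single-particle volume, sphere assumed
print("theoretical N_A per mm^2:", theoretical_na(0.10, h, wv))

rng = np.random.default_rng(0)
counts = rng.poisson(lam=25, size=79)      # synthetic per-field counts
print("dispersion index:", dispersion_index(counts))
```

With these inputs the theoretical count comes out at roughly 26 particles per mm², which gives a feel for the magnitudes one would compare against the measured per-field counts.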
The observations were performed by means of an OLYMPUS EPIPHOT optical microscope cooperating with a digital image recorder and the MULTISCAN computer data analysis software. The degree of uniformity of the distribution of SiC particles within the volume of the matrix was assessed on the basis of the ν index, calculated from the equation given in [14]. The ν index assumes values from 0 to 1. The zero value identifies the distribution as uniform, also called complete spatial randomness (CSR); the value of 1 identifies it as non-uniform, also called a clustering distribution. Hence, the smaller the value of the ν index, the more uniform the distribution of the reinforcing phase particles. The obtained results were juxtaposed with the values of the surface fraction of reinforcement corresponding to the actually introduced volume of particles in order to correlate the obtained values. The theoretical quantity of particles over an area of 1 mm² was then calculated from the relationship [14]:

N_A = φ·H / W_V,

where H is the average size of a particle [mm], W_V the volume of a single particle [mm³], and φ the volume fraction of particles in the composite (expressed as a fraction). The theoretical value of N_A does not take into account the distribution of particles over the microsection area and can serve only as a control value for the measured quantity. A comparison of the obtained and the theoretical values of N_A can be a measure of the macroscopic distribution of particles.

Results of investigation

The measurement results concerning the distribution of particles within the matrix of the composite castings, obtained by the methods of quantitative metallography, are presented in Table 1. The results from Table 1 were used to find the regression equation describing the influence of the pressure die casting parameters on the distribution of the reinforcing phase particles within the matrix. This equation, for the coded values of the independent variables (x₁ = v_II, x₂ = p_III and x₃ = d_w), takes the form given as Eq. 3, whose graphic representation is presented in Fig. 4.

Conclusion

The analysis of the composite microstructures shows that the uniformity of the reinforcing phase distribution was indeed influenced by the variable parameters of the casting process, and this was also confirmed by the calculations. It was noticed that low values of the parameters favour a non-uniform distribution or even the generation of particle clusters in the composite suspension (Fig. 2). Equation 3 reveals a strong influence of both the piston velocity in the second stage of injection and the intensification pressure on the distribution of particles within the metal matrix. An increase in either of these parameters increases the uniformity of distribution of the SiC particles in the matrix. The influence of the gate width is of minor significance for the improvement of the uniformity of the reinforcing phase distribution. The obtained results point unequivocally to the optimum pressure die casting parameters for composites of this type. An increased piston velocity combined with a reduced gate width also improved the uniformity of the reinforcing phase distribution; this improvement results from the intensive mixing of the suspension in the gate and the rapid filling of the die with the prepared suspension.
The intensification pressure turned out to exert a less significant influence on the uniformity of the particle distribution at large values of the injection velocity; however, as far as the castings containing 10% of reinforcement are concerned, the uniformity achieved at high intensification pressure and low injection velocity was better than that achieved at low intensification pressure and low injection velocity. The combined influence of the piston velocity at the die-filling stage and of the gate width (or its cross-sectional area) can be expressed by the rate of die filling (the injection rate). This rate changed from 16 m/s to 96 m/s in the course of the examinations. The influence of the injection rate on the index of distribution of SiC particles in the composites is presented in Fig. 5. For comparison, the figure also shows a similar relationship for a composite containing 20 vol.% of SiC particles, whose detailed results are not cited in the present paper. The diagram indicates that for injection rates exceeding 50 m/s there is no significant further improvement in the distribution of the reinforcing phase particles in the matrix of the composite castings. The examinations allow the following conclusions to be drawn:
- the application of the pressure die casting technology to the production of AlSi11/SiC composite castings makes it possible to modify the character of the distribution of the ceramic particles in the metal matrix within a wide range;
- the parameters of the production process, i.e. the piston velocity in the second stage of injection, the intensification pressure, and the gate width, exert an essential influence on the type of composite structure with respect to the distribution of the reinforcing phase particles; this distribution can be uniform, non-uniform, or non-uniform with clusters;
- an increase in the injection rate, achieved by increasing the piston velocity in the second stage of injection and reducing the gate area, strongly promotes a uniform distribution of the reinforcing particles within the volume of the castings.
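As a closing illustration of how the 16-96 m/s injection-rate span follows from the piston velocities and gate widths used in the experiment, the sketch below applies the continuity equation. The plunger diameter and the gate depth are not quoted in the paper, so the values used here are assumptions, tuned only so that the four (v_II, d_w) combinations reproduce the quoted range.

```python
import math

def gate_velocity(v_piston, d_plunger, a_gate):
    """Continuity of flow: v_gate = v_piston * A_plunger / A_gate.
    Lengths in m, areas in m^2, velocities in m/s."""
    a_plunger = math.pi * (d_plunger / 2.0) ** 2
    return v_piston * a_plunger / a_gate

# Hypothetical plunger diameter and gate depth -- neither is quoted in
# the paper; only the resulting 16-96 m/s range is.
D_PLUNGER = 0.040        # 40 mm shot-sleeve plunger (assumed)
GATE_DEPTH = 0.0314      # gate depth [m] (assumed, tuned to the range)

for v_ii in (1.2, 3.6):              # piston velocity, 2nd stage [m/s]
    for d_w in (0.0015, 0.003):      # gate width [m]
        v_g = gate_velocity(v_ii, D_PLUNGER, d_w * GATE_DEPTH)
        print(f"v_II={v_ii} m/s, d_w={d_w*1e3:.1f} mm -> v_gate={v_g:.0f} m/s")
```

Running this prints gate velocities of 16, 32, 48, and 96 m/s for the four parameter combinations, consistent with the span reported above.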
Manoalide Induces Intrinsic Apoptosis by Oxidative Stress and Mitochondrial Dysfunction in Human Osteosarcoma Cells

Osteosarcoma (OS) is the most common primary malignant bone tumor that produces immature osteoid. Metastatic OS has a poor prognosis, with a death rate of >70%. Manoalide is a natural sesterterpenoid isolated from marine sponges. It is a phospholipase A2 inhibitor with anti-inflammatory, analgesic, and anti-cancer properties. This study aimed to investigate the mechanism and effect of manoalide on OS cells. Our experiments showed that manoalide induced cytotoxicity in 143B and MG63 human osteosarcoma cells. Treatment with manoalide at concentrations of 10, 20, and 40 µM for 24 and 48 h reduced MG63 cell viability to 45.13-4.40% (p < 0.01). Meanwhile, manoalide caused reactive oxygen species (ROS) overproduction and disrupted antioxidant proteins, activating the apoptotic proteins caspase-9/-3 and PARP (poly(ADP-ribose) polymerase). Excessive levels of ROS in the mitochondria affected oxidative phosphorylation, ATP generation, and the membrane potential (ΔΨm). Additionally, manoalide down-regulated the mitochondrial fusion protein and up-regulated the mitochondrial fission protein, resulting in mitochondrial fragmentation and impaired function. Conversely, pre-treatment with N-acetyl-L-cysteine ameliorated manoalide-induced apoptosis, ROS production, and antioxidant protein changes in OS cells. Overall, our findings show that manoalide induces oxidative stress, mitochondrial dysfunction, and apoptosis, causing the death of OS cells and showing potential as an innovative alternative treatment for human OS.

Cell Culture

MG63 cells (CRL-1427™-ATCC, human osteosarcoma) and 143B cells (CRL-8303™-ATCC, human osteosarcoma) were cultured in Eagle's minimum essential medium (Gibco BRL, Rockville, MD, USA). The medium contained 10% FBS (fetal bovine serum) and glutamine-penicillin-streptomycin (2 mM-100 U/mL-100 µg/mL; Gibco BRL). Cells were incubated under a humidified atmosphere of 5% CO₂ in room air at 37 °C. For subculture, the cells were treated with trypsin-EDTA (Gibco BRL). After centrifugation of the cells and removal of the supernatant, the cells were replated into the dish. When the attached cells reached confluence, they showed a cobblestone shape under the microscope.

Cell Viability Assay

Cell proliferation (viability) was assessed using an MTT assay following treatment with different concentrations of manoalide for 24 and 48 h. MTT is a yellow substance that interacts with succinate dehydrogenase (complex II) of the electron transport chain in living cells to generate a purple formazan product. The cells can be lysed with DMSO to release this purple product, and the number of living cells can be estimated directly by measuring the absorbance at 570 nm. The cells were plated in triplicate at a density of 5 × 10³ cells/well in 96-well plates. After overnight incubation, the cells were treated with manoalide (in 0.2% DMSO) at concentrations of 0, 0.1, 1, 5, 10, 20, and 40 µM for 24 and 48 h. Preliminary observations of cell morphology were then made under a phase-contrast inverted microscope (Leica Microsystems DMI 3000B; Wetzlar, Germany). After the MTT had reacted with the living cells to produce the purple formazan, the culture solution was removed, 50 µL/well of DMSO was added to dissolve the product fully, and the absorbance was measured at 570 nm with a spectrophotometer reader (Dynatech Laboratories, Chantilly, VA, USA).
After the absorbance value of the blank group was subtracted from the absorbance values of the different treatments, the following formula was used to obtain cell viability (%): cell viability (%) = [OD570 (treatment)/OD570 (control)] × 100%. The data are expressed as the mean ± SEM.

Annexin V-FITC/Propidium Iodide (PI)-PE Staining

MG63 cells were treated with manoalide at the indicated concentrations of 0-10 µM for 24 h; the culture medium was then removed, and the cells were washed in PBS, trypsinized, centrifuged, and resuspended (6 × 10⁵ cells/mL) in 1× binding buffer. The samples were processed according to the manufacturer's instructions for the FITC Annexin V Apoptosis Detection Kit (#556547, BD Biosciences, San Jose, CA, USA). Cells were first resuspended in 100 µL of 1× binding buffer (6 × 10⁴ cells), and then 3 µL of Annexin V-FITC and 3 µL of PI-PE were added to each sample for fluorescent labeling. The samples were gently vortexed and kept at room temperature for 15 min in the dark. At the end of the incubation, 400 µL of 1× binding buffer was added to each sample, and the samples were analyzed using a CytoFLEX LX flow cytometer (Beckman-Coulter, MI, USA) with CytExpert analysis software version 2.0. Four-quadrant flow cytometry analysis was used to detect live cells (bottom left), early apoptotic cells (bottom right), late apoptotic cells (top right), and necrotic cells (top left). At least 20,000 cells were analyzed per sample.

Intracellular ROS

Intracellular ROS (iROS) were evaluated by determining the level of H₂O₂ using the fluorescent probe chloromethyl 2′,7′-dichlorofluorescin diacetate (CM-H₂DCFDA), a useful indicator of ROS in cells. This indicator exhibits much better retention in live cells than H₂DCFDA. MG63 cells were treated with manoalide at concentrations of 0, 0.1, 1, 5, and 10 µM for 4 h, incubated with 5 µM DCFH-DA in medium for 25 min at 37 °C, washed, trypsinized, centrifuged, and resuspended in 1 mL of PBS. The samples were analyzed using a Beckman CytoFLEX LX flow cytometer and histograms generated in CytExpert analysis software. At least 20,000 cells were analyzed per sample.

CellROX® Green Staining

CellROX® Green reagent is a new fluorescent probe for measuring cytosolic and nuclear oxidative stress in live cells. Cells were plated in six-well dishes at a density of 3 × 10⁵ cells/well and left to attach overnight. After treatment with manoalide at concentrations of 0, 0.1, 1, 5, and 10 µM for 4 h, the cells were washed with PBS, loaded with CellROX® Green (5 mM) in medium at 37 °C for 25 min, washed, trypsinized, centrifuged, and resuspended in 1 mL of PBS. The samples were analyzed using a Beckman CytoFLEX LX flow cytometer and histograms generated in CytExpert analysis software. At least 20,000 cells were analyzed per sample.
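The viability formula quoted above and the percent-positive read-outs reported later in the Results reduce to a few lines of arithmetic; the Python sketch below illustrates both. All numbers and helper names are hypothetical, not data from this study.

```python
import numpy as np

def mtt_viability(od_treated, od_control, od_blank):
    """Cell viability (%) = [OD570(treatment) / OD570(control)] * 100,
    after subtracting the blank OD from both (as described in Methods)."""
    return (np.asarray(od_treated) - od_blank) / (od_control - od_blank) * 100.0

def percent_positive(intensities, threshold):
    """Fraction of events above a fluorescence threshold -- the kind of
    gate used to report DCF / MitoSOX / CellROX 'percent positive' values."""
    x = np.asarray(intensities)
    return 100.0 * (x > threshold).sum() / x.size

# Hypothetical triplicate OD570 readings for one manoalide dose
treated = [0.42, 0.45, 0.40]
viab = mtt_viability(treated, od_control=0.95, od_blank=0.05)
print(f"viability: {viab.mean():.1f} +/- "
      f"{viab.std(ddof=1)/np.sqrt(viab.size):.1f} % (mean +/- SEM)")

# Hypothetical flow-cytometry intensities (log-normal, 20,000 events)
rng = np.random.default_rng(1)
events = rng.lognormal(mean=2.0, sigma=0.6, size=20_000)
print(f"percent positive: {percent_positive(events, threshold=25.0):.2f} %")
```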
DiOC6 Staining

The cationic dye 3,3′-dihexyloxacarbocyanine iodide (DiOC6) is a green fluorescent dye well known as a mitochondrial membrane probe; it can pass through the cell membrane and report the mitochondrial membrane potential (ΔΨm). Cells were plated in six-well dishes at a density of 3 × 10⁵ cells/well and left to attach overnight. After treatment with manoalide at 0, 0.1, 1, 5, and 10 µM for 4 h, the cells were washed with PBS, loaded with DiOC6 (5 µM) in media at 37 °C for 20 min, washed, trypsinized, centrifuged, and re-suspended in 1 mL of PBS. The samples were analyzed using a Beckman CytoFLEX LX flow cytometer with CytExpert histogram analysis. At least 20,000 cells were analyzed per sample.

JC-1 Kit

The positively charged mitochondrial dye JC-1 (5,5′,6,6′-tetrachloro-1,1′,3,3′-tetraethylbenzimidazolylcarbocyanine iodide) was employed to measure ΔΨm. In living cells, the ΔΨm is polarized and JC-1 accumulates on the membrane, forming aggregates that emit red light; in dead cells, the mitochondrial membrane is depolarized, and JC-1 leaves the membrane and enters the cytoplasm as monomers that emit green light. Cells were plated in six-well dishes at a density of 3 × 10⁵ cells/well and left to attach overnight. After treatment with manoalide at 0, 0.1, 1, 5, and 10 µM for 4 h, the cells were washed with PBS, loaded with JC-1 (5 µg/mL) in media at 37 °C for 20 min, washed, trypsinized, centrifuged, and re-suspended in 1 mL of PBS. The samples were analyzed using a Beckman CytoFLEX LX flow cytometer with CytExpert four-quadrant analysis. At least 20,000 cells were analyzed per sample.

Western Blotting

In 10 cm plates, culture medium containing 3 × 10⁶ cells and different concentrations of drug was added for 24 h. After the cells were lysed with buffer, the proteins were dissolved in a protein extraction reagent (Thermo Scientific, Waltham, MA, USA). The total protein concentration was quantified by the Bradford method (Bio-Rad, Hercules, CA, USA). Proteins of differing molecular weights were separated on 8–15% SDS-PAGE gels and transferred to PVDF membranes (Millipore, Bedford, MA, USA). The membrane was blocked with 5% nonfat milk and incubated overnight at 4 °C with the primary antibodies shown in Table 1. After incubation with a horseradish peroxidase-conjugated secondary antibody for 1 h at 37 °C, the signal on the membrane was detected using enhanced chemiluminescence (ECL kit; Millipore). The visualized bands were photographed using UVP BioChemi Imaging (UVP LLC, Upland, CA, USA). The relative densitometric quantification of bands was performed using ImageJ 1.50d software (National Institutes of Health, Bethesda, MD, USA). As a loading control, the polyvinylidene fluoride membrane was re-probed with a GAPDH antibody.
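A minimal sketch of this loading-control normalization, expressed as a fold change relative to the untreated group (our own illustration; the band intensities are hypothetical):

```python
# Sketch of densitometric normalization to a loading control, expressed as
# fold change relative to the untreated (0 uM) lane. Values are hypothetical.

def fold_change(target_band, loading_band, target_ctrl, loading_ctrl):
    """Normalize a target band to its loading control, relative to the control lane."""
    treated_ratio = target_band / loading_band
    control_ratio = target_ctrl / loading_ctrl
    return treated_ratio / control_ratio

# Example: a target band intensity normalized to GAPDH in treated vs. control lanes
print(f"Fold change: {fold_change(1500, 3000, 2400, 2400):.2f}")
```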
Statistical Analysis

Data were compiled using Microsoft Excel and plotted with GraphPad Prism 5.0 for graphics processing. Results are expressed as the numerical mean ± standard error (SE). Student's t-test was used to compare differences between groups, with ** p < 0.01 or * p < 0.05 considered statistically significant. Experiments were performed at least three times to verify reproducibility.

Manoalide Treatment Increased Intracellular, Mitochondrial, and Total ROS Levels but Decreased Oxidative Stress Defense Enzyme Expression in OS

ROS are mainly produced by mitochondria, and excessive ROS production can cause oxidative stress and programmed cell death (apoptosis) [30]. Therefore, in MG63 cells treated with different dosages of manoalide, we used three ROS detection stains: the fluorescent probes CM-H2DCFDA, MitoSOX™ Red, and CellROX® Green were used to detect O2•− and •OH in the cellular compartments, mitochondria, and nucleus, respectively. mtROS were detected by flow cytometry using MitoSOX™ Red staining; the CytExpert histograms showed a considerable shift to the right in MG63 cells treated with different concentrations of manoalide (Figure 2A). Based on MitoSOX™ Red signals, the quantitative results indicated that mitochondrial O2•− levels were significantly increased in a dose-dependent manner to 61.81 ± 16.01%, 99.78 ± 0.16%, and 99.83 ± 0.05% at 1, 5, and 10 µM, respectively, in MG63 cells compared with 0 µM manoalide (9.82 ± 0.45%, Figure 2B). iROS were detected by flow cytometry using CM-H2DCFDA staining; the CytExpert histograms showed a considerable shift to the right in MG63 cells treated with different concentrations of manoalide (Figure 2C). Based on DCF fluorescent probe signals, the quantitative results indicated that intracellular hydrolytic and oxidative product levels were significantly increased in a dose-dependent manner to 26.30 ± 3.86%, 81.43 ± 9.26%, and 91.82 ± 7.06% at 1, 5, and 10 µM, respectively, in MG63 cells compared with 0 µM manoalide (9.30 ± 0.54%, Figure 2D). Similarly, O2•− and •OH levels in the mitochondria and nucleus were detected by flow cytometry using CellROX® staining; the CytExpert histograms showed a considerable shift to the right in MG63 cells treated with different concentrations of manoalide (Figure 2E). The quantitative results indicated that O2•− and •OH levels were significantly increased in a dose-dependent manner, based on CellROX® Green probe signals, to 14.77 ± 3.56%, 31.34 ± 10.39%, and 35.59 ± 12.17% at 1, 5, and 10 µM, respectively, in MG63 cells compared with 0 µM manoalide (10.59 ± 1.11%, Figure 2F). We used the "human oxidative stress defense enzymes Western blot cocktail" antibody, containing catalase, superoxide dismutase 1, thioredoxin, and alpha smooth muscle actin, proteins involved in protecting cells against oxidative stress and regulating ROS activity. Superoxide dismutase 2 (SOD2; Mn-SOD) is located in the mitochondrial matrix, where it scavenges ROS, limits excessive mtROS production, and prevents oxidative stress [31].
Figure 2G shows the Western blot analysis, which revealed that treating MG63 cells with various dosages of manoalide for 24 h increased the expression level of SOD2 but decreased the expression of catalase, superoxide dismutase 1 (Cu-Zn SOD, SOD1), and thioredoxin (TRX) proteins, with GAPDH and alpha smooth muscle actin serving as protein loading normalization indicators. When manoalide was applied to MG63 cells at concentrations of 5 and 10 µM for 24 h, the protein levels of catalase/alpha smooth muscle actin were significantly decreased to 0.80 ± 0.08 and 0.79 ± 0.08, respectively, compared with 0 µM manoalide (1.00 ± 0.01), and the protein expression of TRX/alpha smooth muscle actin was significantly decreased to 0.65 ± 0.11, 0.64 ± 0.13, and 0.52 ± 0.07 at 1, 5, and 10 µM, respectively, compared with 0 µM manoalide (1.00 ± 0.02) (Figure 2H). We also observed that the protein levels of SOD1/alpha smooth muscle actin were significantly reduced to 0.84 ± 0.03, 0.84 ± 0.05, and 0.75 ± 0.03 at concentrations of 1, 5, and 10 µM, respectively, compared with 0 µM manoalide (1.00 ± 0.04) in MG63 cells. However, in MG63 cells treated with 5 and 10 µM manoalide, a significant increase in SOD2/GAPDH expression was observed, at 1.92 ± 0.21 and 2.94 ± 0.21, respectively, compared with 0 µM manoalide (1.00 ± 0.04) (Figure 2I). Taken together, manoalide induced intracellular, mitochondrial, and nuclear ROS overproduction in OS cells while unbalancing the activity of antioxidant enzymes, causing oxidative stress and contributing to cell apoptosis.

Manoalide Treatment Reduces OCR and Oxidative Phosphorylation (OXPHOS) Protein Expression in MG63 Cells

The Seahorse XF24 extracellular flux bioenergy metabolism analyzer, developed by Seahorse Bioscience in the United States, is a platform for evaluating the overall energy metabolism of living samples. To complete the mitochondrial function test, the Cell Mito Stress Test kit was used: first, the basal oxygen consumption was measured; then, an ATP synthase inhibitor (oligomycin) was added to inhibit mitochondrial ATP production, the resulting drop in oxygen consumption indicating how much oxygen is devoted to ATP synthesis. Next, the uncoupler FCCP was added at an appropriate concentration; without disrupting the electron transport chain, it drives the mitochondria to respire freely, revealing their maximum oxygen consumption capacity. Finally, the complex I inhibitor rotenone, together with the complex III inhibitor antimycin A, was added to fully shut down mitochondrial oxygen utilization, defining the non-mitochondrial background value. From these steps, the following mitochondrial respiration parameters can be calculated: basal respiration, ATP-linked production (coupled respiration), maximal respiration, spare respiratory capacity, and non-mitochondrial respiration. MG63 cells were treated with various dosages of manoalide, followed by the sequential addition of oligomycin, FCCP, and rotenone/antimycin A, and the OCR parameters were found to decrease (Figure 3A). With increasing manoalide concentration in MG63 cells, the mitochondrial basal respiration values decreased significantly to 136.12 ± 5.40, 135.38 ± 7.66, 129.07 ± 7.28, and 102.97 ± 4.31 pmoles/min/mg protein at 0.1, 1, 5, and 10 µM, compared with the 0 µM manoalide group (143.43 ± 6.09) (Figure 3B).
With increasing manoalide concentration in MG63 cells, the mitochondrial ATP production values decreased significantly to 90.29 ± 10.33 and 63.71 ± 8.13 pmoles/min/mg protein at 5 and 10 µM, compared with the 0 µM manoalide group (111.00 ± 6.94) (Figure 3C). The mitochondrial maximal respiration values decreased significantly to 179.11 ± 13.15, 170.74 ± 20.08, 167.48 ± 9.43, and 126.85 ± 5.83 pmoles/min/mg protein at 0.1, 1, 5, and 10 µM manoalide, compared with the 0 µM manoalide group (198.63 ± 12.13) (Figure 3D). The mitochondrial spare respiratory capacity values decreased significantly to 40.46 ± 13.87, 39.56 ± 18.12, 38.42 ± 11.16, and 23.88 ± 4.82 pmoles/min/mg protein at 0.1, 1, 5, and 10 µM manoalide, compared with the 0 µM manoalide group (55.20 ± 13.32) (Figure 3E). The non-mitochondrial respiration values decreased significantly to 41.66 ± 7.86 and 41.56 ± 7.64 pmoles/min/mg protein at 5 and 10 µM manoalide, compared with the 0 µM manoalide group (57.03 ± 19.23) (Figure 3F). We used the "Total OXPHOS Human WB Antibody Cocktail" antibody, which includes complexes I–V, to detect the five OXPHOS complex-related proteins. Figure 3G shows the Western blot analysis: treating MG63 cells with various dosages of manoalide for 24 h decreased OXPHOS complex I–V protein expression, with GAPDH used as a protein loading normalization indicator. When manoalide was applied to MG63 cells at concentrations of 5 and 10 µM, the protein levels of complex I-NDUFB8/GAPDH were significantly decreased to 0.56 ± 0.04 and 0.41 ± 0.06, respectively, compared with 0 µM manoalide (1.00 ± 0.14) (Figure 3H). At the same concentrations, the protein expression of complex II-SDHB/GAPDH was significantly decreased to 0.53 ± 0.05 and 0.47 ± 0.03, respectively, compared with 0 µM manoalide (1.00 ± 0.08), while the protein expression of complex III-UQCRC2/GAPDH was significantly decreased to 0.79 ± 0.03, 0.53 ± 0.02, and 0.28 ± 0.01 at 1, 5, and 10 µM, respectively, compared with 0 µM manoalide (1.00 ± 0.02) (Figure 3I). The protein expression of complex IV-COX II/GAPDH was significantly decreased to 0.63 ± 0.07 and 0.52 ± 0.06 at 5 and 10 µM, respectively, compared with 0 µM manoalide (1.00 ± 0.10), and the protein expression of complex V-ATP5A/GAPDH was significantly decreased to 0.75 ± 0.04, 0.50 ± 0.03, and 0.38 ± 0.02 at 1, 5, and 10 µM, respectively, compared with 0 µM manoalide (1.00 ± 0.05) (Figure 3I). These findings suggest that manoalide effectively decreased mitochondrial respiratory function and OXPHOS complex I–V protein expression, causing a loss of mitochondrial function in MG63 cells.

In MG63 Cells, Manoalide Regulates Mitochondrial Transmembrane Potential (ΔΨm) and Mitochondrial Dynamics Proteins

Although mitochondria are the source of ROS, excessive ROS generation may cause oxidative stress and cell death, followed by ΔΨm loss and mitochondrial dynamic imbalance [32].
Several lipophilic cationic fluorescent dyes, such as DiOC6 and JC-1 (37 °C, 20 min), bind to the mitochondrial matrix in live eukaryotic cells, and the amplification or weakening of their fluorescence reflects an increase or decrease in the electronegativity of the inner mitochondrial membrane. ΔΨm was detected by flow cytometry using the DiOC6 probe; the CytExpert histograms showed a significant shift to the left in MG63 cells treated with various doses of manoalide (Figure 4A). Based on DiOC6 signals, the quantitative results indicated that ΔΨm levels were significantly decreased in a dose-dependent manner, to 85.61% ± 1.97% and 60.83% ± 5.02% at 5 and 10 µM in MG63 cells, respectively, as compared to controls (90.62% ± 0.15%, Figure 4B). In healthy cells, the JC-1 dye accumulates on the inner mitochondrial membrane, forming aggregates that emit red light; when apoptosis occurs, the mitochondrial membrane potential decreases and the dye returns to the cytoplasm as monomers emitting green light [33]. Figure 4C shows the typical four-quadrant diagram, in which the dot plot moved from the upper right quadrant to the lower right quadrant in MG63 cells treated with manoalide for 4 h. The percentage of low-ΔΨm cells (16.09 ± 4.25% and 54.37 ± 2.56% at 5 and 10 µM manoalide, respectively) was considerably greater than in the 0 µM manoalide group (5.37 ± 0.71%), while the percentage of high-ΔΨm cells (83.87 ± 3.35% and 45.58 ± 4.70%, respectively) was substantially lower than in the 0 µM manoalide group (94.58 ± 0.74%) (Figure 4D). Mitochondrial fission and fusion are involved in mitochondrial quality control and in transitions of the energy state. Mitochondrial fusion increases OXPHOS activity and ATP production, and fusion proteins may act to prevent cell death; in contrast, mitochondrial fission leads to ATP depletion and OXPHOS deficiency, promoting apoptosis [34]. Figure 4E shows the Western blot: treatment of MG63 cells with various dosages of manoalide for 24 h increased the expression level of the fission-associated protein DRP1 but decreased the expression level of the fusion-related protein OPA1, with GAPDH used as a protein loading normalization indicator. When MG63 cells were treated with 10 µM manoalide, the protein levels of OPA1/GAPDH were considerably lower, at 0.60 ± 0.14, compared to the control (1.00 ± 0.11, 0 µM manoalide) (Figure 4F), while the protein expression of DRP1/GAPDH increased significantly to 1.28 ± 0.12 compared to the control (1.00 ± 0.14, 0 µM manoalide) (Figure 4G). These findings demonstrate that varying doses of manoalide reduced high ΔΨm and increased the mitochondrial fission protein while reducing the fusion protein in MG63 cells, resulting in cell death.
Figure 4 (partial caption): uncropped blots are shown in Figure S2B. The protein levels of OPA1 (F) and DRP1 (G) were quantified using ImageJ software, normalized to GAPDH, and expressed as fold changes. Each bar represents the mean ± SE (n = 3) of three independent experiments; results were analyzed using Student's t-test, * p < 0.05 and ** p < 0.01 relative to the control (0 µM manoalide). DiOC6: 3,3′-dihexyloxacarbocyanine iodide; ΔΨm: mitochondrial membrane potential; JC-1: 5,5′,6,6′-tetrachloro-1,1′,3,3′-tetraethylbenzimidazolylcarbocyanine iodide; OPA1: optic atrophy 1; DRP1: dynamin-related protein 1; GAPDH: glyceraldehyde-3-phosphate dehydrogenase; PVDF: polyvinylidene difluoride.

N-Acetylcysteine Pre-Treatment Reduces Manoalide-Induced Apoptosis, Cellular ROS Production, and Oxidative Stress Defense Enzyme Expression

N-acetylcysteine (NAC) is a reducing agent that functions as an antioxidant by depleting ROS in cells [35]. MG63 cells were exposed or not exposed to 5 mM NAC for 2 h to determine the effects of NAC on manoalide-induced cellular ROS overproduction, the reduction of oxidative stress defense enzymes, and apoptosis. After that, 10 µM manoalide was administered for 24 h, and the immunoblot expression levels of cleaved PARP and cleaved caspase 3 were evaluated in MG63 cells treated with or without NAC and 10 µM manoalide. The results demonstrated that manoalide dramatically enhanced the expression levels of cleaved caspase 3 and cleaved PARP, whereas NAC treatment reversed this effect and decreased these levels (Figure 5A,B). We pretreated MG63 cells for 2 h with or without 5 mM NAC, incubated them with or without 10 µM manoalide for 4 h, and then stained the cells with CM-H2DCFDA dye for flow cytometry analysis. The findings showed that NAC alone did not generate iROS and that iROS levels, strongly increased following manoalide treatment, were substantially reduced by NAC pre-treatment (Figure 5C,D). To evaluate the immunoblot expression levels of catalase, TRX, SOD1, and SOD2 proteins, MG63 cells were administered 10 µM manoalide and 5 mM NAC. The results showed that manoalide dramatically reduced the protein expression of catalase, TRX, and SOD1; this was reversed when NAC was administered.
Meanwhile, it was observed that NAC alone did not cause SOD2 protein changes; SOD2 protein was significantly elevated after manoalide treatment, and this was partially restored by NAC pre-treatment (Figure 5E–G). These findings show that NAC dramatically reverses the apoptotic protein expression, ROS production, and oxidative stress defense enzyme changes generated by manoalide, confirming ROS as the primary mechanism underlying the aforementioned effects.

Figure 5 (partial caption): uncropped blots are shown in Figure S2D. The protein levels of catalase (F), TRX (F), SOD1, and SOD2 (G) were quantified using ImageJ software, normalized to actin or GAPDH, and expressed as fold changes. Each bar represents the mean ± SE (n = 3) of three independent experiments; results were analyzed using ANOVA. * p < 0.05 and ** p < 0.01 relative to the control group (without NAC and manoalide), and # p < 0.05 relative to the experimental group with 10 µM manoalide alone. iROS: intracellular ROS; NAC: N-acetylcysteine; PARP: poly(ADP-ribose) polymerase; SOD1: superoxide dismutase 1; TRX: thioredoxin.

Discussion

OS is the most frequent primary bone tumor [11], resulting from malignant mesenchymal spindle cells that produce immature osteoid [36]. Surgery, chemotherapy medicines, and radiation therapy are the three major treatments for OS [14]. Patients with metastatic OS continue to have a poor prognosis, with only a 10–40% survival rate and >70% mortality [15]. Therefore, one strategy to improve survival is to research and develop new drugs. In the last ten years, there has been significant growth in the number of biologically active medications for cancer therapy and prevention, and manoalide is one of them. Manoalide is a natural sesterterpenoid, a marine drug obtained from sponges, whose structure is shown in Supplementary Figure S1A [4]. Calcium channel blockade [3] and phospholipase A2 (PLA2) inhibition [37] are two known modes of action of manoalide. PLA2 is a phospholipid-metabolizing enzyme that mainly releases arachidonic acid, whose oxidation products from cyclooxygenase and lipoxygenase contribute to tumor microenvironment development, angiogenesis, and tumor growth. Apart from its anti-inflammatory effects, the anticancer effects of manoalide have not been extensively studied: cytotoxicity has been reported only against oral cancer [10], human squamous cell carcinoma [3], and epidermoid cancer cells [6], acting through oxidative stress [10], apoptosis, and DNA damage in oral cancer [5]. The treatment and molecular mechanisms of action of manoalide in OS have not been studied.
Our experimental results showed that manoalide exerted a potent inhibitory effect on the proliferation of MG63 and 143B cells, disrupting cell growth at low doses with IC50 values of approximately 8.7 versus 10.9 µM at 48 h. Manoalide has been reported to have antitumor activity in oral cancer studies, with an IC50 of approximately 14.0 µM at 48 h, similar to our experiments [10]. However, we found that the IC50 in MG63 cells was approximately 8.9 µM at 24 h, differing very little from the 48 h value. Our study thus shows that manoalide has a distinct anti-viability effect on human OS cancer cells. Most newly developed compounds are thought to promote apoptosis through complex mechanisms, and targeting apoptosis signaling is emerging as a method for novel cancer therapies [38-40]. The caspase family of apoptosis is typically classified into two categories, intrinsic and extrinsic activators, of which the intrinsic activation pathway belongs to the mitochondrial pathway and includes caspase-9/-3. The most important function of caspases in cells is to act as proteolytic executioners, which requires proteolytic activation during apoptosis; their N-terminal peptides share no similarity, and once caspases are activated, most cellular targets are proteolytically cleaved by effector caspases, resulting in cell death [41]. Boulares et al. (1999) demonstrated that apoptosis requires the immediate interruption of nucleoprotein poly(ADP-ribosyl)ation, accomplished by caspase-3-catalyzed cleavage of PARP; PARP is cleaved into fragments of 89 and 24 kDa, containing the enzymatic activity and the DNA-binding domain, respectively [42]. Our study of manoalide showed that its anticancer activity occurs through the intrinsic apoptotic pathway: Annexin V/PI staining quantified early and late apoptotic cells, and the cleaved forms of caspase-9/-3 and PARP were activated. Thus, our study shows that manoalide induces apoptosis by activating caspase-9/-3 and PARP cleavage in an intrinsic manner.

Oxidative stress is a biochemical situation defined by the presence of relatively large amounts of harmful reactive species, primarily ROS, and an imbalance of the antioxidant defense mechanisms. ROS are primarily produced in cells as byproducts of normal mitochondrial metabolism and have long been linked to apoptosis induction [43,44]. NAC is an aminothiol that acts as an intracellular precursor for the synthesis of cysteine and glutathione, making it a significant antioxidant; it has frequently been employed as a research tool to investigate the role of ROS in apoptosis induction. Manoalide triggers the overproduction of mtROS, iROS, and nROS and reduces the intracellular antioxidant enzyme proteins (oxidative stress defense enzymes: catalase, SOD1, and TRX); the only increase is in the mitochondrial antioxidant enzyme SOD2. SOD2 converts mitochondrial superoxide (O2•−) to H2O2, which is normally reduced to nontoxic H2O by the antioxidant protein TRX; because TRX is decreased, the ROS cannot be removed in time and pass into the cytoplasm, resulting in a large increase in intracellular ROS, which induces oxidative stress. These iROS can damage proteins and DNA, inducing pathology and leading to apoptotic cell death.
Oral cancer studies showed that manoalide increases ROS [5,10], but without evidence of changes in antioxidant enzyme proteins; we are the first to find that, in OS cells, manoalide decreases the antioxidant enzyme proteins (oxidative stress defense enzymes: catalase, SOD1, and TRX) while increasing the mitochondrial antioxidant enzyme SOD2. Manoalide therefore promotes oxidative stress by causing ROS accumulation and inhibiting antioxidant enzyme proteins, while the increase in SOD2 generates additional H2O2; together, these effects drive apoptosis and, finally, cell death. Mitochondria play an important role in eukaryotic cells, where their function is to generate ATP through OXPHOS. Our data showed that manoalide reduces both non-mitochondrial (cytoplasmic) and OXPHOS (mitochondrial) respiration, including basal respiratory capacity, ATP production, maximal respiratory capacity, and spare respiratory capacity. The inner mitochondrial membrane has numerous folds housing the components of the respiratory chain, the OXPHOS complexes I to V: multi-subunit enzymes that synergistically generate an electrochemical proton gradient across the inner mitochondrial membrane. Our results further show that manoalide reduces the total amounts of the OXPHOS complex I to V proteins, which, together with complex V (ATP synthase), constitute the machinery for ATP generation [45]. It is worth emphasizing that mitochondrial malfunction occurs before ΔΨm damage, nuclear condensation, and the generation of apoptotic bodies [46]. Our data indicate that the potent cytotoxicity and induction of apoptosis caused by manoalide in OS cells are achieved through the induction of mtROS, mitochondrial dysfunction, and the destruction of ΔΨm. Mitochondria are dynamic organelles that undergo fusion (joining of fragments) and fission (splitting into small fragments). The inner membrane protein OPA1 is required for mitochondrial fusion, and the DRP1 protein is required for mitochondrial fission. For rapid and efficient apoptosis, mitochondria must fragment, with a highly permeabilized outer membrane and remodeled cristae, which controls mitochondrial morphology and prevents content exchange between mitochondria [47]. As a result, mitochondrial fission is critical for the response to oxidative stress and apoptosis [48]. Our findings support the assessment that manoalide-induced apoptosis is accompanied by reduced mitochondrial fusion protein expression and increased mitochondrial fission protein expression in OS cells. Although not all cells or signaling pathways link apoptosis to mitochondria, many studies show that mitochondrial abnormalities play a very important role in the aging process, in the occurrence of many diseases (Parkinson's disease, Alzheimer's disease, Huntington's disease, and cancer), and in cellular apoptosis; in addition, mitochondria are closely tied to the generation of free radicals. Although oral cancer studies showed that manoalide decreased ΔΨm [5,10], there was no evidence of mitochondrial dysfunction in terms of OXPHOS respiration, OXPHOS protein levels, or mitochondrial dynamics. We are the first to discover that manoalide increases mitochondrial fission protein and lowers OXPHOS respiration, OXPHOS complex I–V proteins, and mitochondrial fusion protein.
Conclusions

In light of the present findings, the ROS, mitochondrial malfunction, and mitochondrial (intrinsic) apoptosis pathways of the manoalide-induced apoptosis mechanism in human osteosarcoma MG63 cells are summarized in Figure 6. Initially, the manoalide-induced overproduction of mitochondrial, intracellular, and nuclear ROS was associated with disrupted antioxidant enzymes (Cu-Zn SOD, catalase, and thioredoxin), whereas the increased Mn-SOD antioxidant enzyme led to oxidative stress that damaged the cells, nucleus, and mitochondria. In parallel, manoalide-increased mtROS in MG63 cells led to decreases in the OXPHOS complex I–V proteins of the mitochondrial inner membrane, in ΔΨm, and in ATP production, and caused down-regulation of the mitochondrial fusion protein OPA1 and up-regulation of the mitochondrial fission protein DRP1, resulting in impaired mitochondrial function. Manoalide induced cytotoxicity and apoptosis via the intrinsic apoptosis pathway, activating and cleaving caspases-9/-3 and PARP in OS cells. Adding NAC reversed the manoalide-induced changes in apoptosis pathway proteins, cellular ROS, and antioxidant enzymes, confirming that oxidative stress is central to the action of manoalide. In conclusion, manoalide is a PLA2 inhibitor that shows potential as an innovative alternative treatment in OS, and further advancement of this compound into the preclinical phase is warranted.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/antiox12071422/s1. Figure S1: original and uncropped images of the Western blots for Figures 1G and 2G. Figure S2: original and uncropped images of the Western blots for Figures 3G, 4E and 5A,E.
Fatty acid composition from the marine red algae Pterocladiella capillacea

This study evaluated the chemical composition and antioxidant activity of fatty acids from the marine red algae Pterocladiella capillacea (S. G. Gmelin) Santelices & Hommersand 1997 and Osmundaria obtusiloba (C. Agardh) R. E. Norris 1991. Gas chromatography mass spectrometry (GC-MS) identified nine fatty acids in the two species. The major fatty acids of P. capillacea and O. obtusiloba were palmitic acid, oleic acid, arachidonic acid and eicosapentaenoic acid. The DPPH radical scavenging capacity of the fatty acids was moderate, ranging from 25.90% to 29.97%. Fatty acids from P. capillacea (31.18%) had a moderate ferrous ion chelating (FIC) activity, while in O. obtusiloba (17.17%) it was weak. The ferric reducing antioxidant power (FRAP) of fatty acids from P. capillacea and O. obtusiloba was low. As for β-carotene bleaching (BCB), P. capillacea and O. obtusiloba showed good activity. This is the first report of the antioxidant activities of fatty acids from the marine red algae P. capillacea and O. obtusiloba.

INTRODUCTION

Oxidative stress represents a considerable increase in the intracellular concentration of oxidizing species, such as reactive oxygen species (ROS), accompanied by the simultaneous loss of antioxidant defense. This process can cause tissue damage or cell death, occurring primarily by necrosis and apoptosis. Oxidative stress also plays a key role in inflammatory processes, aging and diseases such as atherosclerosis, cancer, central nervous system disorders, arthritis, diabetes, and cardiovascular and neurological disorders (Parkinson's and Alzheimer's) (Boisvert et al. 2015, O'Sullivan et al. 2011, Tierney et al. 2013). In addition to the damage caused to cellular components, ROS can also break down fatty acids present in food. This change is responsible for the development of rancid odor and flavor, resulting in diminished nutritional quality and safety due to the formation of potentially toxic secondary products (Ngo et al. 2012, O'Sullivan et al. 2011).

Consumption of antioxidants and/or their incorporation in food products is intended to promote a protective effect against these phenomena, thus extending food shelf life. Several synthetic antioxidants, such as butylhydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tert-butylhydroquinone (TBHQ), are available on the market and widely used in food industries. The drawback in the use of these chemical compounds lies in the fact that toxicological studies have shown that, depending on the concentration, synthetic antioxidants can promote the development of tumor cells in rats (Huang and Wang 2004, Souza et al. 2011).

Given the toxic and carcinogenic effects caused by synthetic compounds, the search for natural antioxidants has attracted considerable attention in the last decade. Studies have examined marine organisms as promising sources of bioactive compounds with valuable nutraceutical and pharmaceutical potential, including algae, which are among the richest sources of biologically active molecules with different properties (Dolatabadi and Kashanian 2010, Ngo et al. 2011).
Marine algae live in complex habitats and are subjected to wide fluctuations in temperature, salinity, light, nutrients, and contaminants such as heavy metals, and are thus naturally forced to adapt to changing environmental conditions, producing a wide range of primary and secondary metabolites that cannot be found in organisms from terrestrial environments (Francavilla et al. 2013, Lordan et al. 2011, Rodrigues et al. 2015).

The search for new sources of bioactive compounds with potentially beneficial properties currently has huge importance in the biomedical and pharmacological areas. These compounds have various biological activities and may act as antioxidant, antimicrobial, antiviral, anti-inflammatory, antinociceptive, antitumor, anticoagulant, and anticonvulsant agents. From a nutritional perspective, Western countries have shown growing interest in adopting increasingly healthy eating habits and, in this context, algae have been treated as functional foods (Alencar et al. 2014, 2016, Fernandes et al. 2014, Holdt and Kraan 2011, Plouguerné et al. 2014).

In recent years, the lipid composition of marine algae has attracted the attention of researchers due to the high content of polyunsaturated fatty acids, mainly the α-linolenic, octadecatetraenoic, arachidonic and eicosapentaenoic acids. This class of fatty acids is considered an essential nutritional component for humans and animals, playing an important role in preventing cardiovascular disease, osteoarthritis and diabetes, and also presenting antiviral, anti-inflammatory, antitumor, antimicrobial and antioxidant activities (Kendel et al. 2015). Henry et al. (2002) reported the antioxidant activity of 29 commercially available saturated and unsaturated fatty acids. These authors observed that most unsaturated fatty acids showed good antioxidant activity. These lipophilic constituents of marine algae may be useful in the food industry for protection against lipid peroxidation due to their low polarity and ease of dissolution (Huang and Wang 2004).

This study aimed to analyze for the first time the composition of fatty acids from the lipid fraction present in the marine red algae Pterocladiella capillacea (S. G. Gmelin) Santelices & Hommersand 1997 and Osmundaria obtusiloba (C. Agardh) R. E. Norris 1991, by GC-MS (qualitatively) and GC-FID (quantitatively), and to evaluate their "in vitro" antioxidant activity.

MATERIALS AND METHODS

Collected algae were washed with distilled water to remove impurities and macroscopic epiphytes, placed on absorbent paper to remove excess water, and frozen at -24ºC until analyses. The species were identified in the Department of Fisheries Engineering, Federal University of Ceará. The voucher specimens of P. capillacea and O. obtusiloba were deposited in the Prisco Bezerra Herbarium, Department of Biology of the same University, under the numbers 447310 and 56432, respectively.

LIPID EXTRACTION

Fresh algae were dried in a circulating air oven at 40°C for 15 h and then ground. Portions of dried material of P. capillacea (134 g) and O. obtusiloba (120 g) were exhaustively extracted with cold hexane. The hexane extract (Hex) was concentrated in a rotary evaporator.

FATTY ACID EXTRACTION

Fatty acid extraction followed the method described by Joseph and Ackman (1992). Separately, we weighed 80.3 mg and 50.1 mg of the hexane extracts from P. capillacea and O. obtusiloba, respectively. Then, 6 mL of 0.5 M NaOH solution in methanol were added; the tubes were heated in a water bath at 100°C for 10 min and then cooled to room temperature. After cooling, 6 mL of 14% boron trifluoride (BF3) in methanol were added and the tubes were heated again in a water bath at 100°C for 30 min for methylation of the fatty acids. After cooling to room temperature, 15 mL of saturated sodium chloride solution were added and stirred, followed by 6 mL of n-hexane to extract the fatty acid methyl esters. The organic (hexane) fraction was analyzed for fatty acid composition and quantification by gas chromatography coupled to mass spectrometry (GC-MS) and gas chromatography with flame ionization detection (GC-FID).

Gas chromatography mass spectrometry (GC-MS)

The qualitative analysis of fatty acids in the form of methyl esters was performed on a GC-MS (Shimadzu GC/MS QP-2010 Ultra) with a nonpolar silica capillary column, Restek Rtx-5ms (30 m x 0.25 mm i.d. x 0.25 µm film thickness). The injection volume was 1 µL at a sample concentration of 1,000 µg mL⁻¹, with a 1:10 split. The injector temperature was set at 250°C and the detector temperature at 200°C. The carrier gas was helium at a flow rate of 1.4 mL min⁻¹. The oven temperature was initially kept at 80°C for 2 min and then programmed with increasing gradients of 10°C min⁻¹ from 80°C to 200°C and 4°C min⁻¹ between 200°C and 270°C. The mass spectra were obtained with an ionization voltage of 70 eV and recorded in the range m/z 30-500 Da. Each peak in the chromatogram corresponded to a compound, and each compound was identified based on its retention index (considering a homologous series of C8-C26 n-alkanes), the Kovats Index (KI), by comparing the fragmentation pattern of each compound with the mass spectra deposited in the virtual database and those reported in the literature (Adams 2012).
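The paper does not spell out the retention-index formula; assuming the usual linear (temperature-programmed) form against the C8-C26 n-alkane series, a minimal sketch with hypothetical retention times would be:

```python
# Sketch of a linear retention index (Kovats-type, temperature-programmed GC)
# computed against an n-alkane series. Retention times are hypothetical.

def retention_index(rt, alkane_rts):
    """alkane_rts: {carbon_number: retention_time_min}; rt must be bracketed."""
    carbons = sorted(alkane_rts)
    for n, n_next in zip(carbons, carbons[1:]):
        t_n, t_next = alkane_rts[n], alkane_rts[n_next]
        if t_n <= rt <= t_next:
            return 100 * (n + (rt - t_n) / (t_next - t_n))
    raise ValueError("retention time outside the alkane series")

alkanes = {15: 12.4, 16: 14.1, 17: 15.7}   # hypothetical n-alkane times (min)
print(f"RI = {retention_index(13.6, alkanes):.0f}")  # peak between C15 and C16
```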
Gas chromatography (GC) equipped with flame ionization detector (FID)

Quantitative analysis was performed on a GC equipped with FID, using a nonpolar silica capillary column, Restek Rtx-5ms (30 m x 0.25 mm i.d. x 0.25 µm film thickness), under the same conditions described for GC-MS. The relative amounts of the fatty acids in the algae, expressed as percentages, were calculated from the peak areas in the chromatograms without correction factors, taking the total peak area as 100%.

DPPH RADICAL SCAVENGING CAPACITY

The DPPH radical scavenging activity of fatty acids from the marine red algae P. capillacea and O. obtusiloba was determined according to the method of Blois (1958). The sample consisted of an aliquot of 0.5 mL of fatty acids at different concentrations (from 12.5 to 100 µg mL⁻¹) mixed with 2.5 mL of 75 µM DPPH methanol solution. In the blank sample, the DPPH methanol solution was replaced with MeOH, and in the control, only 3 mL of DPPH methanol solution were used. The tubes (sample, blank sample and control) were incubated in the dark for 30 min at room temperature, and the absorbance was read at 517 nm on a microplate reader (Biochrom Asys UVM 340). Ascorbic acid was used as a positive control. The DPPH radical scavenging percentage was calculated according to the standard expression for this assay: Scavenging activity (%) = [(Abs control − Abs sample)/Abs control] × 100, with the sample absorbance corrected by the blank.
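A minimal sketch of this percentage calculation (our own illustration; the absorbance values are hypothetical):

```python
# Sketch of the DPPH scavenging calculation: percent decrease of DPPH
# absorbance at 517 nm relative to the control. Values are hypothetical.

def dpph_scavenging(abs_sample, abs_blank, abs_control):
    """Blank-corrected DPPH radical scavenging percentage."""
    corrected = abs_sample - abs_blank     # remove the sample's own absorbance
    return (abs_control - corrected) / abs_control * 100.0

print(f"{dpph_scavenging(abs_sample=0.62, abs_blank=0.03, abs_control=0.80):.1f}%")
```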
FERROUS ION CHELATING (FIC) ACTIVITY

Determination of the FIC activity of fatty acids from the marine red algae P. capillacea and O. obtusiloba followed the method of Wang et al. (2009). The sample consisted of 1 mL of fatty acids at different concentrations (from 12.5 to 100 µg mL⁻¹), 1.35 mL of distilled water, 50 µL of 2 mM ferrous chloride and 100 µL of 5 mM ferrozine. In the blank sample, 100 µL of distilled water replaced the ferrozine, while in the control, 1 mL of water was used instead of fatty acids. Sample, blank sample and control were incubated for 10 min at room temperature, and the absorbance was read at 562 nm on a microplate reader (Biochrom Asys UVM 340). Ethylenediaminetetraacetic acid (EDTA) was used as a positive control. The FIC percentage was calculated according to the standard expression for this assay: FIC (%) = [(Abs control − Abs sample)/Abs control] × 100, with the sample absorbance corrected by the blank.

FERRIC REDUCING ANTIOXIDANT POWER (FRAP)

Determination of the FRAP of fatty acids from the marine red algae P. capillacea and O. obtusiloba was made according to Ganesan et al. (2008). To 1 mL of fatty acids at different concentrations (from 12.5 to 100 µg mL⁻¹) were added 2.5 mL of 0.2 M phosphate buffer (pH 6.6) and 2.5 mL of 1% potassium ferricyanide. This mixture was incubated for 20 min at 50°C, cooled in ice water and then mixed with 2.5 mL of 10% trichloroacetic acid. After stirring, a 2.5 mL aliquot was taken and mixed with 2.5 mL of distilled water and 0.5 mL of 0.1% ferric chloride. After 10 min of incubation at room temperature, the absorbance was read at 700 nm on a microplate reader (Biochrom Asys UVM 340). Butylhydroxyanisole (BHA) was used as a positive control. Increases in absorbance indicate increases in FRAP; that is, the higher the absorbance, the greater the FRAP.

β-CAROTENE BLEACHING (BCB)

Determination of the BCB activity of fatty acids from the marine red algae P. capillacea and O. obtusiloba was performed by the method of Chew et al. (2008), with minor modifications. To 400 mg of Tween 40 emulsifier were added 2.5 mg of β-carotene and 40 mg of linoleic acid, both solubilized in chloroform. The chloroform was then evaporated in a rotary evaporator, and 100 mL of O2-saturated ultrapure water was added. The mixture was vigorously stirred to form an emulsion, from which 3 mL aliquots were taken and mixed with 1 mL of fatty acids at different concentrations (from 12.5 to 100 µg mL⁻¹); the initial absorbance was read at 470 nm. Tubes were incubated at 50°C for 3 h, after which the absorbance was read again at the same wavelength. Both readings were performed in a microplate reader (Biochrom Asys UVM 340). Butylhydroxyanisole (BHA) was used as a positive control. The antioxidant activity was calculated according to the expression commonly used for this assay: AA (%) = [1 − (Abs0 sample − Abst sample)/(Abs0 control − Abst control)] × 100, where Abs0 and Abst are the absorbances at time zero and after 3 h of incubation, respectively.

STATISTICAL ANALYSIS

All data are presented as mean ± standard deviation. The results were subjected to one-way ANOVA followed by Tukey's test, whenever the null hypothesis was rejected, at the 5% significance level (p < 0.05).
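A minimal sketch of the bleaching-rate comparison behind the BCB calculation above (our own illustration; the absorbance values are hypothetical):

```python
# Sketch of the beta-carotene bleaching (BCB) antioxidant activity: the sample's
# absorbance decay at 470 nm over 3 h is compared with the control's decay.
# All values are hypothetical.

def bcb_activity(a0_sample, at_sample, a0_control, at_control):
    """Antioxidant activity (%) from initial (t=0) and final (t=3 h) readings."""
    sample_decay = a0_sample - at_sample
    control_decay = a0_control - at_control
    return (1.0 - sample_decay / control_decay) * 100.0

print(f"AA = {bcb_activity(0.90, 0.55, 0.92, 0.20):.1f}%")
```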
CHEMICAL COMPOSITION OF FATTY ACIDS OF THE HEXANE EXTRACT OF MARINE RED ALGAE

The fatty acid composition of the Hex extracts from the marine red algae P. capillacea and O. obtusiloba was determined by GC-MS (Figure 1). The identified fatty acids were classified into saturated fatty acids (myristic acid, palmitic acid and stearic acid), monounsaturated fatty acids (palmitoleic acid, oleic acid and elaidic acid) and polyunsaturated fatty acids (linoleic acid, arachidonic acid and eicosapentaenoic acid). This is the first report on the fatty acid composition of the marine red algae P. capillacea and O. obtusiloba. The fatty acid profiles of the Hex extracts from the two species were similar, with a difference in the monounsaturated fraction.

The fatty acid profile of P. capillacea showed a high percentage of saturated fatty acids (90.6%) due to the content of palmitic acid (C16:0), which alone contributed 88.8% of the total. Oleic acid (C18:1 cis) was the major constituent among the monounsaturated fatty acids, and arachidonic (C20:4) and eicosapentaenoic (C20:5) acids among the polyunsaturated fatty acids.

In O. obtusiloba, the percentage of saturated fatty acids was 63.4%, due to the contents of palmitic (C16:0) and myristic (C14:0) acids, which together amounted to 61.6% of the total. Oleic acid (C18:1 cis) and nonadecenoic acid (C19:1 n9) were the major constituents of the monounsaturated fatty acids, while arachidonic (C20:4) and eicosapentaenoic (C20:5) acids predominated among the polyunsaturated fatty acids. Other studies on the fatty acids of an alga belonging to the same phylum (Rhodophyta) as P. capillacea and O. obtusiloba, Gracilaria gracilis, showed that it is rich in polyunsaturated fatty acids, especially arachidonic and eicosapentaenoic acids (Francavilla et al. 2013).

Fatty acids present in seaweed are important for human and animal health; they are precursors of eicosanoids and act as bioregulators of cellular processes (Khotimchenko 2005). The polyunsaturated ω-3 fatty acids eicosapentaenoic (EPA) and docosahexaenoic (DHA) are recognized as cardioprotective, reducing triglyceride and cholesterol levels, with anti-inflammatory and anti-cancer effects (Francavilla et al. 2013). The linoleic and arachidonic acids present in P. capillacea and O. obtusiloba have the functions described above.

The fatty acids exhibited DPPH radical scavenging activity at all concentrations tested (Figure 2). There was a small increase in activity (from 25.90% to 29.97%) with increasing concentration, but without statistical significance (p > 0.05). The activity of the positive control (ascorbic acid) was superior to that of the samples tested, ranging from 64.23% to 96.82%. These results were expected, since the crude hexane extracts (Hex) from P. capillacea and O. obtusiloba showed moderate DPPH activity, 30.49% and 35.55%, respectively (Alencar et al. 2016). Similar values were obtained by Patra et al. (2015) for hexadecanoic acid, the major constituent of the oil extracted from the marine green alga Enteromorpha linza, which presented a DPPH activity of around 30% at a concentration of 100 µg L⁻¹.

There was an inverse relationship between fatty acid concentration and FIC activity (Figure 3): as the former increased, the latter decreased. At all concentrations tested, the chelating activity of the fatty acids from P. capillacea was higher than that of O. obtusiloba; at 12.5 µg mL⁻¹, for example, they showed activities of 31.18% and 17.17%, respectively.

The ferric reducing antioxidant power (FRAP) of the fatty acids present in the Hex extracts from the marine red algae P. capillacea and O. obtusiloba was low (Figure 4). Unlike what was observed for FIC, no dose-dependent relationship could be detected. No concentration showed a statistically significant difference between the activities of the fatty acids of the two algae, except 50 µg L⁻¹. The activity of the positive control (BHA) was superior to that of the samples tested, with absorbance varying between 0.124 and 0.371.

Figure 5 illustrates the antioxidant activity measured by the β-carotene bleaching test for the fatty acids present in the Hex extracts from the marine red algae P. capillacea and O. obtusiloba.
The same inverse relationship between concentration and activity observed for FIC was seen here: the highest concentration had the lowest antioxidant activity. For example, the P. capillacea extract at 12.5 µg mL⁻¹ showed 61.24% activity, while the O. obtusiloba extract exhibited its highest activity (49.13%) at 50 µg mL⁻¹. None of the samples showed activity superior to that of the positive control, BHT.

The antioxidant activities measured by the aforementioned methods are associated with the fatty acid composition: P. capillacea has a greater amount of saturated fatty acids (90.6%) and less mono- and polyunsaturated fatty acids (9.4%), whereas in O. obtusiloba the saturated fatty acid content is 63.4% and the mono- and polyunsaturated content 36.6%. Possibly the mono- and polyunsaturated fatty acids are more susceptible to oxidation promoted by catalysts such as metal ions or hydroperoxide radicals, owing to their degree of unsaturation.

The antioxidant activity of (saturated and unsaturated) fatty acids is related to the composition of these acids found in marine algae. Henry et al. (2002) evaluated the antioxidant activity of saturated and unsaturated fatty acids and verified that saturated fatty acids, such as myristic, palmitic and lauric acids, showed the best antioxidant activity in products of vegetable origin. These authors also claimed that the antioxidant activity is directly related to the length of the hydrocarbon chain of the fatty acid molecule.

According to Huang and Wang (2004), the antioxidant activity is also associated with the composition of (saturated and unsaturated) fatty acids present in algae. They showed that antioxidant activity increases with the content of unsaturated fatty acids; therefore, unsaturated fatty acids seem to be the main contributors to the antioxidant activity of lipophilic extracts of marine algae. The antioxidant activity of fatty acids in marine algae is poorly addressed in the literature; the few studies on the subject discuss the activity measured by DPPH scavenging capacity and β-carotene bleaching.

CONCLUSIONS

This is the first report on the fatty acid composition (analyzed qualitatively, as fatty acid methyl esters, by GC-MS and quantitatively by GC-FID) of the marine red algae Pterocladiella capillacea (S. G. Gmelin) Santelices & Hommersand 1997 and Osmundaria obtusiloba (C. Agardh) R. E. Norris 1991, as well as on the antioxidant activity of these compounds.

The fatty acid profile of P. capillacea showed a high percentage of saturated fatty acids, mainly because of the content of palmitic acid (C16:0). The major constituent among the monounsaturated fatty acids was oleic acid (C18:1 cis), and among the polyunsaturated fatty acids, arachidonic (C20:4) and eicosapentaenoic (C20:5) acids.

Using the β-carotene bleaching (BCB) method, the fatty acids showed antioxidant activity above 50% at the lowest concentrations, suggesting that these algae can be sources of beneficial supplements for animal and human health. In addition, fatty acids from marine algae can be used in the food industry to enrich food and provide a protective effect against lipid oxidation, thus extending food shelf life.

Figure 1 - Chromatograms of fatty acids from the marine red algae Pterocladiella capillacea (a) and Osmundaria obtusiloba (b) obtained by gas chromatography mass spectrometry (GC-MS).
NATURE AND MAGNITUDE OF GENETIC VARIABILITY, HETEROSIS AND INBREEDING DEPRESSION IN AMARANTHUS

Combining ability, heterosis and inbreeding depression were estimated in grain amaranths for ten characters. Non-additive genetic variance was predominant for the majority of characters in both F1 and F2 generations. The parent AG-21 was a good general combiner for yield/plant and also showed high GCA effects for panicles/plant and harvest index in both F1 and F2 generations. For seven characters, the best F2s on the basis of SCA involved one parent with a high GCA effect and the other with poor or average GCA effects. The hybrids which exhibited the highest heterosis also showed high inbreeding depression. Heterosis over the better parent was highest for economic grain yield (145.047%), followed by panicles/plant (113.675%), panicle length (33.656%) and grain weight/panicle (23.566%).

INTRODUCTION

Grain amaranths of the genus Amaranthus comprise about 20 species (wild and cultivated types) distributed throughout the world (SAUER, 1967). There are three species of grain amaranths which produce large seed heads of edible, light-coloured seeds. A. cruentus L. and A. hypochondriacus L. are native to Mexico and Guatemala, and A. caudatus is native to the Andean regions of Ecuador, Peru and Bolivia (SAUER, 1967). Among these, A. hypochondriacus, cultivated throughout the world for its seeds, occupies an important position among the pseudocereals due to its high yield and protein content (BERGHOFER and SCHOENLECHNER, 2002). In India the species is widely cultivated for its seed from Kashmir to Arunachal Pradesh. It is a partially cross-pollinated crop, with up to 40% outcrossing depending upon the prevailing environmental conditions (WALTON, 1968, JAIN et al., 1982, SIMMONDS, 1979, PAL, 1972). Breeding methods for the improvement of allogamous crops should be based on the nature and magnitude of the genetic variance (combining ability) controlling the inheritance of quantitative traits. Selection of crosses may be based on specific combining ability and per se performance linked with heterosis and inbreeding depression for cross exploitation. The present study is an attempt in this direction, undertaken to estimate the combining ability of the F1 and F2 populations and the magnitude and direction of heterosis and inbreeding depression for yield and yield components in grain amaranth (Amaranthus hypochondriacus L.).

MATERIALS AND METHODS

The material for the present investigation comprised six accessions of A. hypochondriacus, namely AG-16 (Calicut, Kerala), AG-26 (Mahabaleshwar, Maharashtra), AG-19/2 (NBRI, Lucknow), AG-24 (Garhwal, Uttarakhand), AG-21 (Barabanki, U.P.)
and AG-19/1 (NBRI selection), and their 15 cross combinations in F1 and F2 generations. These crosses were developed using a diallel mating system. The experimental material was planted in a randomized complete block design with three replications in the rabi season at the National Botanical Research Institute, Lucknow. The parents, check and F1s were raised in single rows and the F2s in two rows, with 3 m row length and row-to-row and plant-to-plant distances of 40 cm and 15 cm, respectively. Observations were recorded on 10 random plants per plot in the parents and F1s and on 30 plants in the F2 generation for 10 characters. Combining ability analysis was done according to Method 2, Model I of GRIFFING (1956). Heterosis over the mid parent (MP) and the better parent (BP) was calculated as (F1 − MP)/MP × 100 and (F1 − BP)/BP × 100, respectively. Inbreeding depression was worked out as (F1 − F2)/F1 × 100. The t-test of inbreeding depression (ID) was calculated as the estimated value of ID divided by the standard error of the mean.

RESULTS
The variance due to GCA was highly significant for almost all characters in both generations except test weight (1000 grains); similarly, the variance due to SCA was highly significant for almost all characters in both F1 and F2, except for grain weight/panicle and harvest index in F1 and F2 and days to flowering in F1 (Table 1), indicating that the parents and crosses differ significantly in their combining ability effects. The estimates of the GCA (σ²gi) and SCA (σ²sij) variances reflected that both additive and non-additive genetic components are involved in determining the inheritance of these characters. The magnitude of σ²sij was higher than that of σ²gi for almost all characters in both generations. Hence, non-additive genetic variance plays the more important role in the inheritance of these traits. RUIZ et al. (2004) and ORTIZ and GOLMIRZAIE (2004) have reported similar observations in potato. Although the additive gene action would imply some scope for selection in segregating generations, the presence of marked non-additive action suggests that population improvement followed by recurrent selection, to accumulate desirable genes and facilitate the breaking of linkages through disruptive selection, would be more appropriate.

The estimates of GCA effects (Table 2) revealed that AG-19/2 and AG-19/1 are good general combiners for 6 out of 10 characters (days to flowering, days to maturity, panicles/plant, panicle length and test weight), AG-16 and AG-26 for dwarfness in F1 and F2, AG-21 for grain yield/plant in both generations, AG-24 for harvest index in F1 and F2, and AG-16 for protein content in both generations. The difference in the genetic make-up of the two generations may be responsible for the difference in combining ability of the different parents. It is clear from Table 2 that the parent AG-21, which is a good general combiner for grain yield, also showed high GCA effects for panicles/plant and harvest index in the F1 and F2 generations. The range of MP heterosis and BP heterosis, designated heterobeltiosis (FONSECA and PETERSON, 1968), and the best hybrid on the basis of mean performance are presented in Table 3. The extent of inbreeding depression for various characters in F2 is given in Table 4.
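The heterosis and inbreeding-depression formulas above translate directly into code. The following is a minimal sketch of those calculations; the trait means used in the example are hypothetical values chosen only to illustrate the arithmetic, not data from this experiment.

# Minimal sketch of the heterosis and inbreeding depression formulas:
#   MP heterosis (%) = (F1 - MP) / MP * 100, with MP = (P1 + P2) / 2
#   BP heterosis (%) = (F1 - BP) / BP * 100, with BP = max(P1, P2)
#   Inbreeding depression (%) = (F1 - F2) / F1 * 100
# The trait means below are hypothetical example values.

def mid_parent_heterosis(p1, p2, f1):
    mp = (p1 + p2) / 2.0
    return (f1 - mp) / mp * 100.0

def better_parent_heterosis(p1, p2, f1):
    bp = max(p1, p2)   # assumes higher trait values are better
    return (f1 - bp) / bp * 100.0

def inbreeding_depression(f1, f2):
    return (f1 - f2) / f1 * 100.0

if __name__ == "__main__":
    p1, p2, f1, f2 = 10.2, 12.8, 18.5, 15.9   # grain yield/plant, hypothetical
    print(f"MP heterosis: {mid_parent_heterosis(p1, p2, f1):.3f}%")
    print(f"BP heterosis: {better_parent_heterosis(p1, p2, f1):.3f}%")
    print(f"Inbreeding depression: {inbreeding_depression(f1, f2):.3f}%")

Note that for characters where lower values are desirable (for example, days to flowering), the "better parent" would be the minimum rather than the maximum; the sketch assumes the higher-is-better case.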
Five crosses flowered earlier than their respective early parents. The cross AG-19/2 x AG-24 was the earliest of all. AG-26 x AG-19/2 was the only hybrid shorter than the others, and it exhibited the highest heterosis over the better parent for dwarfness. None of the F2 progenies was, on a pooled basis, earlier than its F1 hybrid. Five F2 progenies were shorter in plant height than the corresponding F1 hybrid (Table 4). Heterosis over the better parent was highest for grain yield/plant (145.047%), followed by panicles/plant (113.675%), panicle length (33.656%) and grain weight/panicle (23.566%).

For grain yield, seven hybrids were significantly superior to both the mid parent and the better parent. The cross AG-24 x AG-19/1 showed the best heterosis (Table 3). The inbreeding depression for grain yield ranged from −2.043 to 17.791 (Table 4). Ten hybrids were significantly superior to their better parent for harvest index. In the F2 generation, the best F2s for four characters out of 10, including protein content, involved one parent with high GCA effects and the other with poor GCA effects. Of these characters, plant height, test weight and grain weight/panicle also had minimal or negative inbreeding depression; such F2s may throw up desirable transgressive segregants. Such observations were also reported in rapeseed (RAI and VARSHNEY, 1983). For all the characters, the best F1 on the basis of mean performance, heterosis over the better parent and SCA effect was the same. Hence, equal importance should be given to per se performance while making selections for these attributes.

In general, the hybrids that showed high heterosis for grain yield also had high heterosis for plant height and protein content, besides other yield-contributing characters. Such a situation of 'combinational heterosis' was also reported in rapeseed (HAGBERG, 1952; DAS and RAI, 1972; VARSHNEY, 1985). A close relationship between heterosis response and inbreeding depression (i.e., hybrids showing high heterosis were also linked with high inbreeding depression), together with the high magnitude of σ²sij, suggests the importance of non-additive gene action in the inheritance of these characters in grain amaranths. Similar observations were reported in rapeseed (RAI and VARSHNEY, 1983), Phaseolus (SINGH and SINGH, 1970), mungbean (TIWARI et al., 1993), lentil (GUPTA and SINGH, 1994) and barley (BHATNAGAR and SHARMA, 1995; HAYS and PARODA, 1974; HAUNG, 1984; EINFELDT et al., 2005). The cross AG-19/2 x AG-19/1 had the highest yield in F1, coupled with a high harvest index and high positive heterosis for these traits, with high SCA effects in both F1 and F2 generations. Such crosses may be exploited for yield in this crop.

Table 1. ANOVA (M.S.S.) for combining ability parameters in F1 and F2 generations of grain amaranths
Table 3. Range of heterosis and best hybrid for various quantitative characters in grain amaranths
Table 4. Range of inbreeding depression (I.D.) and crosses showing lowest and highest I.D. for different characters in F2 generation
2018-12-21T01:36:54.298Z
2007-01-01T00:00:00.000
{ "year": 2007, "sha1": "b5415acee357b3e8219cc8457332b6f007f5cdbe", "oa_license": null, "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0534-00120702251P", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "b5415acee357b3e8219cc8457332b6f007f5cdbe", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
209246677
pes2o/s2orc
v3-fos-license
Deep neck space infection and Lemierre's syndrome caused by Streptococcus anginosus: A case report

Introduction
Deep neck space infections most commonly arise from a septic focus of the mandibular teeth, tonsils, parotid gland, middle ear or sinuses, usually with a rapid onset and frequently with progression to life-threatening complications. Lemierre's syndrome is classically defined by an oropharyngeal infection with internal jugular vein thrombosis followed by metastatic infections in other organs.

Case presentation
A 32-year-old female patient, with no significant past medical history, was diagnosed with a dental abscess on her left inferior 3rd molar. Six days later, the condition was complicated by severe upper respiratory distress, odynophagia and trismus, with extension of the inflammatory signs to the anterior cervical region, involving the upper airway. Computed tomography confirmed extension to the submandibular, parapharyngeal and retrosternal spaces, which required nasotracheal intubation due to the compromised airway. Urgent and subsequent surgical drainages were performed, alongside concomitant antibiotic therapy. Additionally, left internal jugular vein thrombosis was described, with later extension to the brachiocephalic vein but without other complications, consistent with Lemierre's syndrome, although without its full features. Streptococcus anginosus was identified in the drained pus specimens. The patient made satisfactory clinical progress and was discharged after 25 days, still under therapeutic hypocoagulation.

Conclusion
As deep neck space infections can be life-threatening, clinicians must be aware of and not underestimate their potential severity. Lemierre's syndrome is a complication that is difficult to recognize and requires additional awareness of its many possible presentations, for appropriate diagnostic studies and an adequate therapeutic plan.

Introduction
Dentoalveolar infections are among the most common diseases of the oral and maxillofacial region, with acute dental abscess usually occurring secondary to dental caries, trauma or failed root treatment [1,2]. Deep neck space infections (DNSI) most commonly arise from a septic focus of the mandibular teeth, tonsils, parotid gland, middle ear or sinuses, usually with a rapid onset and frequently with progression to life-threatening complications, particularly the risk of a compromised airway [3]. Complications are associated with a mortality rate of up to 40%, mainly in the presence of mediastinal extension; however, with the rise of modern antibiotics, mortality rates have diminished significantly [1,3,4].

Although most oropharyngeal infections are self-contained, they can still spread through the fascia and deep neck spaces while progressing inferiorly into the mediastinum. Multiple severe complications of dentoalveolar infection have been reported, such as airway obstruction, Ludwig's angina, descending mediastinitis and necrotizing fasciitis, and any site typical of septic embolic spread may be involved [4-8].

Systematically described for the first time in The Lancet (1936) by the French professor of bacteriology André Lemierre, while working at the Claude Bernard Hospital in Paris, Lemierre's syndrome (LS) is classically defined by a pharyngeal or odontogenic infection, complicated by bacteraemia and internal jugular vein thrombosis followed by septic emboli, usually occurring in otherwise healthy adults [9-12]. Other known historical names are postanginal anaerobic septicaemia and necrobacillosis [10].
Due to the introduction of antibiotics and because of its presumably very low prevalence, LS has been referred to as "the forgotten disease" [10,12-14]. The most commonly involved and described bacterium is Fusobacterium necrophorum, but others, such as Streptococcus spp., Staphylococcus spp. and Enterococcus spp., are also found in cultures [10,14,15].

We present a case of deep neck space infection and LS caused by Streptococcus anginosus. We consider this case to be relevant because it represents an incomplete LS, caused by a microbial agent that is not the most commonly identified in this entity, although it has already been described in the medical literature. In addition, we aim to describe and discuss our clinical approach in detail, in order to increase the available data and knowledge about this particular disease.

Case presentation
A 32-year-old female patient with no significant past medical history and no known allergies was diagnosed with a dental abscess on her left inferior 3rd molar, in a countryside hospital, in early May 2019. She was initially treated, in the ambulatory setting, with oral amoxicillin/clavulanate 875 mg/125 mg every 12 h, which she was taking correctly. On the 5th to 6th day of treatment she noticed odynophagia and a painful left submandibular oedema, which led her to return to the emergency department. On medical examination the patient already presented severe upper airway respiratory distress, trismus and extension of the inflammatory signs to the anterior cervical region. Her vital signs were stable, with an oxygen saturation (FiO2 21%) of 97%, and no fever was documented. The initial laboratory results revealed a leukocytosis of 14,760/µL and a C-reactive protein of 33.36 mg/dL, with no other analytical values significantly altered. Cervical and thoracic computed tomography (CT) with contrast showed diffuse thickening of the soft tissues with phlegmon and emphysema from the oral cavity to the anterior mediastinum, near the aortic arch. Abscess extension was confirmed to the submandibular, parapharyngeal and retrosternal spaces, with secondary bulging of the walls of the airway at the base of the tongue, associated with left internal jugular vein thrombosis (Fig. 1).

The patient was promptly transferred to our tertiary referral centre to be evaluated by Maxillofacial Surgery. Due to a predictably difficult airway, after proper sedation, fibreoptic nasotracheal intubation was performed in the operating room. The patient underwent surgical drainage of the abscesses and was later admitted to the Intensive Care Unit for airway management. Daily bedside drainage of the submandibular region was performed, and the patient was started on intravenous corticosteroids (10 mg of dexamethasone every 12 h for 3 days) and empiric antibiotic therapy with ceftriaxone 2 g every 24 h and metronidazole 500 mg every 6 h. Because of extension of the vein thrombosis to the brachiocephalic vein on the 48-h reassessment CT scan, intravenous unfractionated heparin was also initiated. Direct examination of the surgically drained pus by Gram staining demonstrated polymicrobial flora with a predominance of gram-positive cocci and gram-negative bacilli. Later, abscess cultures grew Streptococcus anginosus, susceptible to amoxicillin/clavulanate, carbapenems and metronidazole. The 2 sets of blood cultures collected after the first administration of the intravenous antimicrobials were sterile. An HIV 1/2 antibodies/antigen assay was negative. All serum immunoglobulins were also normal.
The patient showed clinical and imaging improvement, with successful extubation 8 days after the surgery, and was transferred to the infirmary for postoperative care. Although CT re-evaluation demonstrated persistence of the left internal jugular vein thrombosis, she made favourable clinical progress with defervescence and resolution of the neck tenderness, and recovered without any sequelae or new infectious or thrombotic complications, having completed a total of 24 days of ceftriaxone and metronidazole. The patient was discharged 25 days after admission, under anticoagulation with enoxaparin 80 mg every 12 h for an expected 3-month duration, with reassessments in the Vascular and Maxillofacial Surgery clinics.

Discussion
Odontogenic infections, such as abscesses of the 3rd molar tooth, are classified according to the morphological location as peritonsillar, pharyngeal or submandibular infections. The severity of these infections increases with lack of adequate treatment, mainly when there is no effective septic focus control, potentially evolving into severe DNSI, a complication of which this case is an example [1,4,5].

Because of its good sensitivity in characterizing soft tissues (varying between 60% and 100%), contrast-enhanced cervical CT is often used as the gold-standard method to assess the extent of DNSI [3]. Ultrasonography is easily available and free of ionizing radiation, but it is less sensitive for deeper cervical tissues and for recently formed thrombi in case of thrombotic complications [11]. Concerning airway management, these lesions most frequently affect the airway at the level of the epiglottis or aryepiglottic fold. When performed by experienced physicians, fibreoptic intubation might be the first choice to secure the airway in these patients, providing a safe and atraumatic procedure [1,3].

Although without all the classical features, we consider that this clinical case may be interpreted as LS. No bacteraemia or metastatic infectious lesions were identified, but the patient had previously been under broad-spectrum antibiotic therapy, and the first blood cultures were collected after the first administration of intravenous antibiotics. Some reports have noted a resurgence of LS in recent years, sometimes without the full traditional presentation. One of the proposed hypotheses is that educational campaigns and trends against prescribing antibiotics for sore throats or upper airway infections may lead to more cases developing complications that would otherwise have been prevented [16].

The causative organisms of internal jugular vein thrombophlebitis are usually members of the normal oropharyngeal flora. The most commonly and historically described pathogen is the anaerobe Fusobacterium necrophorum. Other pathogens include other Fusobacterium species as well as organisms such as Eikenella corrodens, Porphyromonas asaccharolytica, Bacteroides spp. and streptococci, including Streptococcus pyogenes and the Streptococcus anginosus group (also known as the S. milleri group: S. anginosus, S. intermedius and S. constellatus). Even Klebsiella pneumoniae has been described [11,12,14].

Regarding septic focus control, it is evident from the literature that the priority in the treatment of LS is immediate intravenous broad-spectrum antibiotic therapy with anaerobic coverage until the organism and its susceptibility have been determined, alongside surgical drainage of the infected site.
Most microbiologists recommend beta-lactamase-resistant antibiotics with anaerobic activity. Once the microbial agent is confirmed by the laboratory, therapy should be targeted [10,11]. F. necrophorum is usually susceptible to penicillin, clindamycin, metronidazole and chloramphenicol. Resistance to penicillin is not found at a relevant frequency. The mean described duration of antibiotic treatment is 4 weeks, but it ranges from 10 days to 8 weeks [11,17]. Given the Gram stain results and the evidence of soft tissue emphysema, we continued the empiric antibiotic coverage. After nearly 3 weeks of almost daily surgical drainage and monitoring for clinical resolution and control of the septic focus, antibiotics were discontinued.

The role of corticosteroids in the management of deep cervicofacial infections still lacks consensus [11,16]. However, optimal management has been studied as new evidence and reports emerge, and it seems that short-term use of high-dose corticosteroids, as an adjunctive therapy to intravenous antibiotics with proper incision and drainage as clinically needed, is safe and effective in the management of various cervicofacial infections. In several retrospective studies, no negative side effects from the acute use of corticosteroids were reported, but their type and dosage regimens were not recorded in detail [18]. Further investigation is needed to determine the role of corticosteroids in the treatment of patients with DNSI.

Currently, the most controversial aspect of LS management is the use of anticoagulation, owing to the rarity of the syndrome and the subsequent lack of controlled studies [12]. The most pertinent questions to answer when anticoagulation is initiated are why, and for how long, to maintain it. It has been argued that the thrombosis associated with LS will resolve spontaneously, but it is also unclear whether anticoagulation hastens the resolution of thrombosis [19]. Successful treatment has been described in patients with or without anticoagulation, in conjunction with antimicrobial therapy [11,12,17]. The American College of Chest Physicians 2012 guidelines recommend anticoagulation for 3 months, with an associated reduction in recurrent thromboembolism in patients with bland internal jugular vein thrombosis [19]. Some authors strongly recommend anticoagulation for a select group of patients with: a) lack of response despite 48-72 hours of adequate antimicrobial therapy, b) persistent bacteraemia, c) underlying thrombophilia and/or d) progression to intracranial thrombosis [12,17,20]. In patients where anticoagulation has been initiated, the optimal duration is unclear and may range from 2 weeks to 6 months [20]. In some cases, internal jugular venous thrombosis may persist after the infection has resolved, even despite anticoagulation, while others have reported resolution after just 2 weeks [20,21].

Considering all these aspects, LS should be considered in the differential diagnosis in patients presenting with persistent sore throat, mastoiditis or a recent history of a dental procedure, accompanied by neck pain and swelling and potentially involving the airway. Blood cultures should be obtained prior to antibiotic therapy, and CT imaging of the neck with intravenous contrast should be performed. Treatment of DNSI and LS involves proper antibiotic therapy and surgical drainage of the infected site, while anticoagulation therapy in the latter, although controversial, should probably be considered in selected cases. The benefits of adjunctive corticotherapy are yet to be proven.
These interventions will enable timely diagnosis and treatment, with improved outcomes.

Sources of funding
The authors did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors.

Informed consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
2019-11-14T17:09:08.879Z
2019-11-09T00:00:00.000
{ "year": 2019, "sha1": "32db322c17585867ac261d9f81100a993514d7ae", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.idcr.2019.e00669", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8f23e787fc8783a3f3de61fe05c138d9accbf28f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
23758863
pes2o/s2orc
v3-fos-license
A Src family kinase-Shp2 axis controls RUNX1 activity in megakaryocyte and T-lymphocyte differentiation.

Hematopoietic development occurs in complex microenvironments and is influenced by key signaling events. Yet how these pathways communicate with master hematopoietic transcription factors to coordinate differentiation remains incompletely understood. The transcription factor RUNX1 plays essential roles in definitive hematopoietic stem cell (HSC) ontogeny, HSC maintenance, megakaryocyte (Mk) maturation, and lymphocyte differentiation. It is also the most frequent target of genetic alterations in human leukemia. Here, we report that RUNX1 is phosphorylated by Src family kinases (SFKs) and that this occurs on multiple tyrosine residues located within its negative regulatory DNA-binding and autoinhibitory domains. Retroviral transduction, chemical inhibitor, and genetic studies demonstrate a negative regulatory role of tyrosine phosphorylation on RUNX1 activity in Mk and CD8 T-cell differentiation. We also demonstrate that the nonreceptor tyrosine phosphatase Shp2 binds directly to RUNX1 and contributes to its dephosphorylation. Last, we show that RUNX1 tyrosine phosphorylation correlates with reduced GATA1 and enhanced SWI/SNF interactions. These findings link SFK and Shp2 signaling pathways to the regulation of RUNX1 activity in hematopoiesis via control of RUNX1 multiprotein complex assembly.

Master transcription factors play key roles in hematopoiesis by regulating cell fate decisions and terminal cell maturation (Orkin and Zon 2008). Many of these factors act in combinatorial self-stabilizing networks that reinforce selected cell identity programs while repressing alternate lineage choices (Graf and Enver 2009). At the same time, hematopoiesis occurs in complex microenvironments and is influenced by external signaling events. Yet how cell signaling pathways communicate with the transcriptional networks to modulate differentiation remains incompletely understood. In humans, acquired RUNX1 deficiency is an early initiating event in up to 30% of all human leukemias (for review, see Speck and Gilliland 2002).
RUNX1 mutations are also a poor prognostic indicator in de novo myelodysplastic syndrome (MDS) and myeloproliferative neoplasms (MPNs) (Nakao et al. 2004; Bejar et al. 2011; Vainchenker et al. 2011). Germline mutations leading to RUNX1 haploinsufficiency cause familial platelet disorder with propensity to develop AML (FPD/AML), an autosomal dominant syndrome characterized by thrombocytopenia, platelet dysfunction, and an ~35% lifetime risk of developing MDS/AML (Song et al. 1999; Owen et al. 2008). Altered RUNX1 expression also predisposes to lymphoma in mice (Wotton et al. 2002; Kundu et al. 2005). Thus, tight regulation of RUNX1 activity levels is critical for normal hematopoiesis.

RUNX1 contains a number of autoinhibitory domains (IDs) that control its function. A negative regulatory DNA-binding (NRDB) domain inhibits DNA association. This is relieved when RUNX1 physically interacts with CBFβ and/or ETS family transcription factors (Ogawa et al. 1993; Goetz et al. 2000). Likewise, an ID located C-terminal to the transcriptional activation domain (AD) dampens transcriptional activity (Kanno et al. 1998). The mechanism that relieves this autoinhibition is unknown.

Tyrosine phosphorylation plays critical roles in cellular signaling events, particularly those controlling proliferation in response to cytokine, cell-cell, and cell-matrix interactions. Moreover, protein tyrosine kinases and phosphatases are frequently dysregulated in MPNs and hematologic malignancies. Although tyrosine phosphorylation is typically described in the context of membrane receptors and cytoplasmic proteins, transcription factors and other nuclear proteins can be functionally modified by tyrosine phosphorylation. In the present study, we show that RUNX1 is tyrosine phosphorylated on its NRDB and ID domains by Src family kinases (SFKs) and that this negatively regulates RUNX1 activity in megakaryocytic and T-lymphocyte differentiation. We also provide evidence that the nonreceptor tyrosine phosphatase Shp2 contributes to dynamic RUNX1 tyrosine dephosphorylation and that tyrosine phosphorylation alters RUNX1 multiprotein complex formation.

The physical association of RUNX1 with a tyrosine kinase and phosphatase led us to hypothesize that RUNX1 may be tyrosine phosphorylated itself. To test this, Flag-tagged and metabolically biotinylated RUNX1 (Flag-Bio RUNX1) was immunoprecipitated from nuclear extracts of uninduced L8057 cells with an anti-Flag antibody and analyzed by Western blot using the pan anti-phosphotyrosine (pY) monoclonal antibody 4G10 (Millipore). This revealed bands migrating at the same molecular weight as Flag-Bio RUNX1 (Fig. 1A). To exclude the possibility that these bands represent RUNX1-associated proteins, rather than RUNX1 itself, the experiment was repeated under denaturing conditions in which Flag-Bio RUNX1 was streptavidin-precipitated in the presence of increasing concentrations of sodium dodecyl sulfate (SDS). As shown in Figure 1B, the anti-pY-reactive band was retained with as much as 5% SDS, whereas the noncovalently bound protein CBFβ was completely dissociated with as little as 0.5% SDS. Quantitation using sequential anti-pY immunoprecipitation (IP) followed by streptavidin immunoprecipitation (SA-IP) revealed that up to ~1%-10% of the nuclear Flag-Bio RUNX1 is tyrosine phosphorylated in uninduced L8057 cells (Fig. 1C). Endogenous RUNX1 was also found to be tyrosine phosphorylated in human MEG-01 megakaryoblastic cells and primary murine thymocytes (Fig. 1D).
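The sequential-IP quantitation above reduces to simple bookkeeping: the phosphorylated fraction is the Flag-Bio RUNX1 signal recovered in the anti-pY-bound fraction divided by the total signal in the bound plus non-bound fractions. A minimal sketch follows; the band intensities and the capture-efficiency correction are hypothetical illustrations, not the study's actual measurements.

# Minimal sketch: estimating the tyrosine-phosphorylated fraction of RUNX1
# from a sequential anti-pY IP -> SA-IP experiment. The intensities below are
# hypothetical densitometry values (arbitrary units), not data from the paper.

def phospho_fraction(bound, nonbound, capture_efficiency=1.0):
    """Fraction of RUNX1 that is tyrosine phosphorylated.

    bound / nonbound: RUNX1 signal in the anti-pY bound / non-bound fractions.
    capture_efficiency: assumed fraction of phospho-protein the anti-pY beads
    actually capture (values < 1 inflate the corrected estimate).
    """
    corrected_bound = bound / capture_efficiency
    return corrected_bound / (corrected_bound + nonbound)

if __name__ == "__main__":
    for eff in (1.0, 0.5):
        f = phospho_fraction(bound=3.0, nonbound=97.0, capture_efficiency=eff)
        print(f"capture efficiency {eff:.0%}: ~{100 * f:.1f}% phosphorylated")

With these example numbers the estimate spans roughly 3%-6% depending on the assumed capture efficiency, which is consistent with the ~1%-10% range quoted above.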
RUNX1 tyrosine phosphorylation levels decrease during phorbol ester-induced L8057 megakaryoblastic cell maturation
We next examined whether RUNX1 tyrosine phosphorylation levels change during TPA-induced maturation of L8057 megakaryoblastic cells. This revealed a dramatic loss of RUNX1 tyrosine phosphorylation (Fig. 1E, top panel). By 3 d of treatment, a time when many of the cells are undergoing endomitosis and cytoplasmic maturation (Ishida et al. 1993), tyrosine phosphorylation levels are barely detectable. Cell fractionation studies indicate that tyrosine-phosphorylated RUNX1 localizes to the nuclear compartment and does not translocate to the cytoplasm upon TPA treatment (Fig. 1E, bottom panel). This suggests that the loss of RUNX1 tyrosine phosphorylation is due to dephosphorylation. Consistent with this, brief treatment of the cells with the pan-tyrosine phosphatase inhibitor sodium orthovanadate (Na₃VO₄) markedly enhances RUNX1 tyrosine phosphorylation levels (Fig. 1F). Thus, RUNX1 tyrosine phosphorylation is dynamically regulated, and higher levels correlate with an immature cell state. Given that RUNX1 is required for normal Mk maturation (Ichikawa et al. 2004; Growney et al. 2005), this correlation suggests that tyrosine phosphorylation may inhibit RUNX1 function in megakaryopoiesis.

RUNX1 is phosphorylated by SFKs
Inhibition of SFKs has previously been shown to markedly enhance megakaryopoiesis (Lannutti et al. 2005, 2006; Mazharian et al. 2011). In combination with the findings above, we hypothesized that SFKs may be responsible for RUNX1 tyrosine phosphorylation. To test this, uninduced L8057 cells containing Flag-Bio RUNX1 were treated with the pan-SFK inhibitor PP2, and RUNX1 tyrosine phosphorylation levels were measured (Fig. 2A, top panel). In contrast to control cells treated with dimethyl sulfoxide (DMSO), the RUNX1 phosphotyrosine signal markedly diminished by 4 h and was nearly undetectable by 24 h. Similar findings were observed using Dasatinib, a clinically available SFK inhibitor (Fig. 2A, bottom panel). In vitro kinase assays using recombinant c-Src and Flag-Bio RUNX1 purified from TPA-induced L8057 cells show a dose-dependent increase in RUNX1 tyrosine phosphorylation (Fig. 2B). Confocal immunofluorescence microscopy studies indicate partially overlapping localization patterns for c-Src and RUNX1 in L8057 cells (Supplemental Fig. S3). Collectively, these data indicate that c-Src and/or possibly additional SFKs are responsible for RUNX1 tyrosine phosphorylation in megakaryocytic cells.

In order to examine the contribution of RUNX1 to the enhancement of megakaryopoiesis by SFK inhibition, wild-type, RUNX1 fl/fl, or RUNX1 fl/fl, Vav-Cre fetal liver cells from E13.5 murine embryos were cultured in thrombopoietin (TPO) and stem cell factor (SCF) with or without PP2. Consistent with earlier reports (Lannutti et al. 2005; Mazharian et al. 2011), PP2 treatment of wild-type or RUNX1 fl/fl fetal liver cultures dramatically enhanced the number, size, ploidy, and percentage of CD42b+ (mature) Mks in the culture (Fig. 2C,D). This correlated with a marked (12-fold) increase in mRNA levels of the direct RUNX1 target gene c-mpl (Huang et al. 2009) in CD41+ flow-sorted cells (Fig. 2C, right panel). In contrast, treatment of RUNX1 fl/fl, Vav-Cre fetal liver cells, which lack RUNX1, failed to increase the number of large, polyploid, acetylcholinesterase (AChE)-positive cells (Mks) or the percentage of CD42b+ cells in the culture (Fig. 2D).
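Fold changes such as the 12-fold increase in c-mpl mRNA are conventionally derived from qRT-PCR Ct values by the 2^(-ddCt) method. The paper's qRT-PCR section does not spell out the calculation, so the sketch below is an assumption that the standard Livak approach applies, with hypothetical Ct values chosen only to reproduce a ~12-fold result.

# Minimal sketch of the standard 2^(-ddCt) relative-expression calculation
# (Livak method), assuming this is how fold changes such as the ~12-fold
# c-mpl induction would be derived. All Ct values below are hypothetical.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    dct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

if __name__ == "__main__":
    # c-mpl vs. a housekeeping gene, PP2-treated vs. DMSO control (hypothetical)
    fc = fold_change(ct_target_treated=22.4, ct_ref_treated=18.0,
                     ct_target_control=26.0, ct_ref_control=18.0)
    print(f"c-mpl fold change: {fc:.1f}x")   # ~12x with these example Cts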
Thus, RUNX1 is required for the enhanced megakaryopoiesis observed with SFK inhibition.

RUNX1 is tyrosine phosphorylated in its NRDB and ID domains
To examine the functional role of RUNX1 tyrosine phosphorylation more directly, we next mapped the phosphorylated tyrosine residues.

[Figure 1 legend, continued: (B) Western blot of the purified material is shown for pY, RUNX1, and CBFβ. (C) Quantitation of RUNX1 tyrosine phosphorylation levels. Nuclear extracts from uninduced L8057 cells containing Flag-Bio RUNX1 were incubated with α-pY antibody-conjugated agarose beads. After washing, the bound material was eluted with excess phenyl phosphate. SA-IP was performed on both bound and nonbound fractions and examined by anti-pY and Flag Western blot; 0.5% of the input is shown. (D) Tyrosine phosphorylation of endogenous RUNX1 from human MEG-01 cells and primary murine thymocytes. Nuclear extracts from MEG-01 cells or whole-cell lysates from primary thymocytes of 6-wk-old C57BL/6 mice were incubated with α-pY-bound beads. After washing, the bound material was eluted with 100 mM phenyl phosphate and examined by Western blot for RUNX1 and Brg1 (negative control). For MEG-01, 1.25% of the input is shown (25 mg of nuclear extract), and 0.5% of the input is shown for primary thymocytes (90 mg of whole-cell lysate). (E) Loss of RUNX1 tyrosine phosphorylation upon TPA-induced maturation of L8057 cells. SA-IP and α-pY Western blot of Flag-Bio RUNX1 from nuclear extracts or cytoplasmic fractions of L8057 cells treated with 50 nM TPA for the indicated number of days; 1% of the input is shown. (F) Enhanced RUNX1 tyrosine phosphorylation with inhibition of tyrosine phosphatases. SA-IP and α-pY Western blot of Flag-Bio RUNX1 from nuclear extracts of uninduced L8057 cells treated with 1.25 mM Na₃VO₄ for the indicated time.]

Murine RUNX1 (isoform 3) contains 15 tyrosine residues, 10 of which are highly conserved among RUNX family members (RUNX1, RUNX2, and RUNX3) and across multiple species (Fig. 3A; Supplemental Fig. S4). Two complementary approaches were taken to identify the phosphorylated residues. In the first approach, Flag-Bio RUNX1 was purified from uninduced, Na₃VO₄-treated L8057 cells by SA affinity chromatography and examined by mass spectrometry for phosphorylation (Fig. 3B; Supplemental Fig. S5). This identified Tyr 260, located within the NRDB domain, as a phosphorylation target. Four peptides encompassing the three tyrosine residues within the AD and the four tyrosine residues within the ID, respectively, were also found to be phosphorylated, but the low fragment ion intensities precluded unambiguous assignment of the phosphorylated residues within this region (Supplemental Fig. S5). As a second approach, L8057 cell lines were generated that stably express Flag-Bio RUNX1 mutants in which tyrosine residues were replaced by phenylalanine. Six mutation groups were initially constructed based on the location of the tyrosine residues (Fig. 3A). The mutants were assayed for tyrosine phosphorylation by SA-IP followed by anti-pY Western blot (Fig. 3C, left panel). Mutation of both tyrosine residues within the runt domain (group 1) showed no effect on the RUNX1 tyrosine signal. The group containing Tyr 260 (group 2) showed significantly reduced pY levels, consistent with the mass spectrometry results. However, some residual signal was evident even with mutation of all three tyrosine residues within this group (Y254, Y258, and Y260), suggesting that additional residues are also phosphorylated.
Group 3 and 4 mutants, involving residues within the AD, showed only minor reductions in signal. However, group 5, which contains the four tyrosine residues within the ID, showed significantly reduced tyrosine phosphorylation levels. Group 6, which corresponds to the C-terminal "VWRPY" Groucho repressor-interacting domain, showed no change (data not shown). Additional mutants were generated to further dissect the phosphorylated tyrosine residues within these regions. These assays were performed in the presence of Na₃VO₄ to enhance assay sensitivity. As shown in Figure 3C (middle panel), mutation of Y258 and Y260 within group 2 reduced the total RUNX1 tyrosine phosphorylation level compared with wild-type RUNX1 or mutant group 4. However, the addition of Y254 further reduced the levels. Likewise, mutation of Y379 and Y386 within group 5 reduced the overall tyrosine phosphorylation levels, but the addition of Y375 and Y378 further decreased the levels. Last, we generated a mutant with phenylalanine substitutions of Y260 and each of the four ID tyrosine residues (the RUNX1 Y260F, Y375F, Y378F, Y379F, Y386F mutant, hereafter referred to as "RUNX1 5F"). Three independent L8057 clones expressing this mutant showed RUNX1 tyrosine phosphorylation levels close to background (Fig. 3C, right panel). RUNX1 5F also failed to be phosphorylated by c-Src in vitro (Fig. 3D). These findings indicate that multiple RUNX1 tyrosine residues are phosphorylated under the conditions tested and that they reside predominantly within the NRDB and ID regions.

RUNX1 tyrosine phosphorylation negatively affects RUNX1 function in Mk maturation
To test the functional significance of RUNX1 tyrosine phosphorylation, we began by retrovirally transducing L8057 cells with either wild-type RUNX1, the nonphosphorylatable RUNX1 5F mutant, or a mutant containing substitutions of each of the five phosphorylated tyrosine residues with aspartic acid to mimic constitutive phosphorylation (RUNX1 Y260D, Y375D, Y378D, Y379D, Y386D, hereafter referred to as "RUNX1 5D"). The retroviral vectors also express enhanced green fluorescent protein (GFP) as a bicistronic mRNA. After retroviral infection, GFP+ cells were flow-sorted and examined for RUNX1 protein levels. All of the constructs expressed RUNX1 at moderately higher levels than endogenous RUNX1, with RUNX1 5F somewhat lower than wild-type RUNX1 or RUNX1 5D (Fig. 4A, left panel). Despite the lower levels, RUNX1 5F significantly enhanced TPA-induced polyploidization compared with wild-type RUNX1 (Fig. 4A, right panel). RUNX1 5D inhibited polyploidization, indicating dominant-negative activity. Retroviral expression of RUNX1 5F in primary murine fetal liver cells also significantly increased the percentage of CD42b- and c-mpl-positive cells compared with wild-type RUNX1 and RUNX1 5D after culturing in TPO (Fig. 4B). To dissect the functional role of individual tyrosine residues, additional RUNX1 mutants were constructed containing phenylalanine substitutions of Y375, Y378, Y379, and Y386 ("RUNX1 4F") or substitutions of only Y379 and Y386 ("RUNX1 2F"). Both of these mutants also led to increased Mk maturation marker expression compared with wild-type RUNX1 when overexpressed in fetal liver-derived Mks (Supplemental Fig. S6). This suggests that Y379 and Y386 are key residues involved in RUNX1 regulation by tyrosine phosphorylation.
To examine the role of RUNX1 tyrosine phosphorylation in a more physiologic setting, whole bone marrow cells from wild-type C57BL/6 mice were transduced with the RUNX1 5F, RUNX1 5D, or control retroviral constructs. Some of the cells were transferred to liquid culture containing TPO. The remaining cells were sorted for GFP expression and transplanted into sublethally irradiated wild-type recipient mice (Fig. 4C). For the liquid cultures, GFP+ CD41+ cells were flow-sorted on day 5 of culture. RUNX1 5F produced GPIbα (CD42b) mRNA levels that were about twofold higher than wild-type RUNX1, despite being expressed at lower levels (Fig. 4D). The RUNX1 5D construct generated slightly lower levels compared with wild type. Analysis of the transplanted mice was limited by a marked loss of engraftment associated with RUNX1 overexpression, as has been previously reported for human RUNX1b (Tsuzuki et al. 2007) and RUNX1c (Challen and Goodell 2010). Interestingly, the impaired engraftment, as measured by loss of peripheral blood GFP+ cells, was significantly worse for the bone marrow donor cells transduced with RUNX1 5F compared with wild-type RUNX1 (Fig. 4E). The RUNX1 5D construct was not as severe as wild-type RUNX1, although it was still suppressive. Although the number of GFP+ cells in the bone marrow was quite low at 8 wk following transplant, the mean percentage of GFP+ CD41+ cells was 3.8-fold higher in the mice receiving RUNX1 5F compared with wild-type RUNX1 transduced cells (Fig. 4F). The RUNX1 5D recipient mice had a slightly lower percentage of GFP+ CD41+ cells.

We also examined potential dominant-negative activity of tyrosine phosphorylated RUNX1 on T-cell differentiation (Fig. 5B). For these experiments, bone marrow cells from C57BL/6 wild-type mice were retrovirally transduced and injected into sublethally irradiated wild-type recipient mice. After 2 wk, the RUNX1 5F construct again gave rise to considerably more GFP+ CD8 SP cells than wild-type RUNX1 (6.0% ± 4.0% vs. 1.2% ± 1.2%) in the spleen, despite being expressed at equivalent levels. RUNX1 5D was similar to the empty vector. After 8 wk, mice receiving the empty vector transduced cells had splenic CD4 to CD8 ratios of ~2-2.5 and few CD4 CD8 double-positive (DP) cells. Overexpression of wild-type RUNX1 and RUNX1 5F led to a decrease in the CD4 to CD8 SP ratio (CD4:CD8 ratio ~0.1-0.3). In contrast, RUNX1 5D failed to skew the population toward CD8 SP cells (CD4:CD8 ratio 3.2) and instead led to a marked increase of CD4 CD8 DP cells, with cells expressing a continuum of CD4 and CD8 levels. Similar findings were observed in peripheral blood mononuclear cells. This suggests that overexpression of RUNX1 5D blocked T-cell maturation and/or derepressed CD4 expression. Thus, as in Mk maturation, tyrosine phosphorylation negatively affects RUNX1 activity in T-cell CD8 SP cell differentiation and acts in a dominant fashion.

Shp2 contributes to RUNX1 dephosphorylation in Mks and thymocytes
Given our identification of Shp2 in RUNX1-containing complexes, we next investigated whether Shp2 contributes to RUNX1 dephosphorylation. Indirect immunofluorescence microscopy demonstrated partial nuclear Shp2 localization in TPA-induced L8057 cells and primary fetal liver-derived murine Mks (Supplemental Fig. S7). This is consistent with earlier reports of partial Shp2 nuclear localization (Yuan et al. 2003; Xu et al. 2005).
We next validated the interaction between RUNX1 and Shp2 by independent SA pull-down experiments from TPA-induced L8057 cells containing Flag-Bio RUNX1, reverse coimmunoprecipitation (co-IP) assays of endogenous Shp2 and Flag-Bio RUNX1, and GST pull-down assays of recombinant RUNX1 and Shp2 (Fig. 6A-C). To test whether Shp2 contributes to RUNX1 dephosphorylation, lentiviral shRNA knockdown was performed in L8057 cells, and RUNX1 tyrosine phosphorylation levels were measured. As shown in Figure 6D, two independent shRNA constructs significantly reduced Shp2 protein levels but did not affect the related protein Shp1. This led to an increase in RUNX1 tyrosine phosphorylation levels compared with the empty vector, consistent with a contributory role of Shp2 in RUNX1 dephosphorylation.

To examine the role of Shp2 in megakaryopoiesis in vivo, we generated Shp2 fl/fl, Vav-Cre and Shp2 fl/fl, PF4-Cre (Mk-specific deletion) conditional knockout mice. Similar to RUNX1 fl/fl, Vav-Cre mice, these mice are significantly thrombocytopenic compared with non-Cre transgenic littermates (Fig. 6E). Heterozygous animals do not have statistically significant platelet count differences compared with wild type. Following transient immune-mediated thrombocytopenia, Shp2 fl/fl, PF4-Cre mice had a significant delay in platelet recovery compared with control Shp2 fl/fl mice at 72 h following antibody injection (Fig. 6F). Many of the Mks from Shp2 fl/fl, PF4-Cre mouse bone marrow have a smaller size, reduced cytoplasm, and less lobulated nuclei than controls and partially resemble the micromegakaryocytes seen with RUNX1 deficiency (Fig. 6G; Ichikawa et al. 2004; Growney et al. 2005).

[Figure 6 legend fragment: ...(Long and Williams 1981). (H) RUNX1 tyrosine phosphorylation levels in thymocytes from 6-wk-old Shp2 fl/fl, Vav-Cre or Shp2 fl/fl littermates. Thymocyte extracts were incubated with anti-pY antibody-coupled beads. After washing, the bound beads were eluted using 100 mM phenyl phosphate. The eluates were examined by Western blot for RUNX1, CBFβ (negative control) and YY1 (negative control); 0.5% of the input is shown (50 mg of whole-cell lysate).]

Prior work has also shown that both RUNX1 and Shp2 deficiency lead to impaired T-cell differentiation at the DN3 to DN4 stage using Lck-directed conditional allele excision (Taniuchi et al. 2002; Nguyen et al. 2006). In order to determine the role of Shp2 in RUNX1 tyrosine dephosphorylation in vivo, we examined RUNX1 tyrosine phosphorylation levels in thymocytes from 6-wk-old Shp2 fl/fl, Vav-Cre mice. As shown in Figure 6H, loss of Shp2 led to a marked increase in RUNX1 tyrosine phosphorylation levels in primary thymocytes. We conclude that Shp2 contributes to RUNX1 tyrosine dephosphorylation in Mks and thymocytes.

RUNX1 tyrosine phosphorylation alters key protein-protein interactions
We next explored the mechanisms whereby tyrosine phosphorylation inhibits RUNX1 function in megakaryopoiesis. Cell fractionation studies show that tyrosine phosphorylated RUNX1 properly localizes to the cell nucleus (Fig. 1E), and confocal immunofluorescence microscopy shows no significant change in bulk RUNX1 subcellular localization after treatment of cells with PP2 or Na₃VO₄ (Supplemental Fig. S3). RUNX1 makes a number of important protein-protein interactions that modulate its function. We and others have shown that RUNX1 synergistically interacts with the transcription factors GATA1 and Fli1 during terminal Mk maturation (Elagib and Goldfarb 2007; Huang et al. 2009).
RUNX1 also physically and functionally associates with the SWI/SNF chromatin remodeling complex (Bakshi et al. 2010; Yu et al. 2012), and we observe that this interaction diminishes during TPA-induced L8057 cell maturation (Fig. 7A). In order to determine whether RUNX1 tyrosine phosphorylation correlates with altered interactions with these factors, TPA-induced L8057 cells containing Flag-Bio RUNX1 were treated with or without Na₃VO₄ for 15 min. As shown in Figure 7B, the addition of Na₃VO₄ restores RUNX1 tyrosine phosphorylation in TPA-treated cells. This correlates with a dramatic decrease in GATA1 binding and slight decreases in Fli1, Shp2, and CBFβ binding (Fig. 7C). In contrast, interactions with the SWI/SNF core components Brg1 and Snf5 are significantly enhanced. Thus, tyrosine phosphorylation may inhibit RUNX1 function in part by altering key protein-protein interactions.

Discussion
In this study, we uncovered a direct regulatory role of SFK-mediated tyrosine phosphorylation on RUNX1 activity in Mk and T-cell differentiation. Our data support an inhibitory function in these lineages, which may be in part due to altered RUNX1 protein-protein interactions. Moreover, we show that RUNX1 tyrosine phosphorylation is dynamically regulated by tyrosine phosphatases such as Shp2.

At least six SFKs, including Fyn, Lyn, Fgr, Hck, Src, and Yes, are expressed in primary Mks (Lannutti et al. 2003). Our in vitro data suggest that Src itself is involved in RUNX1 tyrosine phosphorylation. However, our data do not exclude contributions of additional SFKs. Fyn and Lyn are activated upon TPO receptor signaling (Lannutti et al. 2003), and Lyn-null mice have markedly enhanced megakaryopoiesis (Lannutti et al. 2006), making these potential additional candidates. In T cells, the SFK family member Lck positively regulates CD4 lineage development, while Lck deficiency directs commitment to the CD8 lineage (Zamoyska et al. 2003). Our data showing enhanced CD8 SP cell differentiation with the nonphosphorylatable RUNX1 mutant (RUNX1 5F) make Lck a potential candidate SFK involved in RUNX1 tyrosine phosphorylation in the T-cell lineage.

Shp2 is the first characterized proto-oncogenic tyrosine phosphatase. It was previously shown to dephosphorylate the transcription factor HOXA10 within the nuclear compartment (Lindsey et al. 2007). In the current study, we identified Shp2 as a component of RUNX1 multiprotein complexes in nuclear extracts from megakaryocytic cells and provided evidence that it plays a role in RUNX1 tyrosine dephosphorylation in cellular assays and in vivo. Genetic deletion of Shp2 in Mks alters Mk development and reduces peripheral blood platelet number, similar to RUNX1 deficiency. Likewise, both Shp2 and RUNX1 loss lead to blocked T-cell differentiation at the same maturational stage (Taniuchi et al. 2002; Nguyen et al. 2006). Conversely, oncogenic (activating) Shp2 mutations lead to a CD8 SP T-cell lymphoproliferative disorder in mice (Mohi et al. 2005). These effects mirror our findings of enhanced CD8 T-cell development with the nonphosphorylatable RUNX1 5F construct and impaired CD8 SP T-cell development with the phosphomimetic RUNX1 5D (Fig. 5). Collectively, our findings are consistent with an SFK-RUNX1-Shp2 regulatory axis in megakaryopoiesis and CD8 T-cell development (Fig. 7D). In this model, SFKs maintain RUNX1 in an inactive state.
Shp2 and perhaps additional nonreceptor tyrosine phosphatases then activate RUNX1 by removing the phosphate groups and modulating RUNX1 protein-protein interactions. Despite the importance of activating Shp2 mutations in human MPNs and leukemia, the key physiologic Shp2 substrates remain unclear. In this study, we identified Shp2 in a nonbiased screen of RUNX1-associated factors and showed that it binds directly to RUNX1. Moreover, we demonstrated that loss of Shp2 leads to increased RUNX1 tyrosine phosphorylation levels in vivo. We propose that RUNX1 is an important physiologic Shp2 substrate.

Recently, Goh et al. (2010) reported that Src phosphorylates the RUNX family member RUNX3 in human gastrointestinal cell lines. Consistent with our findings, they show that tyrosine phosphorylation inhibits RUNX3 activity. However, the mechanisms may be different. In their cell system, tyrosine phosphorylation results in cytoplasmic sequestration of RUNX3, whereas we found normal nuclear localization of tyrosine phosphorylated RUNX1 in megakaryocytic cells. It is possible that SFK-mediated phosphorylation of RUNX family transcription factors has evolved as a general means to negatively modulate their function, but by different mechanisms dependent on family member and/or cell type.

A number of RUNX1 mutant molecules were used in our study. While we cannot exclude the possibility that the mutations have nonspecific effects through protein misfolding, this seems unlikely since (1) mutants with the phenylalanine substitutions (RUNX1 5F) produced even greater CD8 T-lymphocyte and Mk differentiation activity than the wild-type molecule, and the aspartic acid substitutions (RUNX1 5D) had the opposite effect; (2) we observed similar effects on Mk development when we mutated only two residues of RUNX1 (RUNX1 2F) (Supplemental Fig. S6); (3) the inhibitor studies, which avoid the use of mutant proteins, produced parallel results; and (4) the results produced by the mutants mirror the effects seen with Shp2 loss-of-function and gain-of-function mutations.

Only up to ~1%-10% of the nuclear pool of RUNX1 molecules is tyrosine phosphorylated under the conditions we tested. Although this is in the same general range as observed for tyrosine phosphorylated cytoplasmic proteins, it raises the question of how modification of this small subset of RUNX1 molecules leads to dominant effects when assayed in whole-cell or animal studies. It is possible that only a small fraction of the RUNX1 pool is normally functional within the nucleus. Stein and colleagues (Zaidi et al. 2004) showed that a subpopulation of RUNX1 and RUNX2 molecules are normally targeted to the nuclear matrix and that this is essential for their activity. Interestingly, the nuclear matrix attachment site resides within the transactivation domain and is flanked by the phosphorylated tyrosine residues that we identified. Thus, the effects of tyrosine phosphorylation may be confined to the pool that gets targeted to the nuclear matrix. Further studies will be required to investigate this possibility.

SFKs are typically activated upon integrin and cytokine receptor signaling and are influenced by cell-cell and cell-matrix contacts. Mk maturation is spatially compartmentalized within the bone marrow. Mk progenitors leave the osteoblastic niche and migrate in an immature state to vascular sinusoids, where they make physical contacts with vascular sinusoidal endothelial cells.
It is in this location that terminal Mk maturation occurs, allowing the proplatelets formed by the Mks direct access to the vascular space. T-cell development is also spatially compartmentalized within the thymus and is influenced by cell-cell and cell-matrix interactions. Thus, it is possible that the actions of SFKs on RUNX1 help coordinate spatial cues with Mk and T-cell differentiation.

RUNX1 deficiency is an early initiating event in human leukemia and MDS and typically results from haploinsufficiency or the generation of dominant-negative-acting molecules. Yet, in many of these cases, the wild-type allele is left intact. Our results suggest that SFK inhibition may be a useful means to enhance the functional activity of the residual wild-type RUNX1 protein. A number of SFK inhibitors are now clinically available, including Dasatinib, which we have shown to inhibit RUNX1 tyrosine phosphorylation (Fig. 2). Future studies will be aimed at investigating the clinical utility of SFK inhibitors in the treatment of RUNX1 deficiency disorders.

In summary, our data demonstrate direct tyrosine phosphorylation of RUNX1 and its functional role in Mk/T-lymphocyte differentiation. Moreover, we uncovered a RUNX1-centered regulatory axis involving SFK and Shp2 cell signaling. These findings help connect key cell signaling pathways to master hematopoietic transcription factor control in complex microenvironments.

Materials
All chemicals were purchased from Sigma unless specified otherwise. See Supplemental Table S2 for the sources of all antibodies.

Plasmid construction
The cDNA encoding murine RUNX1 (isoform 3, a gift from Nancy Speck) was cloned into the MSCV-IRES-GFP vector for retroviral expression experiments. Phosphorylation point mutants were generated using the Qiagen site-directed mutagenesis kit following the manufacturer's instructions. The associated online primer design software (Qiagen) was used to design primers. Shp2 shRNA constructs were designed to target murine Shp2 using RNAi Codex (http://cancan.cshl.edu/cgi-bin/Codex/Codex.cgi). See the Supplemental Material for hairpin sequences.

Cell culture and transfection
Cells were cultured in 5% CO₂ at 37°C. Culture medium was supplemented with 100 U/mL penicillin/streptomycin (Pen/Strep) and 2 mM L-glutamine. MEG-01 and L8057 cells were cultured as previously described (Ishida et al. 1993) and induced to differentiate with 50 nM TPA (Sigma). Generation and culturing of L8057 cells stably expressing Flag-Bio RUNX1 were as previously described (Huang et al. 2009). PLAT-E cells and primary murine fetal liver cells were cultured in Dulbecco's modified Eagle's high-glucose medium (DMEM) supplemented with 10% fetal calf serum (FCS). Murine bone marrow cells were cultured in Iscove's modified Dulbecco's medium (IMDM) supplemented with 10% FCS. PLAT-E cells were transfected using FuGene 6 reagent (Roche) according to the manufacturer's instructions.

RUNX1 multiprotein complex purification and proteomic analysis
Methods for Flag and SA purification of Flag-Bio RUNX1-containing multiprotein complexes and identification of associated proteins by microcapillary liquid chromatography and tandem mass spectrometry of tryptic peptides have been previously described (Huang et al. 2009). See the Supplemental Material for details.
Protein-protein interaction experiments
Several different protein pull-down assays were performed in this study. (1) Endogenous Shp2 pull-down assays: these were performed using an anti-Shp2 antibody (Santa Cruz Biotechnology, C-18) and the Pierce co-IP kit following the manual instructions. (2) Small-scale SA purification of Flag-Bio RUNX1: this was performed as previously described (Huang et al. 2009). (3) GST pull-down experiments: GST or GST-RUNX1 were produced in bacteria and quantified as previously described (Huang et al. 2009). In vitro translation of Shp2 or CBFβ with incorporation of ³⁵S-methionine was performed using the TNT Coupled Reticulocyte Lysate system (Promega) following the manufacturer's instructions. One microgram of GST or GST fusion protein and 15 µL of ³⁵S-Shp2 or ³⁵S-CBFβ were incubated with glutathione sepharose 4B beads (GE Healthcare) in HEMGT-150 buffer, rotated end-over-end overnight at 4°C. The beads were washed four times for 15 min each with HEMGT-150 buffer, boiled in SDS sample buffer, and loaded onto an SDS-PAGE gel. The gels were then stained with colloidal Coomassie blue, dried in a gel drier (Bio-Rad), and exposed to Kodak BioMax MS film. (4) Purification of biotinylated RUNX1 under denaturing conditions: nuclear extracts from L8057 cells stably expressing Flag-Bio RUNX1 (or mutants) were incubated with SA-coupled agarose beads for 1 h at room temperature (small scale) or 2 h at room temperature (large scale) with up to 5% SDS. For all SA-IP assays following the initial titration, 3% SDS was used. SA-agarose beads were then washed four times for 15 min each with the same percentage of SDS used in the IP. Flag-Bio RUNX1 was eluted by heating for 5 min at 100°C. One millimolar Na₃VO₄ and 1 mM NaF were added to the lysis and IP buffers to prevent loss of tyrosine phosphorylation during this procedure and the following procedure. (5) Sequential IP for purification of tyrosine phosphorylated RUNX1: an anti-phosphotyrosine purification kit was purchased from Millipore. Enrichment of phosphotyrosine proteins was performed following the manual instructions. Tyrosine phosphorylated Flag-Bio RUNX1 was purified from the eluted phosphotyrosine proteins using SA-agarose beads in IP buffer containing 3% SDS. For the MEG-01 and primary thymocyte experiments, 10 mg of nuclear extract protein (MEG-01) or whole-cell lysates (thymocytes from 6-wk-old C57BL/6 mice, prepared using 1% NP-40 whole-cell lysis buffer) were incubated with anti-pY antibody-conjugated agarose beads (Millipore) overnight at 4°C. After washing in IP buffer four times for 15 min each, the bound proteins were eluted by adding 100 mM phenyl phosphate and concentrated by trichloroacetic acid (TCA) precipitation. The enriched material was boiled with SDS loading buffer for SDS-PAGE and Western blot. For the experiments examining RUNX1 tyrosine phosphorylation in thymocytes from the Shp2 fl/fl and Shp2 fl/fl, Vav-Cre mice, 17 mg of whole-cell lysate protein was used for the anti-pY IP.

Retroviral infection of primary Mks
Retroviral particle production and cell infection followed standard procedures. See the Supplemental Material for details.

Bone marrow transplantation
Bone marrow was harvested from 8- to 12-wk-old RUNX1 fl/fl, Vav-Cre+ or wild-type C57BL/6 mice by crushing the femurs, tibias, ilia, and spine in IMDM medium containing 2% FCS and 2% penicillin/streptomycin and passing the suspension through a 100-µm cell strainer.
Single-cell suspensions of whole bone marrow were retrovirally infected as described above for fetal liver Mks, except that a cocktail of recombinant cytokines (50 ng/mL TPO, 100 ng/mL SCF, 20 ng/mL IL-3, 50 ng/mL Flt3 ligand) and concentrated retroviral supernatants were used. Twenty-four hours following infection, GFP+ cells were sorted by flow cytometry and washed with phosphate-buffered saline (PBS) twice, and the same number of cells was injected (4.3 × 10⁵ or 5.6 × 10⁵ cells, depending on the experiment) per animal via the retro-orbital venous plexus of 8- to 10-wk-old lethally (10 Gy divided into two fractions) or sublethally (7.2 Gy) irradiated recipient C57BL/6 mice. In the experiments using lethal irradiation, 2 million CD45.1 spleen cells from wild-type donor mice were coinjected as supporting cells. Lethally irradiated mice were housed in sterilized cages and fed with autoclaved food and sulfamethoxazole-treated water. Peripheral blood counts were obtained weekly beginning 4 wk following transplantation using a Drew HemaVet 950FS animal automated blood cell analyzer. Mice were sacrificed at 2 or 8 wk after transplantation for spleen and bone marrow hematopoietic analysis, respectively.

Transient immune-mediated thrombocytopenia
Two micrograms per gram body weight of anti-GPIbα antibody (Emfret) or normal IgG was injected through the mouse retro-orbital venous plexus. Peripheral blood platelet counts were obtained 24 h prior to injection and every 24 h after injection up to 144 h. Additional injected mice were euthanized for bone marrow and spleen histologic and immunohistochemistry analysis at 72 h after injection.

Manipulating RUNX1 tyrosine phosphorylation by PP2 or Na₃VO₄
L8057 cells expressing Flag-Bio RUNX1 were incubated with 10 µM PP2 or Dasatinib for 4 or 24 h, or with 1.25 mM Na₃VO₄ for 15 min, or as indicated in the figures. Control cells were incubated with equivalent volumes of DMSO. Flag-Bio RUNX1 was purified by SA affinity chromatography in the presence of 3% SDS as described in Protein-Protein Interaction Experiments. The anti-phosphotyrosine antibody 4G10 (Millipore) was used to detect tyrosine-phosphorylated proteins in the Western blot assays.

In vitro kinase assays
L8057 cells stably expressing Flag-Bio RUNX1 or Flag-Bio RUNX1 5F were induced with 50 nM TPA for 3 d to generate non-tyrosine-phosphorylated RUNX1. SA pull-down was performed from crude nuclear extracts as described above. SA beads containing the bound material were treated with recombinant c-Src using an in vitro kinase kit (Cell Signaling) following the manual instructions. After the reaction, the beads were washed with 3% SDS IP buffer four times for 15 min each to remove c-Src and other noncovalently bound proteins from the beads. The beads were then washed briefly with IP buffer and PBS three times each to remove any residual SDS. The ELISA signal was read at 450 and 650 nm.

Analysis of RUNX1 tyrosine phosphorylation by mass spectrometry
L8057 cells containing Flag-Bio RUNX1 were pretreated with 1.25 mM Na₃VO₄ for 15 min. SA-IP was performed from crude nuclear extract in the presence of 3% SDS. Flag-Bio RUNX1 was eluted by heat denaturation and separated by SDS-PAGE. After staining with colloidal Coomassie blue, bands corresponding to Flag-Bio RUNX1 were cut into ~1-mm³ pieces and digested with sequencing-grade trypsin (Promega) at a concentration of 12.5 ng/µL in 100 mM ammonium bicarbonate overnight at 37°C.
The peptides were extracted with 100 mM ammonium bicarbonate and acetonitrile and then lyophilized. See the Supplemental Material for details of the mass spectrometry analysis.

Flow cytometry
Standard flow cytometry procedures were followed. See the Supplemental Material for details.

Immunofluorescence staining and confocal microscopy
L8057 cells were cytospun after treatment with 50 nM TPA for 5 d, with 10 µM PP2 for 20 h, or with 1.25 mM Na₃VO₄ for 15 min. DMSO-treated and nontreated cells served as controls. The cells were blocked with 5% goat serum overnight at 4°C. Anti-RUNX1 (1:200 dilution; Abcam) and anti-c-Src (1:100 dilution; Santa Cruz Biotechnology) antibodies were incubated at room temperature for 1 h. After washing with PBS, the cells were incubated with anti-rabbit (Alexa Fluor 594) and anti-mouse (Alexa Fluor 488) secondary antibodies (1:5000 dilution) for 1 h at room temperature, washed with PBS, and stained with DAPI. Immunofluorescence images were obtained using an LSM700 confocal microscope and analyzed with ImageJ software.

Quantitative RT-PCR (qRT-PCR)
Total mRNA was extracted from the cells using the Qiagen micro- or mini-RNA extraction kit following the manual instructions. Reverse transcription was performed using the Bio-Rad Reverse Transcription kit, and quantitative PCR was performed in the presence of SYBR Green dye (Invitrogen) using a MyIQ real-time PCR instrument (Bio-Rad). See the Supplemental Material for PCR primer sequences.

Histology and immunohistochemistry staining
Seventy-two hours after anti-GPIbα antibody injection, Shp2fl/fl or Shp2fl/fl, PF4-Cre mouse femurs were fixed in 4% paraformaldehyde (PFA). Paraffin-embedded slides were prepared and hematoxylin and eosin staining was performed at the Dana-Farber/Harvard Cancer Center Research Pathology Core facility. A standard immunohistochemistry protocol was followed for von Willebrand factor (vWF) staining. In brief, antigens were retrieved by incubating the sections in 20 µg/mL proteinase K solution in TE buffer for 30 min at 37°C. Antigens were bound using a 1/500 dilution of vWF antibody (Dako), followed by detection using ABC-HRP and DAB (Vector Laboratories) according to the manufacturer's protocols. Nine photomicrographs of each sample were taken randomly. vWF-positive Mks were counted as small immature (≤17 µm in diameter) or large mature (>17 µm in diameter) Mks (Long and Williams 1981).

Statistical analysis
All data are expressed as the mean ± standard error of the mean (SEM). Statistical significance was determined using the Student's t-test (one-tailed). Differences were considered significant when the P-value was <0.05.
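The qRT-PCR paragraph above specifies SYBR Green detection but not the quantification scheme. Purely as an illustrative sketch, the snippet below computes relative expression by the common 2^(-ΔΔCt) method; the method itself, the reference gene, and the Ct values are assumptions and are not taken from the paper.

```python
# Hypothetical 2^(-delta-delta-Ct) calculation for SYBR Green qPCR data.
# The quantification method and all values are illustrative assumptions;
# the paper does not state how its qPCR data were analyzed.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Fold change of a target gene in a treated sample relative to a
    control sample, normalized to a reference gene (e.g., Gapdh)."""
    ddct = ((ct_target_treated - ct_ref_treated)
            - (ct_target_control - ct_ref_control))
    return 2 ** (-ddct)

# Example with made-up Ct values: the target amplifies ~2 cycles earlier
# (relative to the reference) in the treated sample.
print(fold_change(24.1, 18.0, 26.3, 18.1))  # ~4.3-fold up-regulation
```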
Impact of Lavender Herbal Tea on Sleep Quality in Elderly Patients with Poor Sleep Quality: A Randomized Study

Objective: Aromatherapy has been used as a complementary alternative therapy in elderly adults with poor sleep quality. Lavender has sedative, anxiolytic, and analgesic properties. This study aimed to evaluate the effect of lavender herbal tea in different doses on the sleep quality of elderly people. Material-Method: This study has been designed as a prospective, randomized study with a two-arm parallel design. There were 94 patients aged between 65 and 75 years with a Richards-Campbell Sleep Questionnaire (RCSQ) score of <75. Patients were sequentially randomized into two groups using 1 g and 2 g lavender tea bags for three months. Demographic and clinical characteristics were recorded. The RCSQ was administered initially and during the 1st-month and 3rd-month follow-up visits. Results: There was no significant difference between the groups in demographic and clinical characteristics (p>0.05) or in terms of baseline RCSQ scores (p=0.685). However, the 1st-month and 3rd-month RCSQ scores in patients who used 2 g lavender tea bags were significantly higher than in those who used 1 g herbal tea bags (p<0.001 and p<0.001, respectively). Additionally, the 1st-month and 3rd-month RCSQ scores were significantly higher than the baseline RCSQ scores in both groups (p<0.05). Conclusion: Our findings revealed that lavender herbal tea improved sleep quality in elderly patients with sleep problems. Consumption of the higher dose of lavender tea (2 g vs. 1 g) resulted in significantly higher RCSQ scores. Therefore, the use of lavender may be recommended for individuals with sleep problems in the form of herbal tea preparations.

INTRODUCTION
Elderly adults with chronic insomnia usually suffer from poor sleep quality 1,2. There is a direct correlation between age and the prevalence of sleep problems 1. Deterioration of sleep quality leads to several physical and psychological problems, and different types of treatment modalities, including behavioral and cognitive therapies, sleep hygiene practices, and pharmacological therapy, have been recommended to overcome these problems 1,2. Traditional and herbal therapies have recently gained popularity in this regard 1,3. Aromatherapy has been used as a complementary alternative therapy to manage stress, muscle spasms, and sleep disturbances 4,5. Essential oils of aromatic plants can be produced via the steam distillation of their flower heads and leaves 6. Inhalation, massaging, and bathing in the extracted essential oils are the most frequently used aromatherapy methods 4. Lavender has been used for its sedative, anxiolytic, and analgesic properties 2,7-11. Previous studies reported improvements in mood and sleeping problems after lavender use 12,13. Lavender is also used to treat depression and anxiety 14. There are different approaches regarding lavender use, utilizing respiratory, gastrointestinal, or cutaneous routes 7. The use of its inhalable forms has reportedly improved sleep quality and increased sleep duration 6. The use of lavender was also associated with a reduction in depression and insomnia, relief of anxiety, and calming of the mind 2,7. It is believed that herbal tea exerts its effects in several psychological and physiological ways 7. The scent of lavender herbal tea activates the limbic system, promoting the release of different types of neurotransmitters such as enkephalin, endorphin, noradrenaline, and serotonin. These neurotransmitters may
trigger changes in human emotions 7. On the other hand, it has been speculated that the risk of neurotoxicity and hepatic, renal, and cutaneous toxicity during the application of an essential oil via inhalation, massaging, and bathing can be higher than the risks associated with the consumption of herbal tea of the same aromatic plant 15. In other words, consumption of herbal tea of any aromatic plant may have lower risks of side effects and allergic reactions and milder effects overall, compared to the methods of aromatherapy involving the administration of the essential oil of the same aromatic plant 2. The relationship between lavender aromatherapy and sleep quality has been studied in diverse populations 4,16. However, there is still some controversy regarding the efficiency of herbal tea in elderly people with sleep disturbances. In this context, the objective of this study is to evaluate the effect of the consumption of lavender herbal tea in different doses on the sleep quality of elderly people.

Research design
This study has been designed as a prospective, randomized study with a two-arm parallel design to investigate the effect of the consumption of lavender tea in two different dosages on the sleep quality of elderly people. The protocol of this study was approved by the Ethical Committee of Istanbul Medipol University (date: 26.10.2021, no: 1046). This study was carried out in accordance with the principles set forth in the Declaration of Helsinki. Informed consent of the patients who participated in the study was obtained in advance.

Population and sample
The study population comprised the patients admitted to the outpatient clinics of Internal Medicine and Physical Therapy and Rehabilitation at Istanbul Medipol University Hospital. The study sample consisted of patients who were a) aged between 65 and 75 years and literate, b) without communication problems, c) with a Richards-Campbell Sleep Questionnaire (RCSQ) score of <75, and d) with normal cognitive functions. Patients who were allergic to any herbal tea or lavender, had severe sleep disorders and were receiving treatment for these disorders, were using anti-depressive or anti-anxiety drugs, had anemia requiring parenteral treatment, or had severe comorbidities, including coronary artery disease, congestive heart failure, hypo- or hyperthyroidism, and alcohol abuse, were excluded from the study. The RCSQ was used to identify the patients with sleep problems. The RCSQ is a diagnostic tool used to evaluate the quality of sleep. Richards developed this five-item self-report questionnaire in 1987 17. The questionnaire initially had five items (sleep depth, sleep latency, frequency of awakenings, time spent awake, and quality of sleep) and was subsequently adapted to include a sixth item, the perceived noise level in the environment during the night, to assess the quality of night sleep. The patients responded to each item using a visual analog scale ranging from zero to 100. Scores less than 75 indicate poor sleep quality 18. Karaman and Ozer carried out the questionnaire's Turkish validity and reliability studies in 2015 18.
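To make the scoring concrete, here is a minimal sketch of how an RCSQ total can be computed from the five core visual-analog items described above; treating the total as the mean of the five items (with the sixth noise item reported separately) is a common convention, and the example ratings are invented.

```python
# Minimal RCSQ scoring sketch: five 0-100 visual analog items
# (sleep depth, latency, awakenings, time awake, overall quality);
# the total is taken as the mean of the five items, with <75
# interpreted as poor sleep quality, as in the study's inclusion rule.

def rcsq_total(items):
    assert len(items) == 5 and all(0 <= x <= 100 for x in items)
    return sum(items) / len(items)

score = rcsq_total([60, 55, 40, 50, 45])
print(score, "-> poor sleep quality" if score < 75 else "-> adequate sleep")
# prints: 50.0 -> poor sleep quality
```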
Sample size
A pilot study was performed with 20 people who were divided into two groups based on the use of 1 g and 2 g lavender tea bags. The analysis of the RCSQ scores of these 20 participants revealed a 33% difference between the groups in the percent changes between the baseline and 3rd-month RCSQ scores. Accordingly, the sample size was calculated as 42±39.6 for Group 1 and 42±63.9 for Group 2. The type I error (α value) was 0.05, and the power of the study (1-β) was 80%. A 10% drop-out rate was factored in, resulting in 47 participants per group (94 participants in total). The sample size calculation was performed using MedCalc® Statistical Software version 19.7.2 (MedCalc Software Ltd, Ostend, Belgium; https://www.medcalc.org; 2021). Consequently, the patients (n=94) were sequentially randomized into two groups, with 47 patients in each group. Patients in Group 1 were provided 1 g lavender tea bags, whereas the patients in Group 2 were provided 2 g lavender tea bags.

Interventions
All patients were instructed to drink one cup (200 mL) of lavender tea, prepared using 1 g lavender tea bags in Group 1 and 2 g lavender tea bags in Group 2, within the last hour before going to sleep, for three months. The patients were also advised to inhale the scent of the lavender tea. The herbal tea bag preparations contained the flowers of Lavandula intermedia and were steeped for 10 minutes before drinking. A total of 90 tea bags were provided to each participant, and their consumption of the tea bags was checked at one-month intervals.

Variables
The patients' demographic (age, gender) and descriptive characteristics (educational and marital status, comorbidities) were obtained during the first face-to-face interview. The RCSQ was administered to the patients in a quiet and comfortable room a total of three times: at the start of the study (RCSQ-baseline), one month after the start of the study (RCSQ-1), and three months after the start of the study (RCSQ-3).

Blinding
The patients and the researcher who assessed the questionnaires were blind to the groupings.

Statistical analysis
The RCSQ-1 and RCSQ-3 scores were the primary outcomes of the study. The secondary outcome was the percent (%) changes observed between the RCSQ-1 and RCSQ-3 scores and the RCSQ-baseline scores. Descriptive statistics were expressed as mean ± standard deviation values in the case of continuous variables that conformed to the normal distribution, and as median and minimum-maximum values in the case of continuous variables that did not. Categorical variables were expressed as numbers and percentages. The normal distribution of the numerical variables was analyzed using the Shapiro-Wilk test. The Student's t-test was used to compare two independent groups with numerical variables conforming to the normal distribution, whereas the Mann-Whitney U test was used for variables not conforming to it. The Pearson's chi-squared test was used to compare the differences between categorical variables in 2x2 tables. The Fisher's exact test with Yates continuity correction was used in the analyses where the Pearson's chi-squared test could not be used. The Friedman test was used to analyze more than two continuous variables that did not conform to the normal distribution. In the next step, post hoc analysis was performed using the Wilcoxon signed-rank test with Bonferroni correction to uncover the significant differences between the variables. MedCalc® Statistical Software version 19.7.2 (MedCalc Software Ltd, Ostend, Belgium; https://www.medcalc.org; 2021) was used in all statistical analyses. Probability (p) values ≤0.05 were deemed to indicate statistical significance.
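The 47-per-group figure above can be approximately reproduced with a standard two-sample power calculation. The sketch below assumes a two-sided two-sample t-test on the percent changes, with the 33% between-group difference as the effect and the two pilot standard deviations pooled; this is our reconstruction of the pilot numbers, not the authors' exact MedCalc procedure.

```python
# Approximate reconstruction of the sample-size calculation: alpha = 0.05,
# power = 0.80, effect = 33% difference in percent change, pilot SDs 39.6
# and 63.9 pooled; a 10% drop-out inflation then gives 47 per group.
# The two-sided two-sample t-test basis is an assumption.
from math import ceil, sqrt
from statsmodels.stats.power import TTestIndPower

sd_pooled = sqrt((39.6**2 + 63.9**2) / 2)          # ~53.2
d = 33 / sd_pooled                                  # Cohen's d ~0.62
n = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80)
print(ceil(n), ceil(ceil(n) / 0.9))                 # ~42 per group -> 47 per group
```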
RESULTS
The mean ages of the patients in Groups 1 and 2 were 68.9±2.9 and 68.9±3 years, respectively (p=0.985). There was no significant difference between the groups in gender (p=0.999) or in other demographic and clinical characteristics (p>0.05) (Table 1). The mean RCSQ-baseline scores were 52.5±7.5 and 53.1±7 in Groups 1 and 2, respectively. The difference between the mean RCSQ-baseline scores of the groups was insignificant (p=0.685). However, there were significant differences between the RCSQ-1 and RCSQ-3 scores of the groups (p<0.001 and p<0.001, respectively). The RCSQ-1 and RCSQ-3 scores of Group 2 were significantly higher than those of Group 1 (Table 2). There were also significant differences between the groups in the percent (%) changes observed between the RCSQ-1 and RCSQ-3 scores and the RCSQ-baseline scores. Significant increases were recorded in the RCSQ-1 and RCSQ-3 scores compared to the RCSQ-baseline scores in both groups (p<0.001) (Table 3) (Figure 1). The most significant change was recorded in Group 2 between the RCSQ-3 and RCSQ-baseline scores. The patients in the study groups reported no side effects related to lavender herbal tea usage.

Figure 1. Trends of the Richards-Campbell Sleep Questionnaire with 1 g and 2 g of lavender tea.

DISCUSSION
The findings of this study revealed that lavender herbal tea improved sleep quality in elderly patients with sleep problems. The consumption of a higher dose (2 g) of lavender tea resulted in greater improvements in sleep quality compared to the consumption of a lower dose (1 g). It is a known fact that anti-depressant and anxiolytic medications administered for sleep problems have considerable side effects. The effects of aromatherapy have been studied previously in the context of depression, anxiety, and sleep problems, taking the detrimental effects of such therapies into consideration 7,8. Although the relevant outcomes vary depending on the types of aromatic plants studied, the application routes utilized, the sleep-quality measurement tools used, and the characteristics of the study groups, there is a widespread belief that aromatherapy relieves the symptoms of depression, anxiety, and poor sleep quality and makes the patients feel good 2,19.

Herbal tea is a traditional form of using aromatic plants 20. Other forms of use include essential or volatile oils, tinctures, liquid alcoholic extracts, capsules, chewing tablets, lozenges, lollipops, and creams. Although the stability of each form has not been studied in detail, herbal tea bags were preferred as the form of aromatic plant in this study merely based on convenience, considering that they are both inexpensive and easy to use 20.

Previous studies revealed the beneficial effects of lavender preparations, including herbal tea, on depression and poor sleep quality. These effects were attributed to the ingredients of lavender that act on various neurotransmitters 6,7,21,22. Several studies reported significant improvements in sleep quality with the use of different lavender preparations 3,10,16,23-30. However, there are only two studies that investigated the effect of lavender herbal tea on sleep quality 2,7. In one of these two studies, Bazrafshan et al. 7 investigated the effect of lavender herbal tea on depression and anxiety scores in an elderly group and observed significant improvements after consumption of 2 g lavender herbal tea bags for two weeks. In the other study, Chen et al.
2 investigated the effect of using 2 g lavender tea bags for two weeks on fatigue, depression, and sleep quality in women with sleep disturbances during the postpartum period, yet did not observe any improvement in the sleep quality of the participants. In Chen's study, the positive effect of lavender herbal tea initially observed on postpartum depression was short-lived and became insignificant after four weeks 2. In comparison, in this study, two different doses of lavender tea were used (1 g and 2 g tea bags) and for a more extended period (three months). Consequently, significant improvements were observed at the end of three months with the use of both the 1 g and 2 g doses. Of the two doses, the use of 2 g lavender tea bags resulted in higher increases in the RCSQ scores compared to the use of 1 g lavender tea bags. Jager et al. 31 did not detect lavender in the blood 90 minutes after the consumption of lavender tea. Based on this result, they concluded that the metabolic effect of the herbal tea form of lavender might be less than that of its essential oil form, given the trace amount of aromatic molecules in herbal tea preparations. Therefore, multiple daily consumptions of lavender tea are needed to achieve a long-lasting effect. In addition to studies in which a positive relation was found between the use of lavender preparations and the relief observed in the symptoms of depression and anxiety, there are also studies that reported no improvements in anxiety levels with the use of lavender tea 6. To give an example, Seifi et al. 11 performed a 2-day intervention using lavender essential oil inhalation in patients who underwent coronary artery bypass graft surgery and found no improvement in anxiety scores three days after the surgery. Hence, this study's authors believe that the duration of the intervention and the route of lavender application are of primary importance in achieving the desired outcomes.

Limitations of the study
It is known that there are reciprocal relationships between depression, anxiety, and poor sleep quality; however, only sleep quality was assessed in this study. Secondly, the sleep quality measurements of the patients were carried out right after the patients finished using the lavender tea. If the measurements could have been repeated after a certain period, it would have been possible to assess how long the effects of the lavender tea lasted. Selecting the patients from a single center was another limitation. Additionally, patients' adherence to herbal tea consumption was not measured, and it was assumed that the patients used the lavender tea as instructed. It is clear that any nonadherence might have negatively affected the results.

CONCLUSION
In conclusion, the findings of this study revealed that lavender herbal tea improved sleep quality in elderly patients with sleep problems. Consumption of the higher dose of lavender tea (2 g vs. 1 g) resulted in significantly higher RCSQ scores. Therefore, the use of lavender may be recommended for individuals with sleep problems in the form of herbal tea preparations.

Table 1. Demographic and clinical characteristics of the study groups.
Table 2. The Richards-Campbell Sleep Questionnaire scores and their changes during the study.
Table 3. Comparison of the percent (%) changes in the Richards-Campbell Sleep Questionnaire scores between different study intervals. §: median [min-max], Friedman test with Bonferroni correction.
Identification of the Fungal Pathogens of Postharvest Disease on Peach Fruits and the Control Mechanisms of Bacillus subtilis JK-14

Postharvest fungal disease is one of the significant factors that limit the storage period and marketing life of peaches, and it can even result in serious economic losses worldwide. Biological control using microbial antagonists has been explored as an alternative approach for the management of postharvest diseases of fruits. However, there is little information available regarding the identification of the fungal pathogen species that cause postharvest peach diseases and the potential and mechanisms of using Bacillus subtilis JK-14 to control them. In the present study, a total of six fungal isolates were isolated from peach fruits, and the isolates of Alternaria tenuis and Botrytis cinerea exhibited the highest pathogenicity and virulence on mature peaches. In culture plates, the strain of B. subtilis JK-14 showed significant antagonistic activity against the growth of A. tenuis and B. cinerea, with inhibitory rates of 81.32% and 83.45% at 5 days after incubation, respectively. Peach fruits treated with different formulations of B. subtilis JK-14 showed significantly reduced mean disease incidences and lesion diameters for A. tenuis and B. cinerea. The greatest mean percent reductions of the disease incidences (81.99% and 71.34%) and lesion diameters (82.80% and 73.57%) of A. tenuis and B. cinerea were obtained at the concentration of 1 × 10⁷ CFU mL⁻¹ (colony forming unit, CFU). Treatment with the strain of B. subtilis JK-14 effectively enhanced the activity of the antioxidant enzymes superoxide dismutase (SOD), peroxidase (POD) and catalase (CAT) in A. tenuis- and B. cinerea-inoculated peach fruits. As such, the average activities of SOD, POD and CAT were increased by 36.56%, 17.63% and 20.35%, respectively, compared to the sterile water treatment. Our results indicate that the isolates of A. tenuis and B. cinerea are the main pathogens that cause postharvest peach diseases and that the strain of B. subtilis JK-14 can be considered an environmentally-safe biological control agent for the management of postharvest fruit diseases. We propose the possible mechanisms of the strain of B. subtilis JK-14 in the control of postharvest peach diseases.

Introduction
Postharvest losses refer to the losses that occur along the food supply chain due to pathogen infection, handling, storage, transportation and processing, thereby resulting in the reduction in quality, quantity and market value of agricultural commodities [1,2]. The Food and Agriculture Organization reported that the global average food postharvest loss in North America, Europe and Oceania was about 29%, compared to an average of about 38% in industrialized Asia, Africa, Latin America and South East Asia [3]. Among all the factors contributing to losses in the food supply, postharvest diseases of fruits are a major factor that causes postharvest losses and limits the duration of storage [4,5]. In addition, postharvest diseases are often a major concern influencing consumer prices, requirements and modes of transportation [6].
China is the largest producer of peaches, with a production of 13.5 million metric tons (MMT), and exports to North Korea, Russia, Singapore, the USA, the Philippines and Malaysia, countries with very different climates (tropical, continental or oceanic), but the postharvest diseases of peach fruits have been considered one of the most severe factors that result in loss of production [7]. Additionally, the diseases caused by fungal pathogens in harvested fresh fruits are considered one of the most serious sources of losses at the postharvest and consumption levels [8-10]. Some research showed that the main worldwide postharvest fungal diseases of peach fruits are brown rot, caused by Monilinia fructicola or M. laxa; Rhizopus rot, caused by R. stolonifer; grey mold, caused by Botrytis cinerea [11]; and other economically important fungal diseases of stone fruits caused by Penicillium spp., Cladosporium spp., Alternaria spp. and Aspergillus spp. [12-14]. However, little is known about the species of the main fungal pathogens that cause postharvest disease of peaches in China.

A number of strategies have been adopted to manage postharvest diseases worldwide [15,16]. Chemical control (synthetic fungicides) is known to be a highly effective and widely applied method in orchards after harvesting [17,18]. However, some fungicides carry toxicological risks, being dangerous to human health and causing environmental pollution, and in some cases their use in the postharvest phase is prohibited by law [19-21]. In particular, the increased level of fungicide use in fruit orchards has led to growing public concern over the health and environmental hazards associated with fungicides [22]. Therefore, the development of alternative, safe and natural methods of controlling postharvest diseases has become urgent in recent years worldwide [23,24]. In particular, there has been extensive research in recent years on reducing synthetic fungicide usage by using microbial antagonists to biologically control postharvest pathogens with high control efficiency [10,15,23,25]. The bacterium Bacillus spp. has been widely studied as a potential biological agent that acts against various plant diseases, increases plant systemic resistance and improves rhizosphere microbial community structure [26-29]. It is common in nature, nontoxic and harmless to humans and other animals, and nonpathogenic to plants [30]. However, there is little information on the bio-control activity of the bacterial antagonist B. subtilis JK-14 and the mechanisms involved in the postharvest disease management of peaches. Therefore, the objectives of the present study were to (i) isolate and identify the main species of fungal pathogens causing postharvest disease on peaches, (ii) explore the antifungal potential and controlling efficiency of B. subtilis JK-14 against the main postharvest fungal infections, and (iii) determine the possible mechanisms by which the strain of B. subtilis JK-14 controls postharvest fruit diseases on peaches.

Isolation and Identification of Postharvest Fungal Pathogens
In the present study, a total of six fungal isolates were isolated from mature peach (Prunus persica L.) fruits during the storage period. They were identified as Alternaria tenuis, Botrytis cinerea, Penicillium digitatum, Trichothecium roseum, Aspergillus niger and Rhizopus nigricans (Figure 1). The isolates of A. tenuis, B. cinerea, P. digitatum, T. roseum, A. niger, and R.
nigricans all grew well on potato dextrose agar (PDA) medium (pH = 6.0) at 25 °C under these ecophysiological conditions.

Determination of the Pathogenicity of the Isolates
The difference in the disease incidences caused by the six fungal isolates on mature fruits was highly significant between intact and wounded fruits. The non-inoculated control fruits, both intact and wounded, did not develop decay symptoms. In contrast, all the wounded fruits developed rot and decay, regardless of the isolate used. In particular, the isolates of A. tenuis, B. cinerea and R. nigricans produced the highest disease incidences after inoculation onto the wounded fruits, with disease incidences of 100% in all three cases. In addition, the highest disease incidences on the intact fruits were observed after inoculation with the isolates of A. tenuis and B. cinerea, at 100% and 92.33%, respectively, whereas the isolate of T. roseum was unable to infect the intact fruits under the same experimental conditions (Table 1). Data are means ± standard error of replicates, and those in a column followed by different letters are significantly different at p < 0.05, based on Duncan's new multiple range test using multi-way ANOVA (n = 18). The disease incidences (%) were determined at 5 days after inoculation with the six isolates. Control represents fruits inoculated with sterile water but not with the isolates.

Inhibitory Effect of Bacillus subtilis JK-14 against Alternaria tenuis and Botrytis cinerea
Our results showed that the strain of B. subtilis JK-14 exhibited significant antagonistic activity against the pathogens A. tenuis and B. cinerea compared to the control. In the culture plates (PDA), the colony growth of A. tenuis and B. cinerea was significantly inhibited at 5 days after inoculation with the antagonistic strain of B. subtilis JK-14. The inhibitory rates of A. tenuis and B.
cinerea were 81.32% and 83.45% at 5 days after inoculation with the strain of B. subtilis JK-14, respectively (Table 2). Data are means ± standard error of replicates, and those in a column followed by different letters are significantly different at p < 0.05, based on Duncan's new multiple range test using multi-way ANOVA (n = 12). The inhibitory rates (%) were determined at 5 days after inoculation with the pathogens Alternaria tenuis and Botrytis cinerea. Control represents media inoculated with Alternaria tenuis or Botrytis cinerea but not with Bacillus subtilis JK-14.

To further confirm the antagonistic activity of B. subtilis JK-14 in controlling A. tenuis and B. cinerea decay on fresh peach fruits, we treated the fruits with a bacterial cell suspension (BCS) of B. subtilis JK-14 and found that it was effective in inhibiting the fresh fruit decay caused by the pathogens A. tenuis and B. cinerea, compared to the control. The disease incidences and lesion diameters on the peach fruits treated with the BCS of B. subtilis JK-14 at the tested concentration of 1 × 10⁸ CFU mL⁻¹ were significantly reduced compared to those on the control fruits. The disease incidences and lesion diameters were 14.8% and 3.0 mm for A. tenuis, and 14.1% and 3.2 mm for B. cinerea, after 5-day incubation, whereas the average disease incidences and lesion diameters of A. tenuis and B. cinerea decay on the control fruits were 93.7% and 12.6 mm, respectively (Table 3). In addition, there were no significant symptoms and no mycelium around the inoculation sites of fresh fruits inoculated with the pathogen A. tenuis together with the antagonist B. subtilis JK-14 (Figure 2A), or with the pathogen B. cinerea together with B. subtilis JK-14 (Figure 2D), whereas the fruits in the control group decayed significantly and a large number of hyphae grew around the wound sites when the pathogens A. tenuis (Figure 2B) and B. cinerea (Figure 2C) were inoculated alone without B. subtilis JK-14. Data are means ± standard error of replicates, and those in a column followed by different letters are significantly different at p < 0.05, based on Duncan's new multiple range test using multi-way ANOVA (n = 18). The disease incidences (%) and lesion diameters (mm) were determined at 5 days after inoculation with the pathogens. Control represents peach fruits inoculated with Alternaria tenuis or Botrytis cinerea but not with Bacillus subtilis JK-14.
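The inhibitory rates and percent reductions quoted in these results follow from the conventional control-versus-treatment formulas. The sketch below states them explicitly (the formulas are standard practice and are our assumption, since the paper does not write them out) and reproduces the in vivo BCS numbers reported above.

```python
# Conventional effect measures for dual-culture and fruit-inoculation assays;
# the formulas are assumed (standard practice), the inputs are from the text.

def inhibitory_rate(control, treated):
    """Percent inhibition of colony growth (dual culture)."""
    return (control - treated) / control * 100

def percent_reduction(control, treated):
    """Percent reduction in disease incidence or lesion diameter."""
    return (control - treated) / control * 100

# BCS of B. subtilis JK-14 at 1e8 CFU/mL versus control, A. tenuis:
print(round(percent_reduction(93.7, 14.8), 1))  # disease incidence: 84.2%
print(round(percent_reduction(12.6, 3.0), 1))   # lesion diameter: 76.2%
```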
Effect of Bacillus subtilis JK-14 in Controlling Alternaria tenuis and Botrytis cinerea Decay on Peaches
The disease incidences and lesion diameters of postharvest decay of peaches treated with the different formulations of B. subtilis JK-14 at all tested concentrations (1 × 10⁵ to 1 × 10⁹ CFU mL⁻¹) were significantly reduced compared to those of the control fruits for the decay caused by the pathogens A. tenuis (Table 4) and B. cinerea (Table 5). Among all the tested concentrations of the formulations, the disease incidences and lesion diameters of postharvest decay of peaches were most significantly reduced by the application of the fermentation liquid bacterial cells (FLBC) and the BCS of B. subtilis JK-14 at 1 × 10⁷ CFU mL⁻¹. The disease incidences and lesion diameters were 18.52% and 3.79 mm for A. tenuis and 17.78% and 3.74 mm for B. cinerea at 5 days after inoculation with the FLBC formulation of B. subtilis JK-14 at 1 × 10⁷ CFU mL⁻¹, and 14.82% and 3.06 mm for A. tenuis and 14.07% and 3.19 mm for B. cinerea after inoculation with the BCS. In contrast, the disease incidences and lesion diameters of A. tenuis decay in the control fruits were 92.59% and 11.95 mm, respectively (Table 4), and those of B. cinerea decay in the control fruits were 92.59% and 13.11 mm, respectively (Table 5). In addition, the controlling effect of the strain of B. subtilis JK-14 differed between the FLBC and BCS formulations at all the tested concentrations. The average disease incidences and lesion diameters of A. tenuis and B. cinerea decay of the fruits treated with the BCS formulation were lower than those of the fruits treated with the FLBC formulation. Therefore, the BCS formulation of the strain of B. subtilis JK-14 exhibited the highest controlling effect on A. tenuis and B. cinerea decay compared to the control (Tables 4 and 5).

Effect of Bacillus subtilis JK-14 on the Symptoms of Fruit Decay after Inoculation with the Pathogens on Peaches
Overall, the different concentrations of the BCS formulation of B.
subtilis JK-14 (1 × 10⁵, 1 × 10⁶, 1 × 10⁷, 1 × 10⁸, and 1 × 10⁹ CFU mL⁻¹) had different inhibitory and controlling effects on the A. tenuis (Figure 3A) and B. cinerea (Figure 3B) decay of the fruits. The BCS formulation of B. subtilis JK-14 at the concentration of 1 × 10⁷ CFU mL⁻¹ had the strongest and most significant inhibitory and controlling effect (Figure 3). At the concentrations of 1 × 10⁵, 1 × 10⁸ and 1 × 10⁹ CFU mL⁻¹, the fruits decayed significantly and a large number of hyphae grew on the surface. At the concentration of 1 × 10⁶ CFU mL⁻¹, the fruits decayed and a smaller number of hyphae grew on the surface. However, there were no significant symptoms and no mycelium around the inoculation site at the concentration of 1 × 10⁷ CFU mL⁻¹. In contrast, the untreated control fruits exhibited significant decay, and a large number of hyphae grew around the wound site.

Discussion and Conclusions
Peach is one of the most ancient and popular fruits worldwide due to its high marketing value, favorable taste and abundant phytonutrients [31]. However, postharvest fungal diseases limit the storage period and marketing life of peaches and result in serious economic losses worldwide. Recently, the application of bio-control agents for the management of postharvest fruit decay has been explored as an alternative to synthetic fungicides worldwide [15]. Bacillus spp. have been considered bio-control agents for controlling a number of plant diseases with high efficacy [32,33]. However, there is little information available regarding the identification of the fungal pathogen species that cause postharvest peach diseases and the potential and mechanisms of Bacillus subtilis JK-14 in controlling them. Our present study showed that a total of six fungal isolates were isolated from mature peaches, and in particular the species Alternaria tenuis and Botrytis cinerea were identified as the main pathogens causing decay of mature peaches. Interestingly, the strain of B.
subtilis JK-14 exhibited potent activity in inhibiting the growth of A. tenuis and B. cinerea and in controlling peach fruit fungal disease in the present study. The possible mechanisms of the strain of B. subtilis JK-14 in inhibiting and controlling postharvest peach fungal disease comprise a direct effect, by inhibiting pathogen infection, and an indirect effect, by activating the host defense response to pathogen infection. To the best of our knowledge, the present study is the first to reveal the role of the antagonist B. subtilis JK-14 in controlling peach fungal diseases caused by the pathogens A. tenuis and B. cinerea. In view of its high control efficacy in comparison to the control, the strain of B. subtilis JK-14 can be considered an environmentally-safe biological control agent, instead of chemical fungicides, for the management of postharvest disease.

Some previous studies found and identified numerous postharvest pathogens that can cause the decay of stone fruits and that belong to the genera Monilinia, Rhizopus, Penicillium, Alternaria, Botrytis, Cladosporium, Colletotrichum and Stigmina [34], Trichothecium [35] and Aspergillus [36]. Interestingly, six fungal isolates were isolated from the mature peach fruits in the present study, including A. tenuis, B. cinerea, P. digitatum, T. roseum, R. nigricans and A. niger. Our results confirm for the first time that these species are pathogenic to peach fruit and cause decay on wounded peach fruits. However, we found that the isolate of T. roseum was not pathogenic to intact peach fruits. The reason may be the lack of wounds to permit T. roseum invasion; a similar study demonstrated that wounds can provide pathways for pathogen invasion [25]. In addition, some previous studies revealed that the gray mold decay, blue mold decay and Rhizopus decay caused by the fungi B. cinerea, P. expansum and R. stolonifer were the most economically significant and destructive postharvest diseases of peaches [5,8,37,38]. However, our results showed that the isolates of A. tenuis and B. cinerea presented the highest pathogenicity and virulence on mature peaches and should be considered the main pathogens causing postharvest disease of peach fruits. The average disease incidences of A. tenuis and B. cinerea were 100% and 96.17% after inoculation onto the wounded and intact fruits, respectively. The difference from the previous studies may be due to the relationship between the pathogenicity of microbial isolates and the ripening index of peach fruits at harvest [39,40].

In view of the need to reduce the environmental pollution caused by fungicide over-use in controlling plant diseases, biological control has recently emerged as an effective strategy to combat major postharvest decay of fruits [25,41]. It is well-known that B. subtilis is an effective antagonistic bacterium and has been applied in controlling plant fungal diseases such as root diseases [42], foliar diseases [43] and postharvest diseases [15]. A significant advancement from the present study is the finding that B. subtilis JK-14 exerted a significant inhibitory effect on the peach fruit pathogens A. tenuis and B. cinerea, and that different formulations of B. subtilis JK-14 exhibited significant controlling effects on peach fruit decay after inoculation with the pathogens A. tenuis and B. cinerea. Our findings suggest that the strain of B.
subtilis JK-14 can be considered a bio-control agent in the effort to develop alternative approaches to control postharvest diseases of fruits. A previous study showed that Bacillus sp. C06 suppressed the disease incidence of the postharvest disease brown rot by 92% and decreased the lesion diameters by 88% compared to the pathogen-only treatment, and that Bacillus sp. T03-c reduced disease incidences and lesion diameters by 40% and 62%, respectively [44]. Similarly, Xu et al. reported that treatment with Pichia caribbica significantly reduced the disease incidences and lesion diameters of Rhizopus decay of peaches compared with the control fruits in a dose-dependent manner [7]. However, our results revealed the greatest mean percent reductions of disease incidences and lesion diameters of peach postharvest fungal disease, 82.40% and 72.46%, after the application of B. subtilis JK-14 at 1 × 10⁷ CFU mL⁻¹ among all the concentrations tested from 1 × 10⁵ to 1 × 10⁹ CFU mL⁻¹. Such differences may be related to the effect of the pathogen species and of the different conditions of the ripening index of peach fruits (pH value) at harvest on the inhibitory effect of B. subtilis JK-14 [39].

To further understand the mechanisms of B. subtilis JK-14 in controlling postharvest diseases of peaches, we explored the effects of B. subtilis JK-14 on the activities of defense-related enzymes after inoculation with the pathogens and found that treatment of peach fruits with B. subtilis JK-14 effectively enhanced the activities of superoxide dismutase (SOD), peroxidase (POD) and catalase (CAT) after inoculation with the pathogen A. tenuis or B. cinerea. Our results indicate that the enhanced activities of defense-related enzymes may play a significant role in the resistance of peaches to pathogen infection and that the induced activity of defense-related enzymes may be part of the mechanism of B. subtilis JK-14 in controlling postharvest diseases of peach fruits. Some previous studies revealed that one of the important mechanisms of the genus Bacillus in controlling plant diseases is increasing and activating plant systemic resistance [45-47]. In addition, the enhanced activities of antioxidant enzymes (SOD, POD, CAT and ascorbate peroxidase, APX) and their coordinated action have been reported to be part of the mechanism implicated in the alleviation of lipid peroxidation and the delay of senescence in peach fruits [48]. Similarly, Xu et al. [7] demonstrated that peach fruits inoculated with P. caribbica exhibited higher levels of POD, CAT and phenylalanine ammonia-lyase (PAL) activities than the untreated fruits during the storage period.

In summary, a total of six isolates were isolated from the peach fruits, and the isolates of A. tenuis and B. cinerea were considered the main pathogens, with the highest pathogenicity and virulence on mature peaches. The strain of B. subtilis JK-14 exhibits high efficacy in controlling postharvest decay of peaches and may be considered an environmentally-safe biological control agent for the management of postharvest decay diseases. The possible mechanisms of B. subtilis JK-14 for the management of peach postharvest disease were (i) a direct effect, by inhibiting the growth and infection of the postharvest fungal pathogens, and (ii) an indirect effect, by activating the defense-related enzymes to enhance the resistance of peaches in response to postharvest fungal pathogen infection during the storage period.
Materials and Methods
Experiments were carried out at the Gansu Provincial Biocontrol Engineering Laboratory of Crop Diseases and Pests. The peach (Prunus persica L., cultivar Baifeng) fruits were collected from the stone fruit orchards in Gansu, China. Gansu is located in the northwest of China, at a longitude of 103.826447° E and a latitude of 36.059561° N, with a dry and strongly continental temperate monsoon climate. The average temperature, precipitation and relative humidity of the air were about 8 °C, 300 mm and 30% in 2011-2012.

Fungal Pathogens Isolation and Identification
During 2011-2012, the mature peach (cultivar Baifeng) fruits were collected from the stone fruit orchards in Gansu, China. The ripening index of the peach fruits at harvest was: pH 3.75-3.98, organic acid 2.28-2.64 mg g⁻¹, ethylene production 16.23-21.46 µL kg⁻¹ h⁻¹, soluble solids content 12.24-13.16%, total sugar 90.85-110.90 mg g⁻¹, pectic substances 9.2-13.8 mg g⁻¹. Thereafter, fruits were moist-incubated by placing them in plastic containers with lids, lined with moist paper towels to maintain high relative humidity, and incubated at room temperature (20 °C) for 1-2 weeks to promote pathogen growth and development. Small fruit sections (2 cm) were surface sterilized with 2% sodium hypochlorite (NaClO) for 3 min, followed by 3-min rinses in sterile water. Fruits were then cut lengthwise along the lesion (1 cm) and placed individually onto PDA for 5 days at 25 °C. The spores and mycelium were transferred with a sterile needle from the colony to fresh Petri dishes containing PDA medium at day 5. These cultures were grown for 5 days in an incubator at 25 °C and then identified according to colony and spore characteristics. Finally, all isolates were maintained and stored in 20% glycerol at −80 °C until use.

Spore Suspensions of Fungal Pathogen Preparation
The identified pathogens of peaches were cultured on PDA medium for 5 days and then suspended in 5 mL of sterile water containing 0.05% (v/v) Tween-80. Thereafter, the spore suspensions were filtered through 0.22 mm Millipore membranes to remove any adhering mycelia. The concentration of the spore suspension was determined using a hemocytometer, and the final concentration was then adjusted to 1 × 10⁶ CFU mL⁻¹ [49].
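As a worked example of the concentration adjustment just described, the sketch below assumes a standard Neubauer hemocytometer in which one large square corresponds to 0.1 µL (10⁻⁴ mL) of suspension; the counts and dilution factor are invented for illustration, not taken from the paper.

```python
# Hemocytometer arithmetic for adjusting a spore suspension to 1e6 CFU/mL.
# Assumes a Neubauer chamber (1 large square = 1e-4 mL of suspension);
# the counts and dilution factor below are illustrative.

def spores_per_ml(counts, dilution_factor=1):
    mean_count = sum(counts) / len(counts)
    return mean_count * 1e4 * dilution_factor

stock = spores_per_ml([210, 198, 205, 195], dilution_factor=10)
target, v_final = 1e6, 10.0                 # spores/mL, mL
v_stock = target * v_final / stock          # C1*V1 = C2*V2
print(f"stock: {stock:.2e} spores/mL; mix {v_stock:.2f} mL stock "
      f"with {v_final - v_stock:.2f} mL diluent")
# prints: stock: 2.02e+07 spores/mL; mix 0.50 mL stock with 9.50 mL diluent
```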
Fruit Preparation
For inoculum production, the experiments were conducted with the peach (Prunus persica L.) fruit cultivar Baifeng. The fresh fruits (pH = 3.53-3.64) were collected one week before commercial harvest during the 2012 production season, and the mature fruits (pH = 3.75-3.98) were collected and harvested at the mature stage and sorted based on size and the absence of physical injuries or disease infection. Before treatments, fruits were surface disinfected with 2% (v/v) NaClO for 3 min, then rinsed with sterile water and air-dried for approximately 30 min at room temperature (20 °C) prior to use [50] and inoculation.

Pathogenicity of the Isolates on Peach Fruits
All the isolates in the present study were tested for pathogenicity on the mature peach fruits. Two groups of treatments were designed in this experiment: (i) one group of sterile fruits was wounded once to a depth of 3 mm with a sterilized needle in the equatorial zone (wounded fruits), and (ii) another group of sterile fruits was left non-wounded (intact fruits). A 5-mm-diameter plug from a 5-day-old mycelial culture of each isolate was inoculated onto intact and wounded peach fruits. Additionally, a 5-mm-diameter PDA plug was used as the untreated control treatment. Thereafter, all the treatment fruits were moist-incubated by placing them in plastic containers with lids, lined with moist paper towels to maintain high relative humidity, and incubated at room temperature (20 °C). Pathogenicity was determined as the ability to cause the typical decay symptoms and by the number of fruits infected. The disease incidence was measured at 5 days after inoculation. Each experiment had three replications, each replication had three fruits, and all the experiments were repeated twice. The most pathogenic isolates were used to determine the antagonistic activity of Bacillus subtilis JK-14 in later experiments.

Formulations of Bacillus subtilis JK-14 Preparation
The strain of B. subtilis JK-14 used in the present study was obtained from the College of Plant Protection, Gansu Agricultural University, isolated from the surface of peach fruits from an orchard in Gansu, China, and tested for its antifungal potential against the most pathogenic isolates on mature peach fruits. An active colony was prepared by culturing on nutrient agar (NA, pH = 7.0) in Petri dishes for 3 days at 28 °C. A culture of B. subtilis JK-14 was obtained by transferring a colony from the activated culture plate into a 150 mL flask containing 30 mL of liquid broth (peptone 0.3 g, yeast extract 0.3 g, NaCl 0.05 g) and shaking in an orbital shaker (200 rpm) at 28 °C for 48 h. A formulation of fermentation liquid with bacterial cells (FLBC) was made by incubating the bacterial culture under the same conditions and then diluting it with sterile water to prepare final FLBC concentrations from 1 × 10⁵ to 1 × 10⁹ CFU mL⁻¹. A formulation of bacterial cell suspension (BCS) was prepared by centrifuging the fermentation liquid at 12,000 rpm at 4 °C for 20 min and filtering through a 0.22 µm biofilter to collect the bacterial sediment. Thereafter, the bacterial sediment was washed with an equal volume of saline (0.85% NaCl) and then resuspended in sterile water to prepare final BCS concentrations from 1 × 10⁵ to 1 × 10⁹ CFU mL⁻¹. The two formulations were stored at 4 °C for later use.
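One simple way to obtain the 1 × 10⁵ to 1 × 10⁹ CFU mL⁻¹ series described above is a ten-fold serial dilution from the highest-titre stock; the sketch below shows that layout, with transfer and diluent volumes chosen for illustration rather than taken from the paper.

```python
# Ten-fold serial dilution layout for the 1e5-1e9 CFU/mL formulation series.
# Volumes are illustrative assumptions; only the target concentrations
# come from the text.

def tenfold_series(stock_cfu_ml, steps, transfer_ml=1.0, diluent_ml=9.0):
    series, conc = [], stock_cfu_ml
    for _ in range(steps + 1):
        series.append(conc)
        conc *= transfer_ml / (transfer_ml + diluent_ml)  # 1 mL into 9 mL
    return series

for c in tenfold_series(1e9, steps=4):
    print(f"{c:.0e} CFU/mL")  # 1e+09, 1e+08, 1e+07, 1e+06, 1e+05
```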
Disease incidence, lesion diameters, and the symptoms of the treated peach fruits were measured and observed at 5 days after inoculation. All treatments were carried out with three replicates and three fruits per treatment, and the experiment was conducted twice.

Efficacy of Bacillus subtilis JK-14 in Controlling Peach Postharvest Disease

For the fruit inoculation, peach fruit samples were treated as described above to determine the antagonistic activity of the B. subtilis JK-14 formulations (FLBC and BCS) in inhibiting A. tenuis and B. cinerea decay in mature peach wounds in vivo. An aliquot (30 µL) of each formulation of B. subtilis JK-14 at 1 × 10⁵, 1 × 10⁶, 1 × 10⁷, 1 × 10⁸, or 1 × 10⁹ CFU mL⁻¹ was pipetted into each wound site, and 30 µL of sterile water in place of the B. subtilis JK-14 formulations was used as the control. Two hours later, 15 µL of spore suspension of A. tenuis or B. cinerea (1 × 10⁶ CFU mL⁻¹) was inoculated into each wound. After air drying, the treated peaches were incubated as described above. Disease incidence, lesion diameters, and the symptoms of the treated mature peach fruits were measured and observed at 5 days after inoculation. All treatments were carried out with three replicates and three fruits per treatment, and the experiment was conducted twice.

Effects of Bacillus subtilis JK-14 on the Activities of Defense-Related Enzymes of Peaches

Peach fruit samples were treated as described above to test the efficacy of B. subtilis JK-14 in inhibiting A. tenuis and B. cinerea decay in mature peach wounds. The wounds were treated with 30 µL of the BCS of B. subtilis JK-14 at 1 × 10⁷ CFU mL⁻¹, with 30 µL of sterile water in place of the BCS formulation used as the control. Two hours later, 15 µL of spore suspension of the most pathogenic isolates of A. tenuis and B. cinerea (1 × 10⁶ CFU mL⁻¹) was inoculated into each wound. Treatments with sterile water or B. subtilis JK-14 alone were considered as controls. After air drying, the peach fruits were stored in enclosed plastic containers to maintain high relative humidity (RH 85%) and incubated at 20 °C. To measure the activities of the defense-related enzymes of peaches after treatment with B. subtilis JK-14, the tissue surrounding each wound was collected at Day 4 after treatment. Three replicates consisting of three fruits were sampled in both the inoculated and control groups, and the experiments were conducted twice.

Determination and Analysis of Defense-Related Enzyme Activities of Peaches

The enzyme extract was obtained from the collected samples following the method of Xu et al. [7]. The tissue surrounding each wound (2 g) was collected and homogenized with 4 mL of ice-cold sodium phosphate buffer (50 mM, pH 7.8) containing 1.33 mM EDTA and 1% PVP. The homogenates were then centrifuged at 12,000× g for 15 min at 4 °C, and the supernatants were collected and used as enzyme extract to assay the activities of POD, SOD and CAT using a spectrophotometer (AOE UV1900, Shanghai, China). POD activity was assayed following the method of Meng et al., with minor modifications [54]. The reaction mixture containing 0.2 mL of the enzyme extract and 2.2 mL of 0.3% guaiacol was incubated for 5 min at 30 °C, and the reaction was then initiated by adding 0.6 mL of 0.3% H2O2.
The activity of POD was determined by measuring absorbance at 470 nm and expressed as U per g fresh weight (U g⁻¹ FW). SOD activity was measured following the method of Giannopolitis and Ries, assaying the ability to inhibit the photochemical reduction of nitroblue tetrazolium chloride (NBT) [55]. The reaction mixture (1.5 mL) contained 50 mM phosphate buffer (pH 7.8), 0.1 µM EDTA, 13 mM methionine, 75 µM NBT, 2 µM riboflavin and 50 µL enzyme extract. One unit of SOD activity was defined as the amount of enzyme required to cause 50% inhibition of the NBT photoreduction rate, and the results were expressed as U g⁻¹ FW. CAT activity was measured according to the method described by Wang et al. [56], with some modifications. The reaction mixture contained 1.4 mL buffered substrate (50 mM sodium phosphate, pH 7.8, and 30 mM H2O2) and 100 µL of enzyme extract. The decomposition of H2O2 was followed by the decline in absorbance at 240 nm. One unit of CAT activity was defined as the amount of enzyme decomposing H2O2 per unit time, and the activity was expressed as U g⁻¹ FW.

Statistical Analysis

Data presented in the present paper were pooled across two independent repeated experiments. All statistical analyses were performed with SPSS version 16.0 (SPSS Inc., Chicago, IL, USA, 2007). Data were analyzed by multi-way ANOVA. Duncan's multiple range tests were computed using the standard error and T values of adjusted degrees of freedom. Differences at p < 0.05 were considered significant.
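The enzyme assays above all reduce to simple absorbance arithmetic. The following is a minimal sketch of those calculations; the specific unit conventions used here (e.g., a ΔA of 0.01 per minute counting as one unit for POD and CAT) are common laboratory conventions and are assumptions on our part, not values given in this paper.

```python
# Sketch of the enzyme-activity calculations described above. The unit
# definitions (e.g., delta-A of 0.01 per min = 1 U for POD/CAT) are common
# conventions and are assumptions here, not values stated in the paper.

def pod_activity(delta_a470_per_min: float, fresh_weight_g: float,
                 unit_delta_a: float = 0.01) -> float:
    """POD activity in U per g fresh weight from the rise in A470."""
    return (delta_a470_per_min / unit_delta_a) / fresh_weight_g

def sod_percent_inhibition(a_control: float, a_sample: float) -> float:
    """Percent inhibition of NBT photoreduction relative to an extract-free control."""
    return 100.0 * (a_control - a_sample) / a_control

def sod_units(a_control: float, a_sample: float, fresh_weight_g: float) -> float:
    """One unit = the amount of enzyme causing 50% inhibition (linear approximation)."""
    return (sod_percent_inhibition(a_control, a_sample) / 50.0) / fresh_weight_g

def cat_activity(delta_a240_per_min: float, fresh_weight_g: float,
                 unit_delta_a: float = 0.01) -> float:
    """CAT activity in U per g fresh weight from the decline in A240."""
    return (delta_a240_per_min / unit_delta_a) / fresh_weight_g

print(pod_activity(delta_a470_per_min=0.35, fresh_weight_g=2.0))  # 17.5 U/g FW
```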
Cytotoxic effect in vivo of selected chemotherapeutic agents on synchronized murine fibrosarcoma cells.

The cytotoxic effects in vivo of single doses of either adriamycin (ADM), 1-β-D-arabinofuranosylcytosine (Ara-C), bleomycin (BLM), cis-diamminedichloroplatinum (II) (cis-DDP), or cyclophosphamide (CY) on murine fibrosarcoma (FSa) cell populations were determined. Tumour cells were separated and synchronized by centrifugal elutriation. Viable tumour cells from selected elutriator fractions were then injected i.v. into whole-body-irradiated mice. Twenty minutes later selected doses of ADM, Ara-C, BLM, cis-DDP or CY were administered to selected groups of these animals. Fourteen days later the mice were killed. Killing of injected tumour cells by each of the chemotherapeutic agents was evidenced by a reduction in the lung colonies per cell injected in treated animals. Under these conditions the response of FSa cells in vivo to the 5 drugs tested differed both qualitatively and quantitatively. Ara-C was S-phase-specific in toxicity. ADM, BLM, and cis-DDP were preferentially toxic to S, G2+M and G1 cells respectively. CY, a drug requiring bioactivation to form alkylating metabolites, was found to be equally toxic to G1 and G2+M enriched populations, but less effective in killing cell populations enriched with early-S cells.

KNOWLEDGE of the differential cytotoxicity of drugs to cells in various phases of the cell cycle is extremely important to the design of chemotherapy protocols. Such studies have most frequently been carried out in vitro, using cultured cell lines. While this approach has given rise to information, many difficulties exist in relating these data to the complex in vivo situation (Valeriote & van Putten, 1975). In vivo studies have been made, but they generally involve the use of two or more cytotoxic agents. A partial synchrony of "target" cells is induced by first exposing the host animal to a known phase-specific agent such as hydroxyurea (Madoc-Jones & Mauro, 1970). At varying times afterwards, a second agent is administered and its effects are monitored. The difficulty with this approach, however, is in discerning whether the response of the cells to the second agent is perturbed in any way by exposure to the first.
In a recent communication we described a procedure for testing in vivo the phase-specific cytotoxicity of chemotherapeutic agents (Grdina et al., 1979). The method is based on the separation and synchronization of tumour cells by centrifugal elutriation (Grdina et al., 1978a). Cells enriched in the various phases of the cell cycle (i.e. the "target" populations) are injected i.v. into mice. At selected times later, the drug to be tested is administered either i.v., i.p. or s.c. With appropriate controls, the number of lung colonies formed reflects the phase-specific cytotoxicity of the test agent. This procedure is advantageous in that it is applicable to testing drugs which require bioactivation. Additionally, relatively large numbers of cells can be separated, synchronized and recovered without loss of viability. In this communication we describe the cytotoxic effects of adriamycin (ADM), 1-β-D-arabinofuranosylcytosine (Ara-C), bleomycin (BLM), cis-diamminedichloroplatinum (II) (cis-DDP) and cyclophosphamide (CY) on synchronized murine fibrosarcoma (FSa) cells lodged in the lungs of specific-pathogen-free C3Hf/Kam mice. The tumour cells were separated by centrifugal elutriation and characterized with respect to cell-stage distribution by flow microfluorometry (FMF).

MATERIALS AND METHODS

Preparation of tumour cells.—The tumour and cell-separation systems have been described in detail elsewhere (Grdina et al., 1979). Briefly, tumour-source material was derived from 6th-generation isotransplants of a methylcholanthrene-induced murine fibrosarcoma (Suit & Suchato, 1967). Ten- to twelve-week-old female C3Hf/Kam mice from our specific-pathogen-free breeding colony were used. Single-cell suspensions were obtained by mincing and trypsinization (Grdina et al., 1975). Cell viability was determined by phase-contrast microscopy and was routinely > 95%. Tumour cells derived in this manner were then incubated in vitro for 48 h before centrifugal elutriation to improve synchrony (Grdina et al., 1978a).

Cell separation by centrifugal elutriation.—Tumour cells were separated under sterile conditions using a Beckman JE-6 elutriator rotor (Grdina et al., 1978a). The rotor chamber and associated tubing were sterilized by pumping 70% ethanol throughout the system. The ethanol was allowed to remain in the system overnight. Before use, the ethanol was removed and sterile Solution A (8.0 g NaCl, 0.4 g KCl, 1.0 g glucose, and 0.35 g NaHCO3 in 1 L H2O) was used to rinse out the system. The separation medium consisted of modified McCoy's 5A (Humphrey et al., 1970) supplemented with 5% foetal calf serum containing DNase (Deoxyribonuclease 1; Sigma Chemical Co., St Louis, MO) at a final concentration of 0.1 mg/ml and 5 mM 2-naphthol-6,8-disulphonic acid to reduce cell clumping (Shortman, 1973). All separations were performed at 4°C. During separation the rotor speed was set at 1525 rev/min and the flow rates were varied by equal increments from 5.4 to 27.4 ml/min. Routinely, 2 × 10⁸ cells were separated. Twelve fractions were collected and then stored at 4°C. Cells collected in each fraction were counted by haemacytometer and by Coulter counter (model ZBI; Coulter Electronics, Hialeah, FL), and their volume distributions determined with a multichannel analyser (Channelyzer II; Coulter Electronics). The modal volume was designated as the volume corresponding to the modal channel number of the volume distribution of each sample (Grdina et al., 1978a).
The DNA content of individual cells in suspension was determined by flow microfluorometry (FMF) using an ICP II flow cytometer (Phywe Co., Göttingen, Germany). Cells were stained with mithramycin (Grdina et al., 1978a) and the resultant histograms of DNA fluorescence were computer-analysed (Johnston et al., 1978).

Lung colony assay.—The colony-forming efficiency (CFE) of FSa cells was determined by a lung colony assay (Hill & Bush, 1969). To maximize CFE, recipient mice, with their hind legs shielded, were whole-body irradiated with 10 Gy 24 h before use. These mice were injected with 1.5 × 10⁴ viable FSa cells from each of the elutriator fractions, or an unseparated control population (USC), along with 2 × 10⁶ heavily irradiated (HIR; 100 Gy) FSa tumour cells. The HIR cells were not separated by centrifugal elutriation. Fourteen days later the mice were killed, their lungs removed, the lobes separated and fixed in Bouin's solution, and the tumour colonies counted.

Drug testing in vivo.—The drugs used in this study were obtained from the following sources: ADM, Adria Laboratories, Wilmington, DE; Ara-C, Upjohn, Kalamazoo, MI; BLM, Bristol Laboratories, Syracuse, NY; cis-DDP, Division of Cancer Treatment, National Cancer Institute, National Institutes of Health, Bethesda, MD; and CY, Mead Johnson, Evansville, IN. Phenobarbital was obtained from Wyeth Laboratories, Inc., Philadelphia, PA, and administered i.p. at a dose of 40 µg/g twice daily for 5 days before treatment of animals with CY (Peters & Mason, 1977). Stock solutions of drugs were made up immediately before use in sterile water. Twenty minutes after the i.v. injection of viable FSa cells from each of the elutriator fractions and an unseparated control population into recipient mice, selected groups of these animals were injected with either ADM (10 mg/kg, i.v. or 5 mg/kg, i.p.), Ara-C (50 mg/kg, i.p.), BLM (15 mg/kg, s.c.) or cis-DDP (4 mg/kg, i.v.). In addition, CY (200 mg/kg, i.v.) was injected into mice previously treated with phenobarbital. Additional groups of mice injected with viable FSa cells, and phenobarbital in the case of CY experiments, remained untreated as controls. Under these conditions, FSa cells from the various elutriator fractions are equally retained in the lungs of the recipient animals, and over 95% of the cells are present in the lungs 20 min after injection (Grdina et al., 1978b). Drug doses and routes of administration were chosen which allowed for sufficient expression of tumour-cell killing but minimized toxic effects to the animals used in the experiments.

RESULTS

A variety of chemotherapeutic agents have been tested in vivo against murine fibrosarcoma cell populations synchronized by centrifugal elutriation. Presented in Fig. 1 is a representative sedimentation profile, describing the relationship between modal cell volume and the number of cells recovered in each elutriator fraction. Cell recovery ranged from 85 to 95%. Cell viability, as determined by phase-contrast microscopy, was routinely > 95% for cells collected in Fractions (F) 3-11. F1 and F2 were discarded because they contained subcellular debris and damaged cells. F12 and F13 contained mixtures of large and small cells, as well as small clumps of cells washed out of the rotor at the end of the run; these fractions were therefore also discarded. Using the method of flow microfluorometry, no non-tumour cells were detected in any of the fractions. These cells had been eliminated from the tumour population by incubating the tumour suspension for 48 h in vitro before separation (Grdina et al., 1978a).
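The lung colony assay described above reduces to simple ratios; the following is a minimal sketch of that arithmetic. The colony counts used here are illustrative values, not data from this study.

```python
# Sketch of the lung-colony-assay arithmetic described above. The colony
# counts are illustrative placeholders, not data from the paper.

def colony_forming_efficiency(colonies: float, cells_injected: float) -> float:
    """CFE = lung colonies formed per viable cell injected."""
    return colonies / cells_injected

def surviving_fraction(colonies_treated: float, colonies_control: float) -> float:
    """Drug-induced cell kill expressed relative to untreated controls
    (the same number of cells injected in both groups)."""
    return colonies_treated / colonies_control

cfe_control = colony_forming_efficiency(colonies=120, cells_injected=1.5e4)  # 0.8%
sf = surviving_fraction(colonies_treated=18, colonies_control=120)           # 0.15
print(cfe_control, sf)
```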
The colony-forming efficiency (CFE) of untreated cell populations varied between experiments from 1 to 3% (i.e. an average of 50-150 colonies per animal). Within each experiment, however, no appreciable difference in CFE was observed between the elutriated control groups. All experiments were repeated at least 3 times, and representative data are presented in each of the figures. The cytotoxic effectiveness of ADM was tested in vivo on an unseparated control (USC) population and elutriator-synchronized FSa populations lodged in the lungs of test animals. For comparison, the drug was administered either i.p. at 5 mg/kg or i.v. at 10 mg/kg. Under either condition, cell killing was seen in all cell fractions, with the greatest reduction in CFE for cells collected in F9 (see Fig. 2). This elutriator fraction contained 80% S-phase cells. The CFE after exposure in vivo to ADM, Ara-C, BLM and cis-DDP is summarized in Fig. 3. Ara-C was found to be most toxic to S-enriched tumour populations in vivo, as evidenced by the reduced CFE of cells in F7, F8 and F9. These populations contained 63, 72 and 80% S cells, respectively. FSa cells in F10 and F11, however, were found to be most sensitive to BLM administered s.c. These fractions contained 84% and 90% G2+M cells respectively. Finally, cis-DDP administered i.v. was found to be most cytotoxic to cells in F2-4. These contained primarily G1 cells (94-65%). Since no lung colonies were observed in treated animals injected with F2 and F3 cells, these points could not be included in the figure. Each of the agents described so far is cytotoxic under in vitro conditions and has been extensively characterized using in vitro cell systems. Cyclophosphamide, however, requires biotransformation by microsomal mixed-function oxidases to exert its cytotoxic effect (Brock & Hohorst, 1967). In addition, phenobarbital has been found to accelerate the biotransformation of CY to its active form (Donelli et al., 1976; Peters & Mason, 1977). The cytotoxic effect of CY on FSa cells lodged in the lungs of phenobarbital-treated animals is presented in Fig. 4. (Fig. 4 caption: plotted as a function of elutriator fraction number; animals were pretreated with phenobarbital at a dose of 40 µg/g twice daily for 5 days before administration of CY; USC = unseparated cells; vertical bars represent the s.e.) CY was more toxic to cells in G1 (F3-F5) and G2+M (F9-F11) than to S cells in the intermediate fractions (F6 and F7). Similar results were obtained using mice not pretreated with phenobarbital.

DISCUSSION

In an earlier report we described in detail a procedure by which chemotherapeutic agents could be characterized in vivo with respect to phase specificity in cell killing (Grdina et al., 1979). In particular, we chose hydroxyurea for initial investigation, because it had been well characterized with respect to its phase-specific toxicity to S cells. We have now further characterized this system with respect to agents which are known to differ from each other in their phase-specific or preferential toxicity to cells, one of which must be bioactivated in order to exert its cytotoxicity. ADM was chosen because it is a well-characterized anthracycline antibiotic known to be effective against many animal and tumour systems.
It has been observed that while it is cytotoxic for cells in all phases of the cell cycle, it is most toxic to cells in S (Kim & Kim, 1972). Our results agree with these findings. ADM cytotoxicity in vivo, as evidenced by a reduction in the number of lung colonies in treated mice, was greatest for FSa cell populations most enriched with S cells (see Fig. 2). This effect, though differing in magnitude, was similar for the two doses and routes of injection used. Ara-C is a known S-specific agent (Skipper et al., 1967; Momparler, 1974). As shown in Fig. 3, cell killing correlated well with the percentage of S cells in each fraction. BLM, an agent reported to be most effective against G2 and M cells (Barranco & Humphrey, 1971; Drewinko & Barlogie, 1976), was found to be most toxic to G2+M FSa cells collected in F10 and F11. Cis-DDP was included in this study because it has been demonstrated under in vitro conditions to be preferentially cytotoxic to G1 cells (Drewinko et al., 1973; Fraval & Roberts, 1979). As shown in Fig. 3, cis-DDP was most toxic to the G1-enriched FSa population in F3. In a recent report, centrifugal elutriation was used to separate and synchronize Chinese hamster ovary (CHO) cells in order to characterize the cycle-dependent cytotoxicity of selected chemotherapeutic agents in vitro (Meyn et al., 1980). CHO cells were observed in this study to be most sensitive to cis-DDP in G1, ADM and Ara-C in S, and BLM in G2+M. There is excellent agreement between data derived from established in vitro methods and those presented here concerning the cytotoxic activities of ADM, Ara-C, BLM and cis-DDP. These agents have been demonstrated to exert similar cell-cycle phase-dependent toxicity under both in vitro and in vivo conditions. The in vivo method has the advantage that it permits the direct characterization of agents which require bioactivation to become effective. For this reason, cyclophosphamide was chosen for study. CY is a potent antineoplastic agent which must be metabolized, primarily by microsomal enzymes in the liver, to produce alkylating metabolites (Brock & Hohorst, 1967). To accelerate this effect, liver microsomal enzymes can be stimulated by phenobarbital (Donelli et al., 1976; Peters & Mason, 1977). Alkylating agents are known to be more toxic to G1 and M cells than to S cells (Bhuyan, 1977). Results presented in Fig. 4 indicate that FSa populations enriched with S cells (i.e. F6 and F7) were less sensitive to CY toxicity than cells from the other fractions. These results were confirmed in 3 separate experiments. In conclusion, we have characterized in vivo the cell-cycle phase-specific effects of a variety of chemotherapeutic agents currently used in the treatment of malignant disease. The method described in this communication can also be applied to the separation and synchronization of FSa cells grown as pulmonary nodules in mice, without the requirement of a preseparation incubation in vitro (Grdina, 1980). Data acquired in this manner, however, must be interpreted with respect to the pharmacological properties of the agents tested and the animal system used. Chemotherapeutic agents can thus be routinely and rapidly evaluated under in vivo conditions with respect to their phase-specificity in cell killing, effect on cell kinetics, and toxicity to the host animal.
HULAT at SemEval-2023 Task 10: Data Augmentation for Pre-trained Transformers Applied to the Detection of Sexism in Social Media

This paper describes our participation in SemEval-2023 Task 10, whose goal is the detection of sexism in social media. We explore some of the most popular transformer models, such as BERT, DistilBERT, RoBERTa, and XLNet. We also study different data augmentation techniques to increase the training dataset. During the development phase, our best results were obtained by using RoBERTa and data augmentation for tasks B and C. However, the use of synthetic data does not improve the results for task C. We participated in the three subtasks. Our approach still has much room for improvement, especially in the two fine-grained classifications. All our code is available in the repository https://github.com/isegura/hulat_edos.

Introduction

Sexism can be defined as behaviors or beliefs that support gender inequality and result in discrimination, generally against women. Contrary to what one might believe, sexism is still very present, even in the most technologically advanced societies (Ridgeway, 2011). Proof of this is that many gender stereotypes are still present in our belief system today (for example, that men should not wear dresses). Unfortunately, social networks are used to spread hateful and sexist messages against women (Rodríguez-Sánchez et al., 2020). During the last few years, various research efforts (Rodríguez-Sánchez et al., 2022; Fersini et al., 2022) have been devoted to the development of automatic tools for the detection of sexist content. While these automated tools have addressed the classification of sexist content, this is a high-level classification that does not provide additional information allowing us to understand why the content is sexist. The goal of SemEval-2023 Task 10, Explainable Detection of Online Sexism (EDOS) (Kirk et al., 2023), is to promote the development of fine-grained classification models for detecting sexism in posts written in English, collected from social networks such as Gab and Reddit. The organizers of the task proposed three subtasks: A) Binary Sexism Detection, B) Category of Sexism, a four-class classification task, and C) Fine-grained Vector of Sexism, an 11-class classification. A detailed description of these classifications can be found in (Kirk et al., 2023). In our approach, we explored some of the most popular pre-trained transformer models, such as BERT (Devlin et al., 2019), DistilBERT (Sanh et al., 2019), RoBERTa (Zhuang et al., 2021), and XLNet (Yang et al., 2019). Moreover, we used different data augmentation techniques (such as EDA (Wei and Zou, 2019) and the NLPAug library) to create synthetic data. Then, synthetic data and training data were used to fine-tune the models. Based on our experiments during the development phase, we decided to use the RoBERTa transformer model to estimate our predictions for the test dataset during the test phase.
We participated in the three subtasks. In task A, our system obtained a macro F1-score of 0.8298, ranking 43rd out of a total of 84 teams in the final ranking. The top system achieved a macro F1-score of 0.8746, while the lowest macro F1-score was 0.5029. About half of the systems achieved a macro F1-score below 0.83. In task B, our system ranked in the 45th position out of the 69 participating systems. Our macro F1-score was 0.5877, while the lowest and highest macro F1-scores were 0.229 and 0.7326, respectively. In task C, our team ranked in the 27th position out of the 63 participating systems. The lowest and highest macro F1-scores were 0.06 and 0.56, respectively. About half of the systems achieved a macro F1-score below 0.42, while our system had a macro F1-score of 0.44. Our systems, which ranked roughly in the middle of the three rankings, show modest results on the three subtasks. Our approach still has much room for improvement, especially in the two fine-grained classifications. The results showed that the use of synthetic data does not appear to provide a significant improvement in the performance of the transformers. All our code is available in the repository https://github.com/isegura/hulat_edos.

Background

The goal of this task is to detect sexist content. The task is composed of three subtasks: A, B and C. Task A is a binary classification task to distinguish between sexist and non-sexist texts. Tasks B and C aim at a finer-grained classification with four and eleven classes, respectively. The full dataset consists of 20,000 posts written in English. Half of the posts were taken from Reddit and the other half from Gab, a social network known for its far-right users. The dataset was divided into three splits with a ratio of 70:10:20. That is, 14,000 posts were used for training, 2,000 for development, and 4,000 for the final evaluation. We have studied the class distribution in each task. In Task A, a binary classification, the two classes are not balanced: the not-sexist class is the majority class. The same distribution is observed in the three datasets (see Fig. 3). We also plot the distribution of categories for task B (see Fig. 4). In the dataset, the label for the second task is the field 'label_category'. It contains four different categories: "1. threats, plans to harm and incitement", "2. derogation", "3. animosity", and "4. prejudiced discussions". The majority category is "2. derogation". To obtain the distribution of these categories, we removed those records that were annotated as 'not sexist'. The second class with a larger number of instances is "3. animosity". The other two are the minority classes, "4. prejudiced discussions" and "1. threats", which have a similar number of instances. The same distribution is observed in the three datasets. Regarding the distribution of the vectors in task C (see Fig. 5), the vector subcategory "2.1 descriptive attacks" is the majority class, while "3.4 condescending explanations or unwelcome advice" is the minority class. The vectors follow a distribution similar to that of their corresponding categories. For example, the vectors with the largest number of instances are usually the vectors of the category "2. derogation", followed by the vectors corresponding to the category "3. animosity". We also studied the length of the texts in the datasets (see Fig. 6). There are no significant differences between the three datasets. The mean number of tokens is around 23, and the maximum length is approximately 55 tokens.
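The following is a minimal sketch of the kind of exploratory analysis described above (class distributions and token-length densities per class). The CSV file name and the column names ('text', 'label_sexist', 'label_category') follow the released EDOS data but are assumptions on our part and should be checked against the actual files.

```python
# Sketch of the exploratory analysis described above: class distributions and
# token-length densities per class. File and column names are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("edos_labelled_aggregated.csv")  # hypothetical file name

# Class distribution for task A and task B.
print(df["label_sexist"].value_counts(normalize=True))
print(df[df["label_sexist"] == "sexist"]["label_category"].value_counts())

# Token length (simple whitespace tokenization) per class, as a density plot.
df["n_tokens"] = df["text"].str.split().str.len()
for label, group in df.groupby("label_sexist"):
    group["n_tokens"].plot(kind="density", label=label)
plt.xlabel("number of tokens")
plt.legend()
plt.show()
```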
We want to know if there are differences in the length of the texts between the two main classes: sexist and non-sexist. As the three datasets show a similar distribution, we created a density graph for the whole dataset (Fig. 1) that shows the distribution of sexist texts and non-sexist texts. Although sexist and non-sexist texts appear to have a very similar distribution of their lengths, we can observe that some sexist texts may be slightly longer than non-sexist texts. Figure 2 shows the length distribution of the texts for each category in task B. We can see that the texts classified as "4. prejudiced discussions" appear to be longer than the other texts. The category "1. threats, plans to harm and incitement" has the shortest texts. Indeed, the average length of the texts in the first category is around 22 tokens, while in the fourth category it is around 27 tokens. The other two categories, "2. derogation" and "3. animosity", show very similar distributions with an average length of 25 tokens. We also studied the length distribution of texts for each vector. As there are eleven vectors, it is very difficult to compare their distributions (see Fig. 7). For this reason, we created a density graph for the vectors of each category (see Appendix). All vectors have a very similar distribution of text length. Texts classified as '4.1 supporting mistreatment of individual women' or '4.2 supporting systemic discrimination against women as a group' tend to have the largest average length, between 27 and 30 tokens. The vector '2.1 descriptive attacks' has an average length of 26 tokens. The vector '1.2 incitement and encouragement of harm' has the smallest average length (around 22 tokens). The other vectors have an average length between 23 and 25 tokens. Therefore, there do not seem to be significant differences between the lengths of the texts of each vector.

3 System Overview

BERT (Devlin et al., 2019) is the most popular transformer model due to its excellent results in many NLP tasks. BERT is an encoder trained using two strategies: masked language modeling (MLM) and next sentence prediction (NSP). The multilingual version of BERT was pre-trained in more than one hundred languages using Wikipedia. DistilBERT (Sanh et al., 2019) is a smaller version of BERT, which can achieve similar results to BERT but with less training time. RoBERTa (Zhuang et al., 2021) is based on BERT. RoBERTa was pre-trained using additional data. Unlike BERT, RoBERTa does not use the next sentence prediction (NSP) strategy. Regarding the MLM strategy, some tokens are dynamically masked during pre-training. Another difference with BERT is that RoBERTa uses a byte-level BPE tokenizer, which has a larger vocabulary than BERT's (50k vs 30k). Therefore, RoBERTa has a larger vocabulary that can provide better results, but with an increase in complexity. XLNet (Yang et al., 2019) is an autoregressive model. That is, it was pre-trained to predict the next token for a given input sequence of tokens. XLNet does not use any masking strategy. Instead, it uses permutation language modeling, which can capture context by training an autoregressive model on all possible permutations of the words in a sentence. This allows it to create bidirectional contextualized representations of words. Like BERT, this model was trained with Wikipedia and BooksCorpus, but also with Giga5, ClueWeb 2012-B, and Common Crawl.
Data augmentation

Data augmentation (DA) aims to increase the training size by applying different transformations to the original dataset. For example, in computer vision, such modifications can be performed by cropping, flipping, changing colors, and rotating pictures. In NLP, these transformations include swapping tokens (but also characters or sentences), deletion or random insertion of tokens (but also characters or sentences), and back translation of texts between different languages. While those transformations are easy to implement in computer vision, they are challenging in NLP, because they can alter the grammatical structure of a text. An advantage of these techniques is that they help to enhance the diversity of the examples in the dataset. Moreover, they also help to avoid overfitting. Unfortunately, data augmentation does not always improve the results in NLP tasks. In this task, we used different data augmentation techniques (such as EDA (Wei and Zou, 2019) and the NLPAug library) to create synthetic data. EDA has been implemented in the textaugment library for Python. EDA uses four simple operations: Synonym Replacement, Random Insertion, Random Swap, and Random Deletion. The first operation randomly chooses n words in a sentence (which are not stopwords). Then, these words are replaced with synonyms from WordNet, a very large lexicon for English. Random Insertion chooses a random word (which is not a stopword); then, it finds a random synonym that is inserted at a random position in the sentence. The third operation, Random Swap, randomly chooses two words in the sentence and swaps their positions. The fourth operation, Random Deletion, randomly removes a word from a sentence. These operations can be repeated several times. NLPAug also provides an efficient implementation of DA techniques. In particular, NLPAug offers three types of augmentation: character-level augmentation, word-level augmentation, and sentence-level augmentation. At each of these levels, NLPAug provides all the operations described above, that is, synonym replacement, random deletion, random insertion, and swapping. Regarding synonym replacement, the most effective way is to use word embeddings to select the synonyms. This technique allows us to obtain a sentence with the same meaning but with different words. NLPAug uses non-contextual embeddings (such as GloVe, word2vec, etc.) or contextual embeddings (such as BERT, RoBERTa, etc.). In this work, we use the synonym replacement provided by EDA, which is based on WordNet. Thanks to NLPAug, we also generate new texts by using a contextualized language model such as BERT; a sketch of both routes is shown below.

Experimental Setup

During the development phase, we divided the training dataset into three splits: training, validation, and test, with a ratio of 70:10:20. These three splits were used to train and evaluate the different models and data augmentation techniques. During the development phase, these techniques were only applied to the training split. However, during the test phase, we used the full training dataset provided by the organizers to train our model. Moreover, we applied the data augmentation techniques to the full training dataset to obtain more synthetic data. The organizers published the real answers for the development dataset, so we could use the development dataset as our validation set to train our model and create the final predictions for each task.
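The following is a minimal sketch of the two augmentation routes described above: WordNet-based synonym replacement via textaugment's EDA, and contextual word substitution via nlpaug with a BERT model. The API calls follow the public documentation of both libraries as we understand it and should be verified against the installed versions.

```python
# Sketch of the two augmentation routes described above. API names follow
# the public documentation of textaugment and nlpaug; verify against the
# installed versions.
import nltk
from textaugment import EDA
import nlpaug.augmenter.word as naw

# textaugment's EDA needs the WordNet data.
nltk.download("wordnet")
nltk.download("omw-1.4")

text = "women do not belong in engineering"

# 1) EDA synonym replacement (WordNet-based).
eda = EDA()
print(eda.synonym_replacement(text))

# 2) Contextual word substitution with a masked language model.
aug = naw.ContextualWordEmbsAug(model_path="bert-base-uncased", action="substitute")
print(aug.augment(text))  # recent nlpaug versions return a list of strings
```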
Based on our results in the development phase, for tasks B and C we decided to use RoBERTa combined with data augmentation techniques to generate the final predictions. However, for task A, we only used RoBERTa, because the data augmentation techniques did not appear to improve the results for the binary classification.

Results

HULAT participated in the three subtasks. Below we present our results in each task.

Task A

As previously said, we fine-tuned a RoBERTa model using the full training dataset, without using synthetic data. Our system provided a macro F1-score of 0.8298, obtaining the 43rd position out of a total of 84 participating systems. The highest macro F1-score was 0.8746, while the lowest was 0.5029. About half of the systems achieved a macro F1-score below 0.83. Table 1 shows the results on the test dataset for task A. We evaluated all the combinations that we studied during the development phase. We evaluated both the uncased and the cased versions of BERT. BERT uncased shows better results than the cased version (more than one point of improvement). The use of data augmentation does not improve the results of the BERT model in either version, cased or uncased. DistilBERT obtains slightly lower results than BERT, though its training time is much shorter. Data augmentation helps to increase recall, but at the cost of precision; the improvement in F1 is not significant. There are hardly any differences between the results of the cased model and those obtained with the uncased version of DistilBERT. XLNet has very similar results to those obtained by the uncased version of BERT. The data augmentation techniques do not appear to improve its results. RoBERTa outperforms all previous approaches, with improvements in macro F1-score between 1 and 3 points. In particular, RoBERTa achieves better precision than DistilBERT and BERT. Regarding the results obtained with data augmentation, the use of synthetic data negatively affects the precision of RoBERTa. In sum, all the models show very close results, and data augmentation does not improve them; RoBERTa slightly outperforms the other models.

Task B

In this task, we fine-tuned the RoBERTa model using the full training dataset and the synthetic data created with the data augmentation techniques described in Section 3. Our system ranked in the 45th position out of the 69 participating systems. Our macro F1-score was 0.5877, while the lowest and highest macro F1-scores were 0.229 and 0.7326, respectively. Table 2 shows the results on the test dataset for task B. We evaluated all the combinations that we studied during the development phase. In task B, we again evaluated both the uncased and the cased versions of BERT. Although both versions obtain close results, the uncased version shows slightly better precision and recall than the cased one. For the cased version of BERT, data augmentation improves the precision (around one point) but significantly lowers the recall (more than three points). It also has a negative effect on the performance of the BERT uncased model. A minimal sketch of the fine-tuning pipeline used in all these experiments is shown below.
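This sketch uses the Hugging Face transformers Trainer API; the hyperparameters, label encoding and dataset construction are illustrative assumptions, not the settings reported in this paper.

```python
# Minimal sketch of fine-tuning RoBERTa for sexism classification with the
# Hugging Face Trainer API. Hyperparameters and data are illustrative.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import Dataset

texts = ["example post 1", "example post 2"]   # training + synthetic texts
labels = [0, 1]                                # e.g., 0 = not sexist, 1 = sexist

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)

ds = Dataset.from_dict({"text": texts, "label": labels})
ds = ds.map(lambda x: tokenizer(x["text"], truncation=True,
                                padding="max_length", max_length=64),
            batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=ds).train()
```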
While BERT and its simplified version, DistilBERT, show close results in task A, DistilBERT shows worse performance than BERT (around six points of macro F1-score) in task B, a four-class classification. Contrary to BERT, the cased version of DistilBERT is slightly superior to its uncased version; however, the results are so close that these differences are not statistically significant. The use of data augmentation shows an improvement of around five points in recall (in both versions of DistilBERT), but with a slight decrease in precision; in terms of macro F1-score, data augmentation obtains an improvement of two points. Therefore, unlike BERT, DistilBERT gains some improvement thanks to the use of data augmentation. XLNet outperforms DistilBERT, showing results similar to BERT's. As with BERT, data augmentation does not appear to help XLNet in classifying the four categories of sexism. Like BERT and XLNet, RoBERTa achieves a macro F1-score of 0.595. Data augmentation increases the recall, but with a significant decrease of the precision. However, RoBERTa with data augmentation obtained the best results on the development set during the development phase. For this reason, we decided to use this combination for our final submission in the test phase. Table 3 shows the results of RoBERTa with data augmentation for each category. Although the category '1. threats, plans to harm and incitement' has the lowest number of instances in the dataset (see Fig. 4), it shows the top F1 (0.624). The posts in this category are shorter than the posts in the rest of the categories (see Fig. 2). Moreover, an analysis of these texts shows that they usually use very violent vocabulary. Indeed, some of their most common words are: 'bitch', 'kill', 'rape', 'fuck', 'punch', 'beat', 'kick', 'hang', 'death', and 'slap'. The category with the lowest F1 is '4. prejudiced discussions', around 10 points below the F1 of the first category. The lower score may be due to the fact that this category has very few instances compared to the second (derogation) and third (animosity) categories (see Fig. 4). Moreover, its texts tend to be longer than the texts of the first category (threats) (see Fig. 2). The scarcity of examples in this category, together with the fact that its texts do not use aggressive vocabulary as in the first category, may make them very challenging to classify.

Task C

In task C, we used the same approach as for task B, that is, RoBERTa and data augmentation techniques. Our system obtained a macro F1-score of 0.4458, which ranked in the 27th position out of the 63 participating systems. The lowest and highest macro F1-scores were 0.06 and 0.56, respectively. About half of the systems achieved a macro F1-score below 0.42. Table 4 shows the results on the test dataset for task C. We evaluated all the combinations that we studied during the development phase. The cased version of BERT slightly outperforms the uncased version. Unlike tasks A and B, data augmentation techniques appear to have a positive effect on the results for task C. Thus, they obtain an improvement of more than 10 points for BERT uncased and eight points for the cased version. DistilBERT provides lower results than BERT. Both versions of DistilBERT, cased and uncased, show very close results. As with BERT, data augmentation improves the results.
XLNet outperforms BERT with an increase of around three points in macro F1-score when data augmentation is used. RoBERTa obtains the best scores, outperforming the other models. In addition, when data augmentation is used, the model obtains a significant improvement of 10 points in macro F1-score. In sum, RoBERTa trained with training and synthetic data is the best approach for task C. Table 5 shows the results of RoBERTa with data augmentation for each vector. The model could not classify any instance of the vector '3.4 condescending explanations or unwelcome advice', which has only 14 instances in the test dataset and 47 in the training dataset. Although our model was trained with synthetic examples (in particular, 94 for this label), the total number of examples for this vector is still very scarce. Although the vector '1.2 incitement and encouragement of harm' is not one of the vectors with the largest number of instances, it shows the best F1-score (0.657). As previously discussed for category 1, the texts classified with this vector tend to be shorter and include very violent words such as 'bitch', 'fuck', 'kill', or 'kick'. The vector '3.1 casual use of gendered slurs, profanities, and insults' achieves the second highest F1-score (0.646); it is the third vector with the highest number of instances in the dataset. Regarding the other vectors, we observe that the fewer instances a vector has, the lower the F1-score it obtains. When RoBERTa is trained without synthetic data, it cannot classify any instance of the three vectors 1.1, 3.3 and 3.4. Therefore, data augmentation techniques improve the results for task C.

Conclusion

Our team participated in the three tasks with an approach based on RoBERTa fine-tuned with training data and synthetic data created by data augmentation techniques. This approach shows very modest results on the three tasks (our systems rank approximately in the middle of the three rankings). We still have much room for improvement, especially in the two fine-grained classifications. While data augmentation does not achieve a significant improvement in tasks A and B, it has a positive effect on the results in task C. As future work, we plan to extend our research on data augmentation techniques to augment the training data. For example, we plan to use back translation (Sugiyama and Yoshinaga, 2019). In addition, we will exploit other datasets for the detection of sexist content, such as the EXIST dataset (Rodríguez-Sánchez et al., 2021) or MAMI (Fersini et al., 2022), to also approach the task from two different scenarios: multilingual and multimodal.

A Appendix

In this section, we provide supplementary material for our research. Figure 3 shows the distribution of the classes sexist and not sexist in the three datasets. Figure 4 shows the distribution of the four categories in task B. Figure 5 shows the distribution of the eleven vectors in task C. Figure 6 shows the distribution of text length in each dataset. Figure 7 is a density graph showing the distribution of text length for each vector in task C. In addition, Figures 8-11 show the distribution of text length for the vectors of each of the four categories: "1. threats, plans to harm and incitement", "2. derogation", "3. animosity", and "4. prejudiced discussions".

Figure 1: Density graph of the length of texts for the classes sexist and not sexist.
Figure 2: Density graph of the length of texts for each category (task B).
Figure 3: Class distribution for task A.
Figure 4: Class distribution for task B.
Figure 5: Class distribution for task C.
Figure 6: Distribution of text length (number of tokens) for each dataset.
Figure 7: Density graph of the length of texts for each vector in task C.
Figure 8: Density graph of the length of texts for vectors of category 1.
Figure 9: Density graph of the length of texts for vectors of category 2.
Figure 10: Density graph of the length of texts for vectors of category 3.
Figure 11: Density graph of the length of texts for vectors of category 4.
Table 1: Results for task A on the final test dataset.
Table 2: Results for task B on the final test dataset.
Table 3: Results provided by RoBERTa and data augmentation on the test dataset (task B) for the categories: 1. threats, plans to harm and incitement; 2. derogation; 3. animosity; and 4. prejudiced discussions.
Table 4: Results for task C on the final test dataset.

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In The 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing @ NeurIPS 2019.
Amane Sugiyama and Naoki Yoshinaga. 2019. Data augmentation using back-translation for context-aware neural machine translation. In Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019), pages 35-44.
Jason Wei and Kai Zou. 2019. EDA: Easy data augmentation techniques for boosting performance on text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6383-6389, Hong Kong, China. Association for Computational Linguistics.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems, 32.
Liu Zhuang, Lin Wayne, Shi Ya, and Zhao Jun. 2021. A robustly optimized BERT pre-training approach with post-training. In Proceedings of the 20th Chinese National Conference on Computational Linguistics, pages 1218-1227, Huhhot, China. Chinese Information Processing Society of China.
Validation and Improvement of NeQuick Topside Ionospheric Formulation Using COSMIC/FORMOSAT-3 Data

We examine systematic differences between topside electron density measurements and different topside model formulations including α-Chapman, NeQuick, and an improved NeQuick (NeQuick-corr) topside formulation, recently proposed. The global topside electron density data set used was extracted from Radio Occultation (RO) topside electron density profiles on board Low-Earth Orbit satellites from the COSMIC/FORMOSAT-3 (Constellation Observing System for Meteorology, Ionosphere, and Climate and Formosa Satellite) mission. By using RO topside electron density measurements collocated with digisonde stations, we ensure that our investigation is based on two independent data sets (RO and digisondes). A subset of these profiles, with matched (within 5%) peak RO-digisonde characteristics (foF2 and hmF2), is also exploited. This subset is used to extend the investigation on the basis of a higher quality validation data set. The comparison demonstrates that α-Chapman and NeQuick-corr underestimate, whereas NeQuick overestimates, COSMIC topside electron density observations. The main outcome of this study is the significant NeQuick topside representation improvement that can be achieved near the F region peak if the key parameter g, which controls the change of scale height with respect to altitude, is optimized to a value of 0.15 (compared to the currently adopted value of 0.125). The NeQuick-corr topside formulation using the optimized g value of 0.15 outperforms all other topside formulations.

The NeQuick topside formulation has also been adopted as one of three options to model the electron density in the topside ionosphere in the frame of the International Reference Ionosphere (IRI)-2016 model (Bilitza et al., 2017). The other two options are IRI-2001 (Bilitza, 1990) and IRI01-corr (Bilitza, 2004). NeQuick is considered the most reliable of the three options (Buresova et al., 2009; Strangeways et al., 2009), but according to past and recent studies there is still room for improvement (Bilitza, 2009; Bilitza et al., 2006; Pignalberi et al., 2016). The NeQuick topside model is based on an Epstein function (Nava et al., 2008), as shown in Equation 1: the electron density profile Ne(h) is defined as a function of hmF2, NmF2 and the effective scale height Hm as

Ne(h) = 4 NmF2 exp(z) / [1 + exp(z)]², with z = (h − hmF2) / Hm.     (Equation 1)

The scale height in the NeQuick topside formulation is described by three parameters: the scale height at the peak (H0), a parameter (r) which restricts the scale height at higher altitudes, and the altitude gradient of the scale height (g),

Hm(h) = H0 [1 + r g (h − hmF2) / (r H0 + g (h − hmF2))].     (Equation 2)

Values of r = 100 and g = 0.125 are adopted in the NeQuick topside formulation, while H0 can be estimated from Equation 3 as a function of foF2, the peak critical frequency; NmF2, the peak electron density; hmF2, the height corresponding to NmF2; and R12, the 12-month smoothed sunspot number.
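Equations 1 and 2 translate directly into code. The following is a minimal sketch of the NeQuick topside profile with its height-dependent effective scale height; H0 is taken as a free input here (the Equation 3 estimate from foF2, hmF2 and R12 is not reproduced), and the peak parameters used are illustrative values.

```python
# Sketch of the NeQuick topside (Equations 1 and 2): an Epstein layer with a
# height-dependent effective scale height. H0 is an input; the Equation 3
# estimate from foF2, hmF2 and R12 is not reproduced here.
import numpy as np

def nequick_scale_height(h, hmF2, H0, r=100.0, g=0.125):
    """Effective scale height Hm(h) above the F2 peak (Equation 2)."""
    dh = h - hmF2
    return H0 * (1.0 + r * g * dh / (r * H0 + g * dh))

def nequick_topside(h, NmF2, hmF2, H0, r=100.0, g=0.125):
    """Topside electron density Ne(h) from the Epstein formulation (Equation 1)."""
    z = (h - hmF2) / nequick_scale_height(h, hmF2, H0, r, g)
    e = np.exp(z)
    return 4.0 * NmF2 * e / (1.0 + e) ** 2

h = np.arange(300.0, 800.0, 10.0)                          # altitude in km
ne = nequick_topside(h, NmF2=1e12, hmF2=300.0, H0=60.0)    # illustrative values
print(ne[:3])  # Ne(hmF2) = NmF2 at the peak, decaying above it
```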
An improvement to the NeQuick topside formulation (NeQuick-corr; Pezzopane & Pignalberi, 2019) was recently proposed. This NeQuick-corr topside formulation is based on H0 grids as a function of hmF2 and foF2, derived from electron density values measured by the Langmuir probes on board the Swarm satellites (A, B, and C) and generated by applying the IRI-UP (Update) method (Pignalberi et al., 2018). According to NeQuick-corr, H0 is estimated using H0,AC and H0,B at two different altitudes for each pair of hmF2 and foF2 values, in accordance with Equations 5 and 6.

Data

The topside COSMIC RO electron density values were used as a COMPARISON data set, and the subset of full topside electron density profiles was used as a VALIDATION data set in this study. These data sets were extracted from RO electron density profiles downloaded from the COSMIC Data Analysis and Archive Center (CDAAC) (https://cdaac-www.cosmic.ucar.edu/cdaac/products.html). Both data sets were selected under a maximum separation requirement (<1° in latitude and longitude) from the corresponding digisonde station location, in order to control the quality of the assembled RO data set based on a minimum colocation separation. Figure 1 shows a COSMIC RO profile ground projection with respect to latitude and longitude, where the red part of the profile identifies the bottomside and the blue part the topside profile projection on the ground, respectively. It also shows the nearest digisonde station (Nicosia station in this example) and the minimum (perpendicular) distance between the digisonde station and the topside profile projection. By ensuring a maximum separation requirement (<1° in latitude and longitude) between each RO topside measurement and the digisonde location, we could achieve sufficient colocation criteria between these two data sets, and form a COMPARISON RO data set that we subsequently used in our study. Unrealistic RO profiles with excessive fluctuations in the topside electron density or with hmF2 values out of a realistic range (150 < hmF2 < 450 km) were discarded. In total, 29,063 topside electron density values corresponding to the period 2006-2018, encapsulating most of the COSMIC mission time span, were extracted from these profiles to assemble the so-called COMPARISON data set. The autoscaled digisonde hmF2 and NmF2 data corresponding to digisonde profile measurements within a time interval of less than ±15 min from the RO measurements were downloaded from the Digital Ionogram Data Base (DIDBase, http://giro.uml.edu/didbase/scaled.php). To extend the investigation, we also focused on a more reliable VALIDATION full topside profile data set, assembled as a subset of the COMPARISON data set under the additional requirement of a maximum difference at the peak values (<5% difference in hmF2 and foF2), in accordance with a previous study by Shaikh et al. (2018). This VALIDATION data set was composed of 3,433 of the 29,063 COMPARISON data set cases. The selected digisonde stations, their locations (latitude and longitude) and the number of nearest selected COSMIC topside values and profiles forming the COMPARISON and VALIDATION data sets are shown in Table 1. Figures 2a and 2b show hmF2 and foF2 scatter plots, extracted from the corresponding digisonde and COSMIC electron density profiles forming the topside COMPARISON data set. Both plots indicate a significant correlation between the two data sets in both hmF2 and foF2, with correlation coefficients of 0.68 and 0.82, respectively. The scatter plots for hmF2 and foF2 recorded by digisondes and extracted from RO electron density profiles considered in the more reliable topside VALIDATION subset are plotted in Figures 3a and 3b, clearly demonstrating very high correlation coefficients of 0.98 and 0.99, respectively.
The α-Chapman topside electron density values were extracted from profiles calculated in terms of the autoscaled hmF2 and foF2 and a constant scale height H, through the α-Chapman layer function

Ne(h) = NmF2 exp{0.5 [1 − z − exp(−z)]}, with z = (h − hmF2) / H.

To compare the full topside profiles recorded by the COSMIC RO satellites and modeled by α-Chapman, NeQuick, NeQuick-corr, and New g_NeQuick-corr, a relative difference (as a function of altitude beyond the peak) was calculated as

RD_RO_X(htop) = [Ne_RO(htop) − Ne_Y(htop)] / Ne_RO(htop),

with X = Chap, NeQ, NeQcorr, gNeQcorr for Y = α-Chapman, NeQuick, NeQuick-corr, and New g_NeQuick-corr, respectively, and where htop denotes the peak-relative altitude in km. To investigate the overall performance of the various topside formulations in terms of the full profile, a Normalized Root Mean Square Error (NRMSE) between the RO and modeled electron densities was calculated for each of the 3,433 profiles of the VALIDATION subset (Equation 12), where the subscript RO refers to COSMIC measurements, 'modeled' to either α-Chapman, NeQuick, NeQuick-corr or New g_NeQuick-corr, and N is the total number of electron density profile points. The scale height Hm was calculated for COSMIC, α-Chapman, NeQuick and NeQuick-corr by inverting the Epstein equation, in accordance with Pignalberi, Pezzopane, Nava, et al. (2020) and Pignalberi, Pezzopane, Themens, et al. (2020). Setting u = exp[(h − hmF2)/Hm], the Epstein function becomes a quadratic equation in u,

Ne u² + (2 Ne − 4 NmF2) u + Ne = 0,

whose solution, by the quadratic (Sridhar Acharya) formula, reduces to

u = [2 NmF2 − Ne ± 2 (NmF2² − NmF2 Ne)^(1/2)] / Ne,

so that

Hm = (h − hmF2) / ln(u).     (Equation 18)
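A minimal sketch of these comparison quantities follows. The NRMSE here is normalized by the mean RO density, which is one common choice; the exact normalization of the paper's Equation 12 may differ, and the densities used are illustrative placeholders.

```python
# Sketch of the comparison metrics described above: the relative difference,
# an NRMSE (normalized here by the mean RO density; the paper's Equation 12
# may differ), and the scale height Hm from the Epstein inversion (Eq. 18).
import numpy as np

def relative_difference(ne_ro, ne_model):
    """RD = (Ne_RO - Ne_model) / Ne_RO, per profile point."""
    return (ne_ro - ne_model) / ne_ro

def nrmse(ne_ro, ne_model):
    """Root mean square residual normalized by the mean RO density."""
    return np.sqrt(np.mean((ne_ro - ne_model) ** 2)) / np.mean(ne_ro)

def epstein_scale_height(ne, h, NmF2, hmF2):
    """Hm from the quadratic inversion of the Epstein function (Equation 18).
    The '+' root is taken so that u > 1 (Ne < NmF2 above the peak)."""
    u = (2 * NmF2 - ne + 2 * np.sqrt(NmF2 ** 2 - NmF2 * ne)) / ne
    return (h - hmF2) / np.log(u)

ne_ro = np.array([8e11, 5e11, 3e11])        # illustrative densities (m^-3)
ne_mod = np.array([7e11, 4.5e11, 3.2e11])
print(relative_difference(ne_ro, ne_mod), nrmse(ne_ro, ne_mod))
print(epstein_scale_height(ne=5e11, h=500.0, NmF2=1e12, hmF2=300.0))  # ~113 km
```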
Figure 6a shows the binned scatter plot between peak-relative altitude (htop) and relative difference (RD_RO_NeQcorr) between RO observations and NeQuick-corr estimates. It shows that NeQuick-corr underestimates RO observations and that this underestimation increases with htop. NeQuick-corr is equivalent to NeQuick, but the value of H0 is deduced from the H0,AC and H0,B grids following Equations 5 and 6, as proposed by Pezzopane and Pignalberi (2019), and the scale height (as shown in Figure 6b) is calculated with Equation 18. As is clear from Figures 5a and 6a, for the majority of cases NeQuick exhibits an approximate relative difference error in the range −0.2 to 0.4, whereas for NeQuick-corr the error lies in the range 0-0.35, which demonstrates that NeQuick-corr outperforms NeQuick. The scale height behavior of the RO observations is shown in Figure 7, as calculated from the COSMIC measurements according to Equation 18. A careful inspection of this plot and its comparison with Figures 5b and 6b reveals that there seems to be a notable difference in slope for the RO data set, with a higher value of g necessary to match this increased slope according to Equation 7. The above results clearly indicate that the scale height calculated using the different H0 formulations is not able to match the scale height calculated from RO observations, and that further improvement could be achieved with more appropriate values of r and g, as indicated by past studies (Themens et al., 2018). To explore this possibility, we used least squares to optimize the values of g and r, keeping H0 constant, using NeQuick-corr, which exhibits the lower relative difference with respect to the RO data set (RD_RO_NeQcorr). The value of r was varied with a step size of one and g with a step size of 0.01. However, as the RO data were mostly limited to altitudes below 800 km, and r controls the scale height at higher altitudes, the optimal value of r remained constant during this optimization (r = 100). Pignalberi, Pezzopane, Nava, et al. (2020) and Pignalberi, Pezzopane, Themens, et al. (2020) demonstrated that the effect of r on the scale height becomes significant only for altitudes much higher than the F2 peak. Figure 8 shows the variation of r and g with respect to the RMSE calculated between RO observations and NeQuick-corr estimates. The RO and NeQuick-corr comparison showed that the RMSE minimizes for r = 100 and an optimized value of g = 0.15. To evaluate the relative difference performance with this new optimized value of g, the Epstein equation with r = 100 and g = 0.15 was used to estimate the electron density (New g _NeQuick-corr), with a scale height calculated according to NeQuick-corr using H0 extracted from the H0,AC and H0,B grids. Figure 9a shows the binned scatter plot between peak-relative altitude and the relative difference (RD_RO_gNeQcorr) between RO observations and New g _NeQuick-corr estimates. It shows that RD_RO_gNeQcorr is almost constant with htop and is confined within the range −0.1 to 0.2, which is narrower than in the other cases shown (Figures 4a, 5a, and 6a). Therefore, it can be stated that the New g _NeQuick-corr method performs better than the other three topside modeling approaches for this particular COSMIC RO data set. The scale height (calculated from Equation 18) for the New g _NeQuick-corr method is shown in Figure 9b.
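A minimal sketch of this optimization, assuming the Epstein form of the previous sketch: a one-dimensional scan over g (r fixed at 100, step 0.01) that picks the RMSE minimum.

```python
# Hedged sketch of the g grid search against RO densities.
import numpy as np

def epstein(h, hmF2, NmF2, H0, r=100.0, g=0.125):
    dh = h - hmF2
    z = dh / (H0 * (1.0 + r * g * dh / (r * H0 + g * dh)))
    ez = np.exp(z)
    return 4.0 * NmF2 * ez / (1.0 + ez) ** 2

def optimize_g(h, ne_ro, hmF2, NmF2, H0):
    g_grid = np.arange(0.05, 0.30, 0.01)        # step size 0.01, r held at 100
    errs = [np.sqrt(np.mean((ne_ro - epstein(h, hmF2, NmF2, H0, g=g)) ** 2))
            for g in g_grid]
    return g_grid[int(np.argmin(errs))]

# Synthetic check: data generated with g = 0.15 should recover g ~ 0.15
h = np.arange(320.0, 800.0, 10.0)
print(optimize_g(h, epstein(h, 300.0, 1e12, 50.0, g=0.15), 300.0, 1e12, 50.0))
```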
Analysis Based on the More Reliable VALIDATION Data Set

As already mentioned, this more reliable VALIDATION data set was a subset of the COMPARISON data set with the additional requirement of a maximum difference in hmF2 and foF2 (<5%) at the profile peak. We stress, though, that the VALIDATION data set comprises full topside electron density profiles, whereas the COMPARISON data set analyzed in the previous sections comprises electron density values at various topside altitudes corresponding to the nearest distance to a digisonde station. Figures 10a and 10b show the binned scatter plots between peak-relative altitude (htop = h − hmF2) and relative difference (RD_RO_Chap(h)) between RO and α-Chapman profiles, for h − hmF2 > 100 km and h − hmF2 < 100 km, respectively. The color bar represents the counts in each bin. As discussed in Section 3.1, α-Chapman underestimates RO observations and this underestimation increases with htop, which can also be observed in Figure 10a as RD_RO_Chap(h) increases with htop. Figure 10b shows that up to 100 km above hmF2 the average RD_RO_Chap(h) fluctuates around zero. This is expected, as the α-Chapman scale height is constant around the peak. Figures 11a and 11b show the binned scatter plots between peak-relative altitude (htop) and relative difference (RD_RO_NeQ(h)) between the RO profile and the NeQuick estimated profile, for h − hmF2 > 100 km and h − hmF2 < 100 km, respectively. Figure 11a shows that NeQuick overestimates RO (−0.5 to 0 for the majority of profiles) up to approximately htop = 300 km, and then its behavior reverses with a definite underestimation (within 0-0.2 for most profiles). These results are similar to the findings discussed in Section 3.1, indicating that NeQuick clearly outperforms α-Chapman. Figure 11b shows that up to htop = 100 km the average RD_RO_NeQ(h) fluctuates around 0, which suggests that NeQuick also exhibits an approximately constant scale height around the peak. Figures 12a and 12b show the binned scatter plots between peak-relative altitude (htop) and relative difference (RD_RO_NeQcorr(h)) between RO and NeQuick-corr, for h − hmF2 > 100 km and h − hmF2 < 100 km, respectively. Figure 12a shows that NeQuick-corr underestimates RO and that RD_RO_NeQcorr(h) increases (0-0.5) with htop. Unlike NeQuick, the behavior of NeQuick-corr does not reverse with htop; rather, RD_RO_NeQcorr(h) saturates for htop > 300 km. Figure 12b shows that up to htop = 100 km the average RD_RO_NeQcorr(h) fluctuates around zero, suggesting that, similar to α-Chapman and NeQuick, NeQuick-corr also exhibits a nearly constant scale height around the peak. The NRMSE between RO and the three topside formulations was also calculated (using Equation 12). Figure 13a shows the scatter plot between the NRMSE values for NeQuick-corr (with respect to COSMIC) on the x axis and the NRMSE values for α-Chapman (with respect to COSMIC) on the y axis. For the majority of cases NRMSE_α-Chapman exceeds NRMSE_NeQuick-corr, which means NeQuick-corr performs better than α-Chapman.
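A per-profile NRMSE comparison in the spirit of Figure 13 could look like the sketch below; normalizing the root-mean-square difference by the mean observed density is an assumption on our part, since Equation 12 itself is not reproduced here.

```python
# Per-profile NRMSE and a simple win count between two model formulations.
import numpy as np

def nrmse(ne_ro, ne_model):
    return np.sqrt(np.mean((ne_ro - ne_model) ** 2)) / np.mean(ne_ro)

def count_wins(profiles):
    """profiles: iterable of (ne_ro, ne_model_a, ne_model_b) arrays.
    Returns how often model A beats model B in NRMSE."""
    return sum(nrmse(ro, a) < nrmse(ro, b) for ro, a, b in profiles)

ro = np.array([1.0, 0.8, 0.5]); a = ro * 1.05; b = ro * 1.2
print(count_wins([(ro, a, b)]))   # 1: model A is closer for this toy profile
```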
Figure 13b shows the scatter plot between NRMSE_NeQuick-corr (with respect to RO) on the x axis and NRMSE_NeQuick (with respect to RO) on the y axis for each individual matched peak profile. It shows that NRMSE_NeQuick-corr is lower for nearly half the cases (1,803 out of 3,433) and NRMSE_NeQuick is lower for the rest (1,640 out of 3,433), but for the majority of cases NRMSE_NeQuick-corr is more tightly bounded (from 0 to 0.5), whereas NRMSE_NeQuick extends from 0 up to 0.8. Therefore, we can conclude that NeQuick-corr is superior to NeQuick for representing the topside ionosphere, based on this particular RO data set. Klipp et al. (2020) recently applied the NeQuick-corr method to compare the ionospheric total electron content from ionosondes with the International GNSS Service vertical total electron content and reported that the error was reduced by 27%. The optimized value of g = 0.15 calculated in Section 3.1 (with a value of r = 100) was also tested using the VALIDATION data set. Figures 14a and 14b show the binned scatter plots between peak-relative altitude (htop) and relative difference (RD_RO_gNeQcorr(h)) between COSMIC and New g _NeQuick-corr, for h − hmF2 > 100 km and h − hmF2 < 100 km, respectively. Figure 14a clearly shows that RD_RO_gNeQcorr(h) is almost constant with respect to htop and is confined within the range −0.2 to 0.2. RD_RO_gNeQcorr(h) is also almost 0 for h − hmF2 < 100 km, as shown in Figure 14b. By comparing Figures 10-12 and 14, it is clear that New g _NeQuick-corr (which is basically NeQuick-corr with a value of g = 0.15) outperforms all the other topside formulations for the high-quality VALIDATION data set as well.

Topside Scale Height Linear Variation and Validation of the Optimized Value of g = 0.15 Using the VALIDATION Data

As discussed in Sections 3.1 and 3.2, the behavior of the topside scale height is expected to be linear. To verify this for all 3,433 VALIDATION RO profiles, the scale height was calculated using Equation 18. The scale height of each profile was fitted under a linear approximation, as shown in Figure 15a, and the corresponding electron density profile was subsequently calculated based on the estimated g value providing the best fit. Figure 15b shows the relative difference between the measured and modeled electron density (using the linearly fitted scale height), with the error within 5% for most of the profile points. This verifies the linear scale height variation hypothesis up to 500 km above hmF2 (Pignalberi, Pezzopane, Nava, et al., 2020; Pignalberi, Pezzopane, Themens, et al., 2020). Figure 16 shows the variation of g (calculated from Equation 7) with respect to the RMSE between RO and the linearly-fitted-scale-height electron density profiles using the VALIDATION data set. It shows that for the majority of the profiles a value of g = 0.15 (±0.015) minimizes the RMSE. As discussed in Sections 3.1 and 3.2, for the optimum value of g = 0.15 the relative difference between RO and NeQuick-corr minimizes and exhibits the best performance among all four topside formulations tested on both the COMPARISON and VALIDATION data sets, as demonstrated by RD_RO_gNeQcorr (Figure 9a) and RD_RO_gNeQcorr(h) (Figure 14a), respectively, but also by the histograms shown in Figures 17a and 17b.
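The per-profile linear fit just described reduces to an ordinary least-squares line in peak-relative altitude, whose slope estimates g (Equation 7). A minimal sketch with synthetic data:

```python
# Linear scale-height fit per profile: Hm = H_peak + g * htop.
import numpy as np

def fit_g(htop, Hm):
    """Return (g, H_peak) from a least-squares linear fit."""
    g, h_peak = np.polyfit(htop, Hm, 1)
    return g, h_peak

htop = np.arange(0.0, 500.0, 25.0)                            # km above hmF2
Hm = 45.0 + 0.15 * htop + np.random.normal(0, 1, htop.size)   # synthetic data
print(fit_g(htop, Hm)[0])                                     # ~0.15
```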
Themens et al. (2018), based on different data sets, discussed that the NeQuick option can be improved over upper midlatitude and high latitude regions by adjusting the r and g values to r = 20 and g = 0.2024. Another study by Themens et al. (2014) showed that the NeQuick parameterization does not adequately represent the topside thickness during the solar minimum between cycles 23 and 24. Our study has demonstrated a different optimum g value that is more suitable for midlatitudes. As shown in Figure 18, which depicts g-dependent histograms for specific high-latitude stations (a, b, and c) in the northern hemisphere, midlatitude stations (d, e, and f) in the northern hemisphere, and midlatitude stations (g, h, and i) in the southern hemisphere, we can verify that g = 0.15 is apparently the optimum value irrespective of the digisonde location. Considering the results from all 44 stations, however, we can also observe a trend of increasing optimum g values as we move toward higher latitudes. This is evident from Figure 19 and is in agreement with recent evidence by Pignalberi, Pezzopane, Nava, et al. (2020) and Pignalberi, Pezzopane, Themens, et al. (2020) that the gradient of the modeled topside scale height (which is equivalent to g according to Equation 7) exhibits a distinct global spatial variation pattern with a clear increasing trend as a function of latitude.

Conclusion

This study was primarily based on the comparison of various topside formulation methods (α-Chapman, NeQuick, and NeQuick-corr) with an extended data set of COSMIC RO observations. Through a data set comprising 29,063 COSMIC RO topside electron density measurements in the vicinity of 44 digisonde stations, we were able to identify particular weaknesses of these formulations and to solidify the linear scale height variation hypothesis (up to several hundred km above hmF2) as the most appropriate option to describe the topside electron density variation with height. Based on this comparison, we propose that a new value of g = 0.15 (as opposed to the currently adopted value of 0.125) be adopted to ensure a superior NeQuick-corr topside representation. To validate this new optimum g value in the frame of the NeQuick-corr topside formulation, a more reliable, higher quality RO topside data set of 3,433 profiles with matched (within 5%) peak RO-digisonde foF2 and hmF2 characteristics was used. This enabled us to validate our results and to conclude that, despite an optimum global value of g = 0.15, there is a clearly marked spatial feature according to which the optimum g value increases with increasing latitude. This could be the focus of future studies aiming to improve the topside electron density representation in operational models. The single-frequency GNSS correction algorithm (NeQuick-G) is an example of such an operational model. Despite the emerging trend toward the adoption of dual-frequency chipsets (able to overcome the dispersive ionospheric impact in the positioning error budget), a vast array of future applications (such as Internet of Things (IoT) device control) will continue to demand single-frequency ionospheric correction, for which the topside ionospheric contribution to TEC is significant.
Data Availability Statement

The topside COSMIC RO electron density values used as the COMPARISON data set and the full topside electron density profiles subset used as the VALIDATION data set in this study were extracted from RO electron density profiles downloaded from the COSMIC data analysis and archive center (CDAAC) (https://cdaac-www.cosmic.ucar.edu/cdaac/products.html). The autoscaled digisonde hmF2 and NmF2 data corresponding to the digisonde profile measurements within a time interval of less than ±15 min from the RO measurements were downloaded from the Digital Ionogram Data Base (DIDBase, http://giro.uml.edu/didbase/scaled.php). The corresponding NeQuick values were estimated at the corresponding COSMIC topside electron density altitudes using the FORTRAN IRI-2016 source code, available at http://irimodel.org/, by ingesting the autoscaled hmF2 and foF2 values. These hmF2 and foF2 values were also used to calculate H0 using the H0,AC and H0,B grids (downloaded from the supplementary data of Pezzopane and Pignalberi (2019)) in order to calculate the NeQuick-corr profiles.

Figure 19. Binned scatter plot of the variation of parameter g with respect to latitude.
2021-05-05T00:09:51.659Z
2021-03-10T00:00:00.000
{ "year": 2021, "sha1": "28982eedf95fcc27cf82c5c201a0fa75185b43e3", "oa_license": "CCBY", "oa_url": "https://zenodo.org/record/4889516/files/Validation%20and%20Improvement%20of%20NeQuick%20Topside.pdf", "oa_status": "GREEN", "pdf_src": "Wiley", "pdf_hash": "d7b13f999fb6aa1e14a0dc9059aff663a88d8ed3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Environmental Science" ] }
252450868
pes2o/s2orc
v3-fos-license
Identification and Characterization of a Novel Cold-Adapted GH15 Family Trehalase from the Psychrotolerant Microbacterium phyllosphaerae LW106

Psychrophiles inhabiting various cold environments are regarded as having evolved diverse physiological and molecular strategies, such as the accumulation of trehalose, to alleviate cold stress. To investigate the possible contributions of trehalose metabolism-related enzymes to cold adaptation in psychrotrophic bacteria and to enrich the resource bank of trehalose-hydrolyzing enzymes, a novel cold-adapted GH15 GA-like trehalase (MpTre15A) from the psychrotolerant Microbacterium phyllosphaerae LW106, isolated from glacier sediments, was cloned and characterized. The recombinant MpTre15A from M. phyllosphaerae LW106 was expressed and purified in Escherichia coli BL21(DE3). The purified MpTre15A functioned as a hexamer and displayed maximal activity at pH 5.0 and 50 °C. A substrate specificity assay proved that MpTre15A only showed hydrolytic activity toward α,α-trehalose. Site-directed mutagenesis verified Glu392 and Glu557 as the key catalytic sites of MpTre15A. The kcat and kcat/Km values of MpTre15A at 4 °C (104.50 s−1 and 1.6 s−1 mM−1, respectively) were comparable to those observed for thermophilic GH15 trehalases at 50 °C, revealing its typical cold-adaptability. MpTre15A showed a trehalose conversion rate of 100% and 99.4% after 10 min and 15 min of incubation at 50 °C and 37 °C, respectively. In conclusion, this novel cold-adapted α,α-trehalase MpTre15A showed potential for application in developing therapeutic enzymes, enzyme-based biosensors, and enzyme additives for the fermentation industry.

Introduction

Trehalose is a stable disaccharide consisting of two glucose units linked by an α,α-(1,1) glycosidic bond; it is ubiquitous in soils and a prominent metabolite in bacteria, yeast, fungi, insects, invertebrates, and plants [1,2]. It plays multiple roles in various microorganisms, including acting as a source of glucose and/or energy [3], protecting proteins and membranes against various stress conditions such as dehydration, heat, cold, and oxygen radicals [1,4], regulating glucose metabolism [5], and serving as an essential component of various cell wall glycolipids in some mycobacterial species [6]. Trehalose synthesis is induced upon exposure of Escherichia coli to cold and is essential for viability at low temperatures [7]. The yeast Saccharomyces cerevisiae also accumulates trehalose during heat shock to protect its proteome against thermal denaturation and aggregation [8]. This disaccharide is expected to be a valuable ingredient in the pharmaceutical, food, and cosmetic industries [1,4,9].

Bacteria Cultivation

Tryptic soy broth (1/4 TSB) was used to propagate the cells of the M. phyllosphaerae LW106 strain. To investigate the effects of various sugars on the growth of M. phyllosphaerae LW106, the bacterial cells were grown in 1/4 TSB liquid medium containing glucose, lactose, sucrose, maltose, or trehalose at a final concentration of 3.0 g/L, respectively, with shaking (200 rpm) for aeration at 16 °C. Optical density at 600 nm (OD600) was measured using a Multiskan FC microplate reader (Thermo). All growth experiments were conducted in triplicate and average values are shown. Luria-Bertani (LB) medium (pH 7.2) was used to cultivate the E. coli cells, shaking at 200 rpm and 37 °C for 12 h before protein expression. Ampicillin (50 µg/mL) was added to the media as needed.
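A minimal sketch of how such triplicate OD600 readings can be reduced to an exponential-phase doubling time; all numbers below are made up for illustration, not measurements from this study.

```python
# Doubling time from the slope of ln(OD600) versus time in the log phase.
import numpy as np

t_h = np.array([0, 6, 12, 18, 24], dtype=float)            # time in hours
od = np.array([[0.05, 0.09, 0.18, 0.35, 0.60],             # replicate 1
               [0.05, 0.10, 0.19, 0.33, 0.58],             # replicate 2
               [0.06, 0.09, 0.17, 0.36, 0.62]]).mean(0)    # averaged triplicates

mu, _ = np.polyfit(t_h[:4], np.log(od[:4]), 1)             # growth rate (1/h)
print(np.log(2) / mu)                                      # doubling time (h)
```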
Sequence, Phylogenetic, and Modeling Analysis

A similarity search for MpTre15A was performed using the BLASTp algorithm against the National Center for Biotechnology Information (NCBI) database (https://blast.ncbi.nlm.nih.gov/Blast.cgi, accessed on 14 August 2022). The conserved domains and the GH family classification were predicted using the CDD online tool (http://www.ncbi.nlm.nih.gov/Structure/cdd/cdd.shtml, accessed on 14 August 2022). Computer-assisted protein sequence analysis was performed using Clustal W version 2.0, and the secondary structure elements and key catalytic residues based on the alignments were depicted using the online tool ESPript version 3 [26]. The reference sequences selected for the phylogenetic analysis were retrieved from the NCBI database and aligned using Clustal W. The phylogenetic tree for MpTre15A was constructed using the neighbor-joining method with the bootstrap phylogeny test (1,000 replications) in the MEGA7 program.

Cloning, Mutant Plasmid Construction, Expression, and Purification of MpTre15A

To amplify the MpTre15A coding sequence, genomic DNA of M. phyllosphaerae LW106 was used as the template for the PCR reaction. The amplification procedure included one cycle of 94 °C for 5 min; 30 cycles of 94 °C for 30 s, 56 °C for 30 s, and 72 °C for 2 min; and a final incubation at 72 °C for 10 min. The primer sequences for the MpTre15A gene were as follows: the forward primer was 5'-ATGCCGGCTCCGATTGAAGATTAT-3' and the reverse primer was 5'-ACGACGATGTGCTGCACGACCACC-3'. The amplified MpTre15A DNA fragments were purified, ligated into the pMD18-T vector using a Mighty TA-cloning Reagent Set for PrimeSTAR (TaKaRa), and delivered for sequencing. The nucleotide sequence of the MpTre15A gene (GenBank: OM456201) was codon-optimized for E. coli, synthesized, and directly subcloned into the expression vector pET21a via the NdeI and BamHI restriction sites by Zoonbio Biotechnology Co., Ltd. (Nanjing, China). The MpTre15A gene was fused with the coding sequence of a hexahistidine tag (His-tag) to obtain a C-terminal tail for subsequent affinity chromatography purification. After being confirmed by sequencing, the recombinant plasmid carrying the MpTre15A gene, named pET21-MpTre15A, was transformed into cells of E. coli BL21 (DE3). E. coli transformants carrying the expression vectors were grown overnight at 37 °C in LB medium containing 50 µg/mL ampicillin and then inoculated into fresh LB medium containing 50 µg/mL ampicillin. The E. coli cells were grown at 37 °C until the optical density at 600 nm (OD600) reached 0.6, and the target protein was then induced for expression for approximately 18 h with 0.5 mM isopropyl-β-D-thiogalactopyranoside (IPTG). The E. coli cells were harvested by centrifugation at 4 °C and 8,000 rpm for 10 min and washed 3 times with 50 mM sodium phosphate buffer (pH 7.0). The cells were finally resuspended in lysis buffer (50 mM sodium phosphate, 300 mM sodium chloride, 30 mM imidazole, pH 7.0) and lysed by sonication at 4 °C (2 s on with 4 s intervals, for 10 min at 200 W) with a Sonicator JY92-IIN (Scientz). The supernatant was collected after centrifugation at 4 °C and 12,000 rpm for 20 min.
For purification of the recombinant trehalase, the lysate obtained as the sonication supernatant was loaded onto a Ni-NTA resin column (Sangon, China). Non-specifically bound proteins were washed out with 10 column volumes of wash buffer (50 mM sodium phosphate, 300 mM sodium chloride, and 50 mM imidazole, pH 7.0). The target protein was then eluted with 3 column volumes of elution buffer (50 mM sodium phosphate, 300 mM sodium chloride, and 250 mM imidazole, pH 7.0). To remove residual salts and imidazole, the eluted protein solution was loaded into a Millipore spin column (10 kDa cutoff) and buffer-exchanged into the appropriate buffer solution. Aliquots of 10 µL of lysate and purified protein were applied to SDS-PAGE and Native-PAGE gels, respectively, to analyze the purity and oligomeric state of the target protein. The pET21-MpTre15A plasmid was used as the template for site-directed mutagenesis by the NEBaseChanger method (Q5 Site-Directed Mutagenesis Kit). The following oligonucleotides were used as mutagenic primers (mismatched bases are shown in lower case): E392Q F: 5'-CACCGCGAATctgCCAAATACCAT-3', E392Q R: 5'-AACCGCAGTGGTTTACCC-3', E557Q F: 5'-CATCATATTCctgGCTCAGCAGGCC-3', E557Q R: 5'-TGGCAAATGCACGCCAGG-3'. The nucleotide sequences of the MpTre15A mutant genes were confirmed by DNA sequencing as described above. Expression, purification, and activity determination for these two mutants were performed as for the wild-type MpTre15A described above.

Trehalase Activity and Protein Concentration Determination

The activity of the recombinant trehalase was measured by determining the amount of glucose released from trehalose (a worked example of the unit conversion is given below). The reaction mixture consisted of 4.5 mM trehalose and 0.025 mg/mL purified enzyme in 100 µL sodium citrate buffer (100 mM, pH 5.0). The reaction was incubated at 50 °C for 10 min and stopped by boiling for 10 min. Then 100 µL of 3,5-dinitrosalicylic acid (DNS) solution was added to determine the glucose produced. One unit of trehalase activity was defined as the amount of enzyme required to release 1 µmol of glucose per minute under the specific assay conditions indicated. Protein concentration was assayed using a Bradford protein assay kit (Sangon Biotech) with bovine serum albumin as the standard.

Substrate Specificity Assay

Enzyme activities toward soluble starch, the disaccharides trehalose, maltose, and cellobiose, and p-nitrophenyl-α-glucoside were measured at a substrate concentration of 4.5 mM in 100 mM sodium citrate buffer at pH 5.0 and 50 °C. The reaction was kept at 50 °C for 10 min and stopped by boiling for 10 min. The amount of released glucose was determined using the DNS method [18].

Optimum pH, Optimum Temperature, and Stability

The optimum reaction pH for the recombinant MpTre15A was evaluated with 4.5 mM trehalose dissolved in various buffers with pH values ranging from 3.0 to 9.0 at 50 °C. The pH stability of the purified enzyme was assayed after incubation in buffers with pH values ranging from 3.0 to 9.0 for 12 h at 4 °C. The buffers for pH 3.0-6.6, pH 6.6-7.8, and pH 8.2-9.0 were 100 mM sodium citrate buffer, 100 mM sodium phosphate buffer, and 100 mM sodium borate buffer, respectively. Aliquots were withdrawn at different time points and the residual activity was measured at 50 °C and pH 5.0 as described above. The influence of temperature on enzyme activity toward trehalose was investigated by incubating the purified enzyme at temperatures ranging from 10 °C to 65 °C in 100 mM sodium citrate buffer (pH 5.0) for 10 min. To determine the enzyme thermostability, reaction aliquots were withdrawn for the activity assay after incubating the purified enzymes in 100 mM sodium citrate buffer (pH 5.0) at various temperatures for up to 12 h. Residual activity was measured at 50 °C and pH 5.0 for 10 min as described above.
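Illustrating the unit definition from the activity-assay subsection above, the following worked example converts DNS-quantified glucose into units and specific activity; the glucose amount is back-calculated for illustration and is not a measured value.

```python
# 1 U = 1 umol glucose released per minute; specific activity = U per mg protein.
def specific_activity(glucose_umol, minutes, enzyme_mg):
    units = glucose_umol / minutes          # U
    return units / enzyme_mg                # U/mg

# 100 uL reaction at 0.025 mg/mL enzyme -> 0.0025 mg protein, 10 min incubation
enzyme_mg = 0.025 * 0.1                     # mg/mL * mL
print(specific_activity(glucose_umol=1.735, minutes=10.0, enzyme_mg=enzyme_mg))
# 69.4 U/mg, matching the specific activity quoted later in the text
```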
Effects of Metal Ions and Other Chemicals

Hydrolytic activities against 4.5 mM trehalose were assayed at the optimal temperature of 50 °C and pH 5.0 in the absence or presence of various cations, added in chloride form as indicated. The influences of various chemicals, including ethylenediaminetetraacetic acid, urea, sodium dodecyl sulfate, 1,4-dithiothreitol, and β-mercaptoethanol, on the enzyme activities were investigated under the same conditions described above.

Kinetic Constants

For the kinetic parameter measurements, the enzyme activity was determined using the substrate trehalose at concentrations ranging from 4 mM to 50 mM. The activity assay of MpTre15A was performed as described above at the optimum pH and the specific temperature indicated. The kinetic parameters (Km, Vmax) were calculated by non-linear fitting of the experimental data to the Michaelis-Menten equation using the Michaelis-Menten function in the software Origin (a script-level analogue of this fit is sketched below). The apparent kcat values were estimated using the theoretical molecular mass of MpTre15A (62.5 kDa).

Analysis of Trehalose Hydrolysis Products

Aliquots of purified MpTre15A (2 mg/mL) were mixed with trehalose at final concentrations of 5 mg/mL and 50 mg/mL, respectively, and incubated for up to 120 min. The reaction mixture was then boiled for 5 min to inactivate the enzyme. The hydrolyzed sugar solutions were visually analyzed by thin-layer chromatography (TLC) on a silica gel plate (10 cm × 20 cm; Merck, Germany) with a solvent system consisting of n-butyl alcohol/methyl alcohol/deionized water (8:4:3, v/v/v), and the spots were visualized by spraying the silica gel plate with methyl alcohol and concentrated sulfuric acid (1:1, v/v), followed by heating at 90 °C for 15 min [18]. Trehalose consumption and the glucose produced were determined by high-performance liquid chromatography (HPLC, Shimadzu, Japan) using a Hi-Plex Na Carbohydrate Column (300 mm × 7.7 mm, Agilent) at 80 °C with a refractive index detector (RID). The mobile phase was triple-distilled water (Watsons, China) at a flow rate of 0.2 mL min−1.

The Ability of M. phyllosphaerae LW106 to Utilize Trehalose as the Sole Source of Carbon

To validate the trehalose utilization ability of M. phyllosphaerae LW106 at ambient temperature, the time-course growth of this psychrotolerant strain cultured in 1/4 TSB liquid medium supplemented with various sources of carbon was investigated. M. phyllosphaerae LW106 was able to grow normally on a modified 1/4 TSB medium containing glucose, lactose, sucrose, maltose, or trehalose as a sole carbon source, suggesting that this strain possesses multiple disaccharide-hydrolyzing enzymes. However, the growth rate on the medium with trehalose as the sole carbon source was significantly slower than that on the other carbon sources (Figure 1).
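A scipy-based stand-in for the Origin Michaelis-Menten fit described in the Kinetic Constants subsection above; the substrate concentrations match the 4-50 mM range used, but the rates are synthetic, generated from assumed parameters rather than measured data.

```python
# Non-linear least-squares fit of v = Vmax * S / (Km + S).
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([4.0, 8.0, 15.0, 25.0, 35.0, 50.0])           # trehalose, mM
rng = np.random.default_rng(0)
v = michaelis_menten(s, 330.0, 38.0) * (1 + 0.02 * rng.standard_normal(6))

(vmax_fit, km_fit), _ = curve_fit(michaelis_menten, s, v, p0=[300.0, 30.0])
print(vmax_fit, km_fit)   # kcat then follows from Vmax and the 62.5 kDa mass
```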
Bioinformatics Analysis of MpTre15A

The discovered putative GH15 family trehalase MpTre15A consisted of 597 amino acid residues without a signal peptide, with a theoretical molecular mass of 66.2 kDa. The isoelectric point (pI) was predicted to be 5.40. The deduced amino acid sequence of MpTre15A showed 93.97% identity with a predicted glycoside hydrolase from Microbacterium sp. Leaf161, followed by predicted glycoside hydrolases from Microbacterium sp. Leaf320 (93.79%) and the type strain of Microbacterium phyllosphaerae (93.45%). MpTre15A shared a low amino acid sequence identity with the functionally verified bacterial trehalases from Mycolicibacterium smegmatis (36.63%) and Mycobacterium tuberculosis (36.30%), and with the archaeal trehalases from Thermoplasma volcanium (33.90%) and Thermoplasma acidophilum (34.43%).

The phylogenetic relationships among GH37 and GH65 trehalases and relevant functionally verified GH15 members were reconstructed. As shown in Figure 2, three major branches were formed. GH37 enzymes, including trehalases from bacteria, yeasts, insects, algae, plants, and animals, formed one branch; GH65 enzymes, which include acidic trehalases and periplasmic trehalases from fungi, were clustered in a second branch; and GH15 enzymes, which include MpTre15A, the trehalase from Mycobacterium smegmatis, and the archaeal trehalases from Sulfolobus acidocaldarius, Thermoplasma volcanium, and Thermoplasma acidophilum, together with several glucoamylases (GAs) of different origin, were clustered in the third branch. Multiple sequence alignment of MpTre15A with selected relevant trehalases of different origins showed that MpTre15A possesses the five conserved regions of GH15 family glycoside hydrolases. Furthermore, two conserved residues (Glu392 and Glu557) in regions 3 and 5, respectively, which may correspond to the conserved catalytic residues of GH15 family trehalases, were found (Figure 3).

Figure 3. Multiple sequence alignment of MpTre15A with related trehalases, including Microvirga sp. MC18 rMtreH. The number sign (#) denotes putative catalytic residues. Fully conserved amino acid residues and related amino acid residues are shown in red and yellow, respectively. Boxes show two highly conserved regions, conserved region 3 (CRs3) and conserved region 5 (CRs5).
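The mass and pI predictions quoted above can be reproduced with standard tools; the sketch below uses Biopython's ProtParam on a toy peptide that merely repeats the first residues implied by the forward primer, not the real 597-residue MpTre15A sequence (GenBank: OM456201), which would be substituted in practice.

```python
# Theoretical molecular mass and isoelectric point from a protein sequence.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

toy = ProteinAnalysis("MPAPIEDY" * 10)   # placeholder sequence only
print(f"{toy.molecular_weight() / 1000:.1f} kDa, pI {toy.isoelectric_point():.2f}")
```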
Expression, Purification, and Activity Assay of GA-Like Trehalase MpTre15A

The recombinant MpTre15A could be expressed in soluble form and at a high level, typically with a yield of 26.8 mg/L. The expressed gene products were purified via affinity chromatography, generating single bands in the SDS-PAGE analysis (Figure 4a). The apparent molecular weight of His6-tagged MpTre15A corresponded to its theoretical value (66.2 kDa). Native-PAGE analysis (Figure 4b) of the purified MpTre15A provided a rough estimate of the apparent molecular mass of approximately 397.2 kDa, suggesting that the MpTre15A holoenzyme functions as a hexamer. The purified MpTre15A was able to hydrolyze trehalose, yielding glucose, but showed no activity toward the other tested substrates, including maltose, cellobiose, sucrose, soluble starch, and p-nitrophenyl-α-glucoside (Table 1). Based on the low amino acid sequence identity that MpTre15A shares with the other functionally verified trehalases, and its highly specific hydrolytic activity toward the substrate trehalose, we identified MpTre15A as a novel member of the GH15 family trehalases.

Table 1. Substrate specificity of MpTre15A.
Substrate, Relative Activity (%)
trehalose, 100
maltose, not detected
cellobiose, not detected
sucrose, not detected
soluble starch, not detected
p-nitrophenyl-α-glucoside, not detected
MpTre15A was added to the sugar solution at a final protein concentration of 0.01 mg/mL, and activity was determined at pH 5.0 in 100 mM sodium citrate buffer.

Characterization of the Catalytic Residues in MpTre15A

To verify the catalytic involvement of the amino acid residues Glu392 and Glu557 in the trehalose-hydrolyzing activity of MpTre15A, two mutants, E392Q and E557Q, were constructed and introduced into E. coli for recombinant expression, respectively. The mutants were purified similarly to the wild-type MpTre15A (Figure S1). As shown in Table 2, both the purified E392Q and E557Q mutants lost all trehalose hydrolytic activity when tested under the same conditions as the wild-type MpTre15A. Taken together, these results proved that the two amino acid residues Glu392 and Glu557, located in conserved regions 3 and 5 of MpTre15A, respectively, function as its two key catalytic residues, similar to other reported GH15 family GA-like bacterial trehalases [13].

Table 2. Activity analysis of MpTre15A mutants.
Enzyme, Relative Activity (%)
wild-type MpTre15A, 100
E392Q, not detected
E557Q, not detected
The specific activity of the wild-type MpTre15A toward trehalose was 69.4 U/mg protein, tested in 100 mM sodium citrate buffer (pH 5.0).

Effects of pH and Temperature on MpTre15A

The influence of pH on MpTre15A activity is illustrated in Figure 5a. The trehalose-hydrolyzing activity of MpTre15A was maximal at pH 5.0. MpTre15A was active within a narrow pH range from 4.4 to 6.2. MpTre15A exhibited more than half of its maximal activity at pH values between 4.8 and 5.4 but largely lost its activity at pH below 4.4 and above 6.2. The pH stability of MpTre15A was also investigated (Figure 5b). MpTre15A retained over 80% of its maximum activity after 12 h of treatment at 4 °C and pH 4.0, 5.0, 6.0, or 8.0. However, the residual activity of MpTre15A decreased to less than 30% after 4 h of treatment at pH 9.0. The effects of temperature on the enzymatic activity of the recombinant MpTre15A are shown in Figure 6a. Under the conditions used, the trehalose-hydrolyzing activity of MpTre15A was maximal at approximately 50 °C. The relative activity of MpTre15A dropped to about 10% at 10 °C. As demonstrated in Figure 6b, MpTre15A retained more than 90% of its initial activity after 24 h of incubation at 25 °C and showed good stability even after 14 days of incubation at 4 °C (data not shown). After 12 h of incubation at 40 °C, the residual activity of MpTre15A was about 90%. The residual activities of MpTre15A decreased to less than 40% of the initial activity after incubation at 45 °C for 4 h and at 50 °C for 20 min. The enzyme activity disappeared after 6 h and 30 min of incubation at 45 °C and 50 °C, respectively.
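If thermal inactivation is roughly first order, which is an assumption on our part rather than a claim of the study, a single residual-activity point yields a rate constant and half-life, as in this sketch using the roughly 40% residual after 20 min at 50 °C quoted above.

```python
# First-order inactivation: A/A0 = exp(-k t); half-life = ln 2 / k.
import math

def half_life(residual_fraction, minutes):
    k = -math.log(residual_fraction) / minutes   # assumed first-order constant
    return math.log(2) / k

print(half_life(0.40, 20.0))   # ~15 min at 50 C under this assumption
```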
Effects of Different Chemicals on the Activity of MpTre15A

The effects of metal ions and other substances on the activity of recombinant MpTre15A were tested at pH 5.0 and 50 °C with final chemical concentrations of 1 mM and 10 mM. As shown in Table 3, the addition of the monovalent cations Na+ and K+ and the divalent cations Co2+ and Fe2+ did not significantly affect the activity of MpTre15A. It was reported that inorganic phosphate and Mg2+ were required for the activity of the Mycobacterium trehalase [6]. In contrast, the addition of 1 mM or 10 mM ethylenediaminetetraacetic acid (EDTA) did not significantly affect the activity of MpTre15A, suggesting that MpTre15A is able to hydrolyze trehalose without a requirement for any specific metal ions. Nevertheless, Mn2+ showed a stimulating effect on MpTre15A at a final concentration of 1 mM. When added at 10 mM, both Mn2+ and Mg2+ had a moderate stimulating effect on the activity of MpTre15A. In contrast, the divalent cations Ni2+ and Ca2+ and the trivalent cations Fe3+ and Al3+ showed a moderate inhibitory effect on the trehalase activity at a concentration of 10 mM. A similar inhibitory effect of Fe3+ and stimulating effect of Mn2+ on the trehalose-hydrolyzing activity of rMtreH from Microvirga sp. MC18 has been observed as well, indicating that trivalent cations such as Fe3+ may play a negative role in GH15 family trehalases, whereas some divalent cations such as Mn2+ and Mg2+ may be favorable for the activity of GH15 trehalases. The detergent sodium dodecyl sulfate (SDS) strongly inhibited the activity of MpTre15A at a final concentration of 1 mM, and the addition of 1 mM or 10 mM cetyltrimethylammonium bromide (CTAB) moderately inhibited the activity. Surprisingly, urea, 1,4-dithiothreitol (DTT), and β-mercaptoethanol at final concentrations of both 1 mM and 10 mM significantly stimulated the activity of MpTre15A.

Steady-State Kinetics Using Trehalose as the Substrate at Different Temperatures

Due to the psychrotolerant bacterial origin of MpTre15A, we determined the kinetic parameters of MpTre15A toward trehalose at 50, 37, 25, and 4 °C, respectively (Table 4).
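The conversion behind the Table 4 turnover numbers is simple bookkeeping: kcat (s−1) equals Vmax (in µmol min−1 mg−1) times the 62.5 kDa theoretical mass (in mg per µmol of enzyme) divided by 60. The Vmax value in the sketch is back-calculated for illustration, not taken from the data.

```python
# kcat from Vmax and the theoretical molecular mass.
M_kDa = 62.5                 # 62.5 kDa = 62.5 mg per umol of enzyme
vmax = 331.6                 # umol glucose min^-1 mg^-1 (illustrative value)
kcat = vmax * M_kDa / 60.0   # s^-1
print(kcat)                  # ~345.4 s^-1, the 50 C value quoted below
```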
In 100 mM sodium citrate buffer (pH 5.0), the trehalose hydrolysis reaction catalyzed by MpTre15A at the different temperatures obeyed Michaelis-Menten kinetics. The kcat and Km values at 50 °C were 345.4 s−1 and 38.0 mM, respectively. As the reaction temperature decreased to 37, 25, and 4 °C, the Km values toward trehalose increased to 48.38, 53.66, and 65.43 mM, respectively, indicating a decreased affinity of MpTre15A for trehalose as the reaction temperature dropped. As expected, the kcat value of MpTre15A decreased as the reaction temperature dropped from 50 °C to 4 °C. The kcat/Km of MpTre15A was highest at 50 °C, with a value of 9.15 s−1 mM−1, and lowest at 4 °C, with a value of 1.60 s−1 mM−1. Notably, the kcat and kcat/Km values of MpTre15A at 4 °C (104.50 s−1 and 1.6 s−1 mM−1, respectively) were comparable to those observed for the thermophilic GH15 trehalases TVN1315, Ta0286, and SaTreH1 from T. volcanium, T. acidophilum, and S. acidocaldarius at 50 °C (Table 4).

Time-Course Conversion of Trehalose into Glucose by MpTre15A

The time-course conversion efficiency of trehalose into glucose catalyzed by MpTre15A at different temperatures (4, 25, 37, and 50 °C) was further investigated by TLC and HPLC. TLC analysis demonstrated that trehalose hydrolysis and the synchronous formation of glucose catalyzed by purified MpTre15A occurred at the different temperatures with initial trehalose concentrations of 5 mg/mL and 50 mg/mL (Figure S2). The trehalose consumption and glucose production were further determined by HPLC (Figure 7). When 5 mg/mL trehalose was used as the substrate, the residual trehalose at 50 °C after incubation for 10 min and that at 37 °C after incubation for 15 min was only 3.0% of the initial trehalose (Figure 7a). The substrate trehalose was completely degraded after incubation at 50 °C and/or 37 °C for less than 60 min at an initial substrate concentration of 5 mg/mL. The conversion rate of trehalose into glucose decreased as the reaction temperature decreased. The residual trehalose after 120 min of incubation at 25 °C and 4 °C was about 2.4% and 9.3%, respectively. The formation of glucose at the different temperatures was in agreement with the trehalose consumption (Figure 7b), indicating a direct conversion of trehalose to glucose without any side products. The glucose produced after 10 min of incubation at 50 °C and 15 min of incubation at 37 °C was 5.07 g/L and 4.97 g/L, respectively, suggesting that the conversion rate of trehalose at 50 °C and 37 °C reached 100% and 99.4%, respectively. Consistent with the reduced trehalose hydrolysis rate at lower temperatures, the glucose production at 25 °C and 4 °C after 120 min was 5.02 g/L and 4.15 g/L, respectively.
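The conversion-rate bookkeeping above can be sketched as follows; computing conversion from the residual substrate, rather than from the glucose yield, is an assumption about the exact definition used.

```python
# Complete hydrolysis of trehalose (342.30 g/mol) yields two glucose units
# (180.16 g/mol each), i.e. ~1.053 g glucose per g trehalose (water is added).
TREHALOSE_MW, GLUCOSE_MW = 342.30, 180.16

def conversion(initial_g_per_l, residual_g_per_l):
    return 100.0 * (1.0 - residual_g_per_l / initial_g_per_l)   # percent

def max_glucose(initial_g_per_l):
    return initial_g_per_l * 2.0 * GLUCOSE_MW / TREHALOSE_MW     # g/L

print(conversion(5.0, 0.0), max_glucose(5.0))   # 100.0 %, ~5.26 g/L
```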
When 50 mg/mL trehalose was used as the substrate, trehalose was not completely degraded after incubation at 50 °C and 37 °C for up to 120 min, and the residual trehalose at 50 °C and 37 °C was 1.2% and 13.7% of the initial trehalose, respectively (Figure 7c). The trehalose hydrolysis rate at 25 °C and 4 °C was only 60.6% and 44.9% after 120 min of incubation, much lower than that at 50 °C and 37 °C. Correspondingly, the glucose production at 25 °C and 4 °C after 120 min was only 27.58 g/L and 17.80 g/L, respectively, both of which were significantly lower than the values obtained at 50 °C and 37 °C (48.78 g/L and 41.29 g/L, respectively) (Figure 7d). Approximately 39.9% of the total trehalose could still be detected after 6 h of incubation at 4 °C. In addition, compared with the trehalose conversion rate at 4 °C and 25 °C with an initial substrate concentration of 5 mg/mL, the trehalose conversion with an initial substrate concentration of 50 mg/mL at the same reaction temperatures was significantly lower, which may be caused by inhibitory effects of the high concentration of substrate or of the glucose produced.

Discussion

Trehalose is generally regarded as an osmoprotectant that forms part of bacterial stress responses during periods of freezing [27]. However, the trehalose metabolism of psychrophiles has not yet been addressed in detail. The bacterium M. phyllosphaerae LW106 could grow at 4-25 °C and showed optimal growth at 16-20 °C. According to Morita's definition, M. phyllosphaerae LW106 should be categorized as a psychrotolerant bacterium [28]. This strain was found to be able to grow on a medium containing glucose, lactose, sucrose, maltose, or trehalose as a sole carbon source but displayed a slower growth rate on the medium with trehalose as the sole carbon source at 16 °C (Figure 1). Whole-genome sequencing revealed both a GH15 family GA-like trehalase and a GH13_16 family trehalose synthase in M. phyllosphaerae LW106 (data not shown). Thus, we speculate that MpTre15A-catalyzed trehalose degradation may not be the primary route for providing glucose as a cellular carbon source of energy in M. phyllosphaerae LW106. MpTre15A should be more involved in maintaining the homeostasis of cellular trehalose, which functions as a compatible solute, when the growth temperature changes, since this psychrotolerant bacterium has to undergo the temperature fluctuations caused by glacier basal freezing and internal melting cycles as well as season and climate changes [29]. There have been only a few identified bacterial trehalases, especially from the GH15 family [6,18]. MpTre15A shared only a low sequence identity with its closest partially characterized homolog, a trehalase from Mycobacterium tuberculosis (35.03%). In addition, unlike the recombinant archaeal trehalases, which function as monomers, dimers, or trimers [16,17,30], and the Mycobacterium trehalase, which forms a multimeric structure with a molecular mass of 1,500 kDa [6], the native MpTre15A seemed to function as a hexamer with a molecular mass of approximately 397.2 kDa (Figure 4). Therefore, the description of MpTre15A is valuable due to its sequence novelty and psychrotrophic origin. In addition to trehalases, enzymes assigned to the GH15 family also include GAs, glucodextranases (GDases), dextran dextrinases, and isomaltose glucohydrolases.
Similar to the other reported GH15 trehalases, the cold-adapted GH15 trehalase MpTre15A discovered in the current study showed an amino acid sequence more similar to those of GAs than to those of GH37 and GH65 trehalases (Figure 2). An [S/G/A]E[H/E] sequence around one of the catalytic Glu residues in conserved region 5 (CRs5) is regarded as an essential motif for the catalytic reaction of GH15 family trehalases [13]. For instance, a GEH sequence exists in the two archaeal trehalases SaTreH1 and SaTreH2 from Sulfolobus acidocaldarius, whereas a SEE sequence was observed in two other archaeal trehalases, TVN1315 and Ta0286, from the thermophilic Thermoplasma volcanium and T. acidophilum [17]. Unlike the previously reported bacterial Mycobacterium trehalase [6], which possesses an AEE sequence, a SEE sequence was discovered in MpTre15A in the current study (Figure 3). Like the other GH15 enzymes, such as GAs and GDases, GH15 trehalases, including those of archaeal origin, possess five CRs in their primary structures [6,17]. Consistently, mutation of the two glutamic acid residues in CRs 3 and 5 of MpTre15A (E392Q and E557Q) led to a total loss of trehalose hydrolytic activity, indicating the crucial involvement of these two glutamic acid residues in its catalytic activity. Another recently reported bacterial GH15 trehalase, rMtreH from Microvirga sp. MC18, displayed maximum activity at 40 °C and retained more than 60% residual activity after 1 h of incubation at 50 °C [18]. In contrast, the optimum temperature of MpTre15A was found to be 50 °C, yet the residual activity of MpTre15A disappeared after treatment at 50 °C for only 30 min (Figure 6b). Interestingly, the activity of MpTre15A at 50 °C exhibited in the trehalose conversion assay was not lost for up to 120 min when incubated with an initial trehalose concentration of 50 mg/mL (Figure 7d). Taking into account that enzyme stability is enhanced in the presence of sugars [31,32], these results suggest that a high concentration of trehalose is favorable for the stability of MpTre15A. Substrate specificity analysis of MpTre15A from M. phyllosphaerae LW106 confirmed that this novel cold-adapted trehalase could specifically hydrolyze trehalose but had no activity toward maltose, cellobiose, sucrose, or soluble starch (Table 1). Unlike the bacterial GH15 family trehalases from Mycobacterium smegmatis [6] and Microvirga sp. MC18, both of which show an optimal pH of around 7.0, MpTre15A from the psychrotolerant M. phyllosphaerae LW106 hydrolyzed trehalose with an optimal pH of 5.0 and was stable over a pH range of 4.0 to 8.0. Nevertheless, similar results were observed for the archaeal GH15 family trehalases from the thermophilic Thermoplasma volcanium and T. acidophilum and the acidophilic Sulfolobus acidocaldarius, which function within a narrow range of acidic pH values but are stable over a wide pH range [16,17]. The above enzymatic properties of MpTre15A suggested that this enzyme is thermolabile, and the kinetic parameters of MpTre15A further revealed its typical cold-adaptability. The kinetic behavior of MpTre15A at different temperatures showed a decreased affinity toward trehalose with decreasing reaction temperature (Table 4). This decrease in the substrate affinity of MpTre15A at low temperatures should be a result of its inherently flexible structure, which compensates for the low kinetic energy experienced by cold-adapted enzymes in cold environments [21,33].
Nevertheless, both the kcat and kcat/Km values of MpTre15A at 4 °C (104.50 s−1 and 1.6 s−1 mM−1) were comparable to or even higher than those observed for the GH15 family trehalases from thermophiles at 50 °C (Table 4). It is generally accepted that cold-adapted enzymes feature high localized structural flexibility, where rapid conformational changes at low temperatures allow substrates to access the active center [21]. Higher values of kcat, Km, and kcat/Km are usually found for psychrophilic enzymes when the catalytic activities of psychrophilic, mesophilic, and thermophilic enzymes are compared at the same temperature below the melting point [34]. Thus, the way MpTre15A counters the detrimental effect of low temperature on catalytic turnover and catalytic efficiency proved that this novel GA-like bacterial trehalase is a typical cold-adapted trehalase of the GH15 family, which may decrease the activation energy and increase protein flexibility (entropic compensation), as other psychrophilic enzymes do [35,36]. In addition, since GH15 trehalases are structurally similar to GAs but display different substrate specificities, MpTre15A will be a good model for further dissecting the differences in substrate binding sites and elucidating the substrate recognition mechanism of cold-adapted GH15 trehalases. To date, the trehalase with the highest turnover number (730 s−1) is a GH37 family trehalase from the midgut of Spodoptera frugiperda larvae [37]. According to the kinetic parameters of the current study, MpTre15A showed the highest kcat value (347.45 s−1) at its optimum temperature among the characterized GH15 family trehalases (Table 4). Compared with the other reported trehalases of bacterial origin, such as those from E. coli (turnover number 199 s−1) and Zunongwangia sp. (turnover number 263.25 s−1) [38,39], the kcat value of MpTre15A makes it a good candidate for the efficient conversion of trehalose into glucose as a therapeutic enzyme and food additive from an application perspective [18,40]. In addition, MpTre15A could be recombinantly expressed in soluble form at a high yield (typically 26.8 mg/L), which is advantageous for its scaled-up expression and purification as well. The optimal pH of 5.0 and good stability over a pH range of 4.0 to 8.0 also make it convenient to achieve maximum trehalose hydrolysis in the pH range usually employed for ethanol fermentation by the yeast S. cerevisiae and to regulate the enzyme activity by simply adjusting the medium pH. Moreover, MpTre15A showed an efficient trehalose conversion rate over the temperature range between 50 °C and 25 °C (Figure 7). Considering that yeast generates trehalose during ethanol fermentation but cannot utilize trehalose as a carbon source [40], the recombinant trehalase MpTre15A, with its high expression level and efficient conversion of trehalose into glucose at temperatures between 37 °C and 25 °C, could be of great interest for improving the ethanol fermentation process. In addition, regression analysis of the trehalose consumption and glucose production catalyzed by MpTre15A at ambient temperature with an initial trehalose concentration below 5 mg/mL showed a good linear correlation, with R2 values of 0.9885 and 0.9995, respectively (data not shown).
These results suggest that this cold-adapted trehalase also has good potential for the development of an enzyme-based biosensor for trehalose quantification, which is of great value for life science, biomedicine, and other research fields [41].

Conclusions

We described the detailed identification and characterization of a putative GH15 α,α-trehalase, MpTre15A, from M. phyllosphaerae LW106, which to our knowledge is the first report of a cold-adapted GH15 family α,α-trehalase. Investigation of the enzymatic properties of this cold-adapted α,α-trehalase proved its involvement in trehalose hydrolysis in the psychrotolerant M. phyllosphaerae LW106. The purified MpTre15A functioned as a hexamer and displayed maximal activity at pH 5.0 and 50 °C without a requirement for any metal ions. MpTre15A was thermolabile at temperatures above 45 °C but retained high k_cat and k_cat/K_m values of 104.50 s⁻¹ and 1.6 s⁻¹ mM⁻¹ at 4 °C, respectively, which proved its typical cold-adaptability. These properties add to the benefit of using this cold-adapted trehalase in industrial ethanol fermentation processes with energy savings. The capability of MpTre15A to efficiently catalyze the conversion of trehalose at substrate concentrations of up to 5 mg/mL with high rates over a temperature range from 4 °C to 50 °C also constitutes an advantage for developing potential therapeutic enzymes and enzyme-based biosensors for trehalose quantification.

Figure S2: Thin layer chromatography (TLC) analysis of the hydrolytic products of trehalose. Panels a and b: TLC analysis of trehalose hydrolysates at different temperatures with 5 mg/mL trehalose. Panel a: Lane G indicates the glucose and trehalose mixed standard; Lanes 1 to 5 indicate the hydrolytic product of trehalose at 50 °C for 1, 3, 5, 10, and 60 min, respectively; Lanes 6 to 11 indicate the hydrolytic product of trehalose at 37 °C for 5, 15, 30, 45, 60, and 120 min, respectively. Panel b: Lane G indicates the glucose and trehalose mixed standard; Lanes 1 to 6 indicate the hydrolytic product of trehalose at 25 °C for 10, 30, 45, 60, 90, and 120 min, respectively; Lanes 7 to 12 indicate the hydrolytic product of trehalose at 4 °C for 20, 40, 60, 120, 180, and 240 min, respectively. Panels c and d: TLC analysis of trehalose hydrolysates at different temperatures with 50 mg/mL trehalose. Panel c: Lane G indicates the glucose and trehalose mixed standard; Lanes 1 to 5 indicate the hydrolytic product of trehalose at 50 °C for 15, 30, 45, 60, and 120 min, respectively; Lanes 6 to 10 indicate the hydrolytic product of trehalose at 37 °C for 15, 30, 45, 60, and 120 min, respectively; Lanes 11 to 16 indicate the hydrolytic product of trehalose at 25 °C for 30, 60, 90, 120, 180, and 240 min, respectively; Lane 17 indicates the glucose and trehalose mixed standard. Panel d: Lane G indicates the glucose and trehalose mixed standard; Lanes 1 to 7 indicate the hydrolytic product of trehalose at 4 °C for 30, 60, 90, 120, 180, 240, and 360 min, respectively.

Data Availability Statement: The data presented in this study are available on reasonable request from the corresponding authors.
Mixing matters

All hydrodynamical simulations of turbulent astrophysical phenomena require sub-grid scale models to properly treat energy dissipation and metal mixing. We present the first implementation and application of an anisotropic eddy viscosity and metal mixing model in Lagrangian astrophysical simulations, including a dynamic procedure for the model parameter. We compare these two models directly to the common Smagorinsky model and its dynamic variant. Using the mesh-free finite mass method as an example, we show that the anisotropic model is best able to reproduce the proper Kolmogorov inertial range scaling in homogeneous, isotropic turbulence. Additionally, we provide a method to calibrate the metal mixing rate that ensures numerical convergence. In our first application to cosmological simulations, we find that all models strongly impact the early evolution of galaxies, leading to differences in enrichment and thermodynamic histories. The anisotropic model has the strongest impact, with little difference between the dynamic and the constant-coefficient variant. We also find that the metal distribution functions in the circumgalactic gas are significantly tighter at all redshifts, with the anisotropic model providing the tightest distributions. This is contrary to a recent study that found metal mixing to be relatively unimportant on cosmological scales. In all of our experiments the constant-coefficient Smagorinsky and anisotropic models rivaled their dynamic counterparts, suggesting that the computationally inexpensive constant-coefficient models are viable alternatives in cosmological contexts.

INTRODUCTION

Galaxies form and evolve in tempestuous gaseous environments where hydrodynamics, radiative cooling, and gravity synergize to produce rich emergent phenomena on a myriad of spatial scales. The immense dynamic range of scales involved and their interconnectedness prove to be limiting factors in advancing our understanding of the complete picture of galaxy evolution (see Naab & Ostriker 2017 for an excellent review). At the forefront of the issue is hydrodynamical turbulence, as it is a multi-scale, non-linear phenomenon that occurs in almost all galactic environments, directly impacting our theoretical understanding of galaxy evolution. While the importance of turbulence in the interstellar medium of galaxies has long been recognized (see Elmegreen & Scalo 2004 and Elmegreen 2004 for reviews), only recently has the role of turbulence in halo gas come under careful consideration. Indeed, both the circumgalactic medium (CGM) of L⋆ galaxies and the intracluster medium (ICM) of groups and clusters of galaxies show signs of turbulence playing an important role in their evolution (see, for example, Prasad et al. 2018 and Wang et al. 2020). Observationally, there is evidence of complex kinematic structure in the CGM of L⋆ galaxies that emerged through the revolutionary Cosmic Origins Spectrograph halo survey on the Hubble space telescope (COS-halos; Tumlinson et al. 2013, 2017). For instance, Werk et al. (2016) found that turbulent velocities of 50-75 km s⁻¹ explain an otherwise unexplained broadening of absorption lines in the CGM, which has subsequently been confirmed in numerical studies (Buie et al. 2020). Moving up in mass scale, the ICM also shows evidence of turbulence through indirect observational methods such as X-ray surface brightness and Sunyaev-Zeldovich fluctuations (Zhuravleva et al. 2014; Pinto et al. 2015; Zhuravleva et al. 2015; Khatri & Gaspari 2016; Zhuravleva et al. 2018).
There are two main drivers of turbulence in galactic environments: (a) global outflows that emerge from star formation processes and supermassive black holes (SMBHs) within galaxies (Prasad et al. 2015, 2018; Karen Yang & Reynolds 2016; Bourne & Sijacki 2017; Fielding et al. 2017, 2018; Sokołowska et al. 2018; Li et al. 2020) and (b) shearing motions driven by gas in-fall during structure formation (Dekel et al. 2009; Vazza et al. 2010, 2012, 2017; Wittor et al. 2017; Bennett & Sijacki 2020), mergers (ZuHone et al. 2013), and ram-pressure stripping of galaxies moving through the ICM (Ruggiero & Lima Neto 2017; Simons et al. 2020). In both the CGM and ICM, turbulence could provide additional pressure support (Poole et al. 2006; Vazza et al. 2018; Lochhaas et al. 2020) that prevents the gas from collapsing and rapidly converting into stars, as well as a physical mechanism to transport energy and metals directly throughout the gas, impacting the cooling profile, star formation cycle, and metal distribution functions (Shen et al. 2010, 2012, 2013; Brook et al. 2014; Sokołowska et al. 2018; Escala et al. 2018; Tremmel et al. 2019; Rennehan et al. 2019; Hafen et al. 2019, 2020). Therefore, understanding the nature of turbulence is imperative to understand the complete picture of galaxy evolution. While there are many successful cosmological simulations that use a variety of sub-grid assumptions to broadly reproduce galaxy populations (e.g. Guedes et al. 2011; Hopkins et al. 2014; Vogelsberger et al. 2014; Schaye et al. 2015; Genel et al. 2014; Davé et al. 2017, 2019; Tremmel et al. 2019; Pillepich et al. 2018; Huang et al. 2020), one aspect that is often overlooked is the numerical modelling of sub-grid turbulence. The crux of the problem is the fact that in hydrodynamical simulations the physical dissipation scale is almost always much smaller than the resolution scale, h (Pope 2000). For that reason, the kinetic energy flowing in the turbulent cascade reaches the scale h where it may no longer progress. If the numerical viscosity of the hydrodynamical method is not sufficiently strong to thermalise the kinetic energy, there will be a build-up of kinetic energy at that scale h (Garnier et al. 2009). The kinetic energy build-up is a completely unphysical representation of turbulence and impacts not only the energetics but also the large-scale flow properties, such as the redistribution of metals. Although not explicitly stated, many cosmological hydrodynamical simulation studies implicitly assume that numerical dissipation is sufficient to mimic sub-grid turbulence, rather than modelling sub-grid turbulence with additional terms in the hydrodynamical equations of motion. However, numerical dissipation is not sufficient to reduce the kinetic energy build-up and may not represent turbulent flow statistics in all cases (Sagaut 2006). Indeed, Lecoanet et al. (2016) showed that numerical noise at the resolution scale seeds instability in the Kelvin-Helmholtz experiment, causing the long-term evolution to be unconverged as the small-scale instabilities grow. When they introduced explicit sub-grid diffusion to their simulations, the results of the Kelvin-Helmholtz experiment converged, showing that more small-scale structure (i.e.
resolution-scale noise) is not necessarily better, and that explicit sub-grid diffusion can cause less large-scale mixing. In Lagrangian hydrodynamics, the fluid equations are approximated via fluid elements that move with the flow. There are three main approaches most common in cosmological simulations: smoothed particle hydrodynamics (SPH) (Gingold & Monaghan 1977; Lucy 1977; Hernquist & Katz 1989; Hopkins 2013), the moving-mesh method (MM) (Springel 2010), and mesh-free methods (MF) (Lanson & Vila 2008a,b; Gaburov & Nitadori 2011; Hopkins 2015). Each of these methods tracks fluid elements using different discretisation techniques that lead to different levels of numerical dissipation. In SPH, there is no inherent numerical dissipation and it, counter-intuitively, produces a deficit of kinetic energy near the resolution scale rather than a build-up (Bauer & Springel 2012; Price 2012b). The MM and MF methods use Riemann solvers to approximate the fluid equations of motion between neighbouring fluid elements. Riemann solvers are generally diffusive due to their approximate nature, and the build-up of kinetic energy is present in both methods on scales up to ∼ 10 times the resolution scale (Bauer & Springel 2012; Hopkins 2015). A solution to kinetic energy build-up is to model the action of turbulent eddies as a viscous process that diffuses momentum (and metals) and dissipates kinetic energy, via a diffusion equation and a source term in the energy equation, respectively. Usually the assumption is that the viscosity, or diffusivity, depends on velocity fluctuations v_eddy near the resolution scale h. The resulting diffusivity is D ∝ h v_eddy, where v_eddy may be a characteristic velocity, or velocity difference, within the neighbourhood of a fluid element (Wadsley et al. 2008; Greif et al. 2009). One particularly important choice of v_eddy is the Smagorinsky model (Smagorinsky 1963), which assumes that velocity shear fluctuations drive dissipation and mixing through v_eddy ∼ h|S*|, where |S*| is the magnitude of the trace-free symmetric shear tensor, S* ≡ S − (1/3) tr(S) I. The Smagorinsky model has been successfully used to treat metal and thermal energy mixing in SPH (Wadsley et al. 2008; Shen et al. 2010, 2012; Brook et al. 2014; Williamson et al. 2016; Tremmel et al. 2017; Wadsley et al. 2017; Su et al. 2017) and has been extended to other Lagrangian hydrodynamical methods, such as the MFM method (Colbrook et al. 2017; Escala et al. 2018; Rennehan et al. 2019; Hafen et al. 2019, 2020). While the Smagorinsky model improves mixing in Lagrangian simulations, it is important to consider two assumptions in the model: (a) that shear fluctuations (i.e., changes in |S*|) always represent turbulence and (b) that the diffusive process acts isotropically, through the magnitude of the trace-free (i.e. ignoring compression) shear tensor over the scale h. Are either of these assumptions reasonable? To address point (a), Piomelli & Liu (1995) proposed a method of dynamically calculating the model coefficient at simulation time. That method was employed in Rennehan et al. (2019) for the first time in Lagrangian hydrodynamics, and they found that it significantly reduced over-diffusion in non-turbulent shear flows, such as in rotating galactic disks and the Kelvin-Helmholtz instability. The second point (b) concerns the isotropy of the diffusivity and the discounting of compression in |S*|.
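As a concrete illustration of assumption (b), the following minimal sketch computes the isotropic Smagorinsky diffusivity from a velocity gradient tensor. The Frobenius-norm convention and the overall prefactor are assumptions here, as conventions vary in the literature; this is not code from any of the implementations cited above.

```python
# Minimal sketch of the constant-coefficient Smagorinsky eddy diffusivity
# for one fluid element, assuming the velocity gradient tensor
# grad_v[i, j] = dv_i/dx_j is already available from the solver.
import numpy as np

def smagorinsky_diffusivity(grad_v, h, c_s=0.15):
    s = 0.5 * (grad_v + grad_v.T)                 # symmetric shear tensor S
    s_star = s - np.trace(s) / 3.0 * np.eye(3)    # remove compression (trace)
    s_mag = np.sqrt(np.sum(s_star * s_star))      # Frobenius norm ||S*||
    return 2.0 * (c_s * h) ** 2 * s_mag           # isotropic, ~ h^2 |S*|

# A laminar shear flow (dvx/dy = 1) still yields a non-zero diffusivity,
# illustrating why assumption (a) can over-diffuse in non-turbulent flows.
grad_v = np.array([[0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
print(smagorinsky_diffusivity(grad_v, h=0.1))
```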
Recently, Hu & Chiang (2020) showed that a better representation of sub-grid scale turbulence is obtained by using the full velocity gradient tensor ∇ ⊗ ṽ in the diffusivity D, which is now a tensor. The model is the gradient model (Clark et al. 1979) and represents a major change, since diffusion now depends on the directionality encoded in ∇ ⊗ ṽ rather than acting equally in every spatial direction. The trace of ∇ ⊗ ṽ is automatically included and, therefore, compression is automatically handled, an important point for highly compressible turbulence in cosmological flows. However, Hu & Chiang (2020) post-processed their driven turbulence simulations to check if the model would have improved the results at simulation time. Our goal is to implement the model for the mesh-free finite mass method and determine its feasibility at simulation time, in combination with the dynamic procedure from Rennehan et al. (2019). We introduce an implementation of the gradient model for Lagrangian astrophysical simulations and additionally provide methods for computing the model coefficient at simulation time. In Section 2 we provide a derivation of the model, as well as a derivation of the dynamic procedure in Section 2.1 that allows calculation of the model coefficient at simulation time. Section 3 describes driven turbulence validation tests of the gradient model, included at run time, for both eddy viscosity and metal mixing. As a first application, we describe the qualitative impact of the eddy viscosity and metal mixing model on cosmological gas phases in Section 4. We present our conclusions and recommendations in Section 5.

THE GRADIENT MODEL

In finite-mass Lagrangian hydrodynamics, such as the mesh-free finite mass (MFM) method (Lanson & Vila 2008a,b; Gaburov & Nitadori 2011; Hopkins 2015), the build-up of kinetic energy at the resolution scale demands an additional dissipation mechanism. Additionally, metals follow the fluid mass elements throughout the simulation volume and, therefore, the exchange of metals between fluid elements due to sub-grid scale turbulent motion does not occur. The crux of the issue is that discretisation of the fluid field damps out the high-frequency turbulent fluctuations that should continue down to the physical dissipation scale. It is useful to think of the damping action as a high-pass filter acting on the fluid equations of motion. By applying a general filter to the momentum conservation equation, it is possible to derive the correct level of mixing that should occur between fluid elements due to unresolved turbulence. In general, the filtering action over a scalar field f(x) can be represented as

\bar{f}(\mathbf{x}) = \int_\Omega f(\mathbf{x}') \, G(\mathbf{x} - \mathbf{x}'; h) \, \mathrm{d}\mathbf{x}' ,   (1)

where G is the filter kernel, h is the smoothing scale, and Ω is the domain. We apply this to the momentum equation to determine the correction terms due to unresolved turbulence,

\frac{\partial (\rho \mathbf{v})}{\partial t} + \nabla \cdot (\rho \mathbf{v} \otimes \mathbf{v}) = -\nabla P ,   (2)

where ρ is the gas density, P is the pressure, and v is the velocity vector. If we apply equation (1) to equation (2) it follows that, assuming the filtering operation and derivatives commute,

\frac{\partial \overline{\rho \mathbf{v}}}{\partial t} + \nabla \cdot \overline{\rho \mathbf{v} \otimes \mathbf{v}} = -\nabla \bar{P} .   (3)

For simplicity, we switch to density-weighted variables such that \tilde{f} \equiv \overline{\rho f} / \bar{\rho}, and therefore \overline{\rho \mathbf{v} \otimes \mathbf{v}} = \bar{\rho} \, \widetilde{\mathbf{v} \otimes \mathbf{v}}. The term \widetilde{\mathbf{v} \otimes \mathbf{v}} is unknown at simulation time because it relies on information below the resolution scale.
To put the equation in a more manageable form, we add \nabla \cdot (\bar{\rho} [\tilde{\mathbf{v}} \otimes \tilde{\mathbf{v}} - \widetilde{\mathbf{v} \otimes \mathbf{v}}]) to both sides and rearrange,

\frac{\partial (\bar{\rho} \tilde{\mathbf{v}})}{\partial t} + \nabla \cdot (\bar{\rho} \tilde{\mathbf{v}} \otimes \tilde{\mathbf{v}}) = -\nabla \bar{P} - \nabla \cdot \left( \bar{\rho} [ \widetilde{\mathbf{v} \otimes \mathbf{v}} - \tilde{\mathbf{v}} \otimes \tilde{\mathbf{v}} ] \right) .   (5)

Therefore, we define the sub-grid scale flux

\boldsymbol{\tau} \equiv \bar{\rho} \left( \widetilde{\mathbf{v} \otimes \mathbf{v}} - \tilde{\mathbf{v}} \otimes \tilde{\mathbf{v}} \right) ,   (6)

and retrieve a new equation,

\frac{\partial (\bar{\rho} \tilde{\mathbf{v}})}{\partial t} + \nabla \cdot (\bar{\rho} \tilde{\mathbf{v}} \otimes \tilde{\mathbf{v}}) = -\nabla \bar{P} - \nabla \cdot \boldsymbol{\tau} .   (7)

The sub-grid scale momentum flux τ is unknown at simulation time and must be modelled, yet it is widely ignored in cosmological simulation studies, which usually focus only on the thermal energy and metal fluxes via the Smagorinsky model. There are a myriad of models in the literature for τ (see Garnier et al. 2009 for extensive lists), but here we take the direct approach of using a Taylor series approximation following Clark et al. (1979) and Hu & Chiang (2020). We expand \widetilde{\mathbf{v} \otimes \mathbf{v}} via a Taylor expansion as

\widetilde{\mathbf{v} \otimes \mathbf{v}} \approx \tilde{\mathbf{v}} \otimes \tilde{\mathbf{v}} + C' (\nabla \otimes \tilde{\mathbf{v}}) (\nabla \otimes \tilde{\mathbf{v}})^{\rm T} ,   (8)

and, therefore, the flux becomes

\boldsymbol{\tau} \approx C' \bar{\rho} (\nabla \otimes \tilde{\mathbf{v}}) (\nabla \otimes \tilde{\mathbf{v}})^{\rm T} ,   (9)

where C' is a constant that depends on the kernel scale h as C' ∝ h² (Monaghan 1989, 2002, 2011). Expanding the term on the right-hand side of equation (9) and keeping only the first derivative terms, we find

\boldsymbol{\tau} = C \bar{\rho} h^2 (\nabla \otimes \tilde{\mathbf{v}}) (\nabla \otimes \tilde{\mathbf{v}})^{\rm T} ,   (10)

where C is our model parameter. A similar result emerges when considering the mass flux of metals in a fluid,

\frac{\partial (\rho Z)}{\partial t} + \nabla \cdot (\rho Z \mathbf{v}) = 0 .   (11)

Applying the same approach as before gives an equation for the sub-grid flux of metals,

\boldsymbol{\tau}_Z = C_Z \bar{\rho} h^2 (\nabla \otimes \tilde{\mathbf{v}}) \cdot \nabla \tilde{Z} .   (12)

The method for solving equation (7) is detailed in Hopkins (2017), and we point the reader to that work for further information. From their equation (2), the diffusive flux takes the form ∇ · (ρ̄ D · ∇q), where D is the tensor describing the diffusive strength and q is the fluid field property. Therefore, we identify D ≡ C h² (∇ ⊗ ṽ). It is important to note that, following Section 3.0.6 of Hopkins (2017), we also include the dissipation term corresponding to ∇ · τ in the energy flux to ensure energy conservation. The model in equation (10) is known to lead to numerical instability due to particles attracting rather than repelling (Nomura & Post 1998; Balarac et al. 2013), similar to the well-studied tensile instability in smoothed particle magnetohydrodynamics (Phillips & Monaghan 1985; Morris 1996; Monaghan 2000; Price 2012a). Balarac et al. (2013) showed, specifically for the anisotropic eddy viscosity model, that ignoring the positive eigenvalues of the shear tensor ensures the model is always well behaved, so that the action of the model is to repel particles rather than attract. Therefore, we follow Balarac et al. (2013) and keep only the contribution due to the negative eigenvalues of the shear tensor. However, we must first decompose ∇ ⊗ ṽ into its symmetric and anti-symmetric parts,

\nabla \otimes \tilde{\mathbf{v}} = \mathbf{S} + \boldsymbol{\Omega} , \quad \mathbf{S} \equiv \tfrac{1}{2} \left[ \nabla \otimes \tilde{\mathbf{v}} + (\nabla \otimes \tilde{\mathbf{v}})^{\rm T} \right] , \quad \boldsymbol{\Omega} \equiv \tfrac{1}{2} \left[ \nabla \otimes \tilde{\mathbf{v}} - (\nabla \otimes \tilde{\mathbf{v}})^{\rm T} \right] .   (14)

We further decompose S into the contributions from the positive and negative eigenvalues as S ≡ S⁺ + S⁻. It then follows that

\mathbf{S}^{-} = \sum_i \min(\lambda^{(i)}, 0) \, \mathbf{e}^{(i)} \otimes \mathbf{e}^{(i)} ,   (16)

where λ⁽ⁱ⁾ is the i-th eigenvalue and e⁽ⁱ⁾ is the corresponding eigenvector. Therefore, the new diffusion coefficient tensor is

\mathbf{D} = C h^2 \mathbf{S}^{-} ,   (18)

where we have reduced the shear contribution to S⁻. The eigenvalue cut appears not to be necessary for the metal mixing case, as we found no cases of numerical instability in any of our hydrodynamical tests and cosmological simulations; however, we find that it is absolutely necessary in cosmological simulations for the momentum flux. It is important to note that our choice of h differs from that in Hopkins et al. (2018). We choose h as the kernel radius of compact support, as this is the maximum interaction distance for the flux in the MFM method. Physically, it is the maximum distance over which sub-grid eddies transport their fluid properties. We find much better results in our tests in Section 3 using the maximum interaction distance.
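The eigenvalue cut above can be illustrated with a short sketch of the construction of D = C h² S⁻; this is a schematic under our reconstruction of the notation, not code from the authors' implementation.

```python
# Minimal sketch of the anisotropic diffusion tensor D = C h^2 S_minus,
# keeping only the negative-eigenvalue part of the symmetric shear tensor
# (the Balarac et al. 2013 stability fix described in the text).
import numpy as np

def gradient_model_tensor(grad_v, h, c=0.22):
    """Anisotropic diffusion tensor for one fluid element."""
    s = 0.5 * (grad_v + grad_v.T)          # symmetric part of grad(v)
    lam, vecs = np.linalg.eigh(s)          # eigenpairs of the symmetric tensor
    s_minus = np.zeros_like(s)
    for l, e in zip(lam, vecs.T):          # rebuild S from negative
        if l < 0.0:                        # eigenvalues only
            s_minus += l * np.outer(e, e)
    return c * h ** 2 * s_minus

# A compressive flow (negative divergence) yields a non-zero tensor:
grad_v = np.diag([-1.0, -0.5, 0.2])
print(gradient_model_tensor(grad_v, h=0.1))
```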
However, we note that the radius of compact support differs between kernels and may not be the most accurate length-scale. Given that it is normally twice the smoothing scale for the kernel (Dehnen & Aly 2012), it provides the best compromise. Any kernel with a compact support radius larger than twice the smoothing scale should be investigated further before using our definition of h. Our value is approximately twice that found in the FIRE studies (Su et al. 2017; Escala et al. 2018; Hafen et al. 2019, 2020), but produces 4 times as much dissipation and metal mixing since the h dependence is squared in equation (18). Our choice of h is the same as Wadsley et al. (2017), who use the radius of compact support for turbulent mixing in the GASOLINE-2 code.

The dynamic gradient model

We apply the same procedure as in Balarac et al. (2013), combined with the density-weighted filtering procedure in Rennehan et al. (2019). Following their notation, we replace the density-weighted f̃ with f̄, since fluid properties are inherently density-weighted in the mesh-free finite mass method. Although we focus on the velocity fluctuations in the following procedure, we note that it applies equally to the metal field. The resolved fluctuations in the flow are

\mathcal{L} \equiv \widehat{\bar{\mathbf{v}} \otimes \bar{\mathbf{v}}} - \hat{\bar{\mathbf{v}}} \otimes \hat{\bar{\mathbf{v}}} ,   (19)

where \bar{\mathbf{v}} represents the velocity vector filtered once on the resolution scale, and the hat denotes filtering again on twice the resolution scale (ĥ ∼ 2h; see Section 2.4 of Rennehan et al. 2019). Explicitly, we represent the filtering operation on any scalar quantity from equation (1) as a sum (Monaghan 1989, 2005, 2011; Rennehan et al. 2019),

\hat{A}_i = \sum_j \frac{m_j}{\bar{\rho}_{ij}} A_j W(|\mathbf{x}_i - \mathbf{x}_j|; \beta \bar{h}_{ij}) ,   (20)

where A_j is any scalar quantity at particle j, \bar{h}_{ij} is the mean of the smoothing lengths of the two particles, \bar{\rho}_{ij} is the mean of the densities of the two particles, and the sum is taken over nearest neighbours. We take the smoothing factor β = 0.8 following Rennehan et al. (2019). To obtain doubly-filtered quantities, we apply equation (20) to the singly-filtered quantities. If we use only resolved (i.e. doubly-filtered) quantities, then the gradient model in equation (10) should reproduce L,

\mathcal{L} = C \, \hat{h}^2 (\nabla \otimes \hat{\bar{\mathbf{v}}}) (\nabla \otimes \hat{\bar{\mathbf{v}}})^{\rm T} \equiv C \mathcal{M} .   (22)

The above equation results in a least-squares solution for the one unknown parameter C,

C = \frac{\mathcal{L} : \mathcal{M}}{\mathcal{M} : \mathcal{M}} ,   (23)

where we have defined \mathcal{M} \equiv \hat{h}^2 (\nabla \otimes \hat{\bar{\mathbf{v}}}) (\nabla \otimes \hat{\bar{\mathbf{v}}})^{\rm T}. Using this determined value of C, we use equation (18) as the diffusivity tensor in the additional flux term from equation (10). Note that h is the resolution scale and not the radius of compact support of the kernel; the resolution scale is approximately half of the radius of compact support, or the mean interparticle spacing. Applying the same procedure to the metal field yields a separate equation for C_Z. The resolved fluctuations take a similar form to equation (19),

\mathcal{L}_Z \equiv \widehat{\bar{\mathbf{v}} \bar{Z}} - \hat{\bar{\mathbf{v}}} \hat{\bar{Z}} .   (24)

Note that L_Z is now a vector rather than a rank-2 tensor. Following the procedure outlined above results in an equation for C_Z,

C_Z = \frac{\mathcal{L}_Z \cdot \mathcal{M}_Z}{\mathcal{M}_Z \cdot \mathcal{M}_Z} ,

where we have defined \mathcal{M}_Z \equiv \hat{h}^2 (\nabla \otimes \hat{\bar{\mathbf{v}}}) \cdot \nabla \hat{\bar{Z}}.

Comparison to the Smagorinsky model

The gradient model differs from the widely used Smagorinsky model in a subtle yet important way. Returning to the definition of the sub-grid flux in equation (10), the Smagorinsky model represents τ as

\boldsymbol{\tau}_{\rm Smag} = -2 \bar{\rho} (C_{\rm s} h)^2 |\mathbf{S}^{*}| \mathbf{S}^{*} ,

where S* is the trace-free symmetric shear tensor. The diffusivity tensor is isotropic, with ||D_Smag|| = 2 (C_s h)² ||S*|| and C_s ∼ 0.15. The constant nature of C_s implies that the diffusivity scales with the magnitude of the symmetric shear or, more directly, that the strength of turbulent fluctuations is determined only by fluctuations in the shear strength. That is a good assumption in purely turbulent flows but fails dramatically in laminar shear flows, where the shear is not a good indicator of the presence of turbulence.
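The least-squares closure for the dynamic coefficient can be sketched as follows, assuming the per-particle tensors L and M have already been assembled from the filtered fields. The contraction form is the standard Lilly-style solution and is our assumption here, as is the clipping of negative coefficients.

```python
# Minimal sketch of the dynamic least-squares estimate C = (L : M)/(M : M),
# evaluated per particle; L and M are assumed precomputed on the test scale.
import numpy as np

def dynamic_coefficient(L, M, eps=1e-30):
    """L, M: arrays of shape (n_particles, 3, 3); returns C per particle."""
    num = np.einsum('nij,nij->n', L, M)    # tensor contraction L : M
    den = np.einsum('nij,nij->n', M, M)    # tensor contraction M : M
    c = num / np.maximum(den, eps)         # guard against vanishing shear
    return np.clip(c, 0.0, None)           # negative C would be anti-diffusive
```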
One solution to over-diffusion in the Smagorinsky model is to dynamically calculate the coefficient C_s at simulation time based on the local fluid properties. We showed in Rennehan et al. (2019) that the dynamic procedure predicts much lower values of C_s in the majority of our simple hydrodynamical tests. However, we did not consider the impact of altering the isotropic nature of the diffusivity. The gradient model has D_Grad ∝ S⁻ with ||D_Grad|| = 2 C h² ||S⁻||. The constant C is yet to be determined, but the direction differs from the dynamic and non-dynamic Smagorinsky models. The diffusivity itself no longer acts in each direction equally but acts along the eigenvectors of S⁻. However, in simple incompressible, low Mach number turbulent flows we expect that ||D_Smag|| ∼ ||D_Grad||, given that the velocity gradient tensor ∇ ⊗ ṽ is approximately isotropic in that regime. Our application of the dynamic method (Piomelli & Liu 1995) to the gradient model simultaneously allows C = C(x, t) and the diffusivity to be anisotropic, D_Grad ∝ S⁻. This should be important for any complicated astrophysical flows, such as those we investigate in the following sections. Table 1 contains a compact description of our model set. In all cases where there is a sub-grid turbulence model, we treat both metals and viscosity simultaneously. There are three categories of models: no sub-grid model (None), the Smagorinsky model, and the gradient model. The dynamic procedure allows us to extend the Smagorinsky and gradient models with a model parameter that depends on spatio-temporal coordinates. Additionally, we test the only other calibration of the Smagorinsky model in the mesh-free finite mass method, from the FIRE collaboration (Escala et al. 2018).

Models

For the Smagorinsky model, Smag., we use the theoretical value of C_s ∼ 0.15, and we limit C_s to 0.20 for the dynamic Smagorinsky model (Dyn. Smag.) to avoid numerical instability. The FIRE calibration (Escala et al. 2018) adopts a substantially lower coefficient. In the new gradient model we use fixed values of C = 0.22 and C_Z = 0.22 for the baseline comparison and label these as Grad. In our other tests, we use the dynamic procedure outlined in Section 2.1 and label these tests as Dyn. Grad. We derive our fixed values of C and C_Z from the approximate median value predicted by the dynamic procedure in the driven turbulence tests in Section 3.

HOMOGENEOUS TURBULENCE

Turbulence is ubiquitous in astrophysical flows on a myriad of scales and Mach numbers. Therefore, in this section, we investigate the impact of the gradient model on the velocity statistics and metal distributions in homogeneous, isotropic, driven turbulence at Mach numbers M ∈ {0.3, 0.7, 2.1}. For each M, our control (i.e., no sub-grid turbulence model) simulation set comprises 5 simulations with particle counts N ∈ {64³, 128³, 256³, 512³, 768³} within a box of side length L = 1, with initial pressure, density, and specific internal energy of P = 1, ρ = 1, and u = 1000, respectively. Initially, we place equal-mass particles on a uniform grid and then subsequently mix the gas via the prescription of Schmidt et al. (2006), ported to particle-based simulations in Price & Federrath (2010) and subsequently implemented in GADGET and GIZMO (Bauer & Springel 2012; Hopkins 2015). We list our turbulent driving parameters for GIZMO in Table 2 (cf. Table 1 in Bauer & Springel 2012). For more details of our approach, see Section 3.1 of Rennehan et al. (2019).
Our interest lies in measuring the impact of (1) the eddy-viscosity model on the velocity power spectra of these driven turbulence volumes and (2) the convergence of metal distribution functions in these volumes. However, these properties rely on driven turbulence volumes that are in statistical equilibrium. To gauge whether our simulations are in equilibrium, we define a mixing timescale t_mix ≡ L/v, where v is the expected average velocity of the particles in each volume. However, v = c_s M, where c_s = 1 is the isothermal sound speed of the gas and, additionally, L = 1. Therefore, t_mix = 1/M. We evolved each control simulation for several mixing timescales (∼ 4 t_mix) to ensure that the gas is in steady-state statistical equilibrium, and we confirmed the stability of the Mach number. The mixing timescales and steady-state Mach numbers are listed in Table 2. We measure the velocity power spectra following the same method as Bauer & Springel (2012), which is available in the public version of GIZMO. We note that there are debates in the literature over the most accurate method to compute the velocity power spectrum in Lagrangian hydrodynamical methods, particularly for classic smoothed particle hydrodynamics (Bauer & Springel 2012; Price 2012b). The biggest issue is reproducing the correct power on the smallest scales, near the maximum resolution. Shi et al. (2013) compared several methods and showed that a second-order moving least squares method produces the least error in reproducing the correct velocity power spectra on the smallest scales. However, it is not clear how this will generalise to the MFM method, where the power on smaller scales, as we discuss below, more closely resembles grid-based hydrodynamical methods. We choose to use the module available in GIZMO for easy comparison to the turbulence results of Bauer & Springel (2012) and Hopkins (2015), and leave comparison of the different power spectrum calculation methods to future work. It may seem out of place to study low resolutions, such as 64³ and 128³, when it is clearly possible to study resolutions up to 768³. The ultimate goal of our work is to apply the model to cosmological simulations, which have a huge dynamic range and, therefore, low resolutions in individual galaxies. For example, the IllustrisTNG (Pillepich et al. 2018) and RomulusC (Tremmel et al. 2019) simulations both have particle mass resolutions of ∼ 10⁵ M⊙ for each gas particle, at their best. Consider that in an L⋆ galaxy, we expect perhaps 10¹⁰ M⊙ of hot gas in the circumgalactic medium (Anderson & Bregman 2010). At the best resolutions we have today, that gives ∼ 10⁵ particles per L⋆ halo, or ∼ 50³ particles. Evidently, contemporary cosmological simulations that capture both hundreds of Mpc on the large scale as well as individual galaxies are far off from 256³ particles per galactic halo.

Velocity power spectra

A standard measure to determine whether eddy viscosity models improve the accuracy of hydrodynamical simulations is whether the velocity power spectrum reproduces the theoretically predicted Kolmogorov scaling, E(k) ∼ k^(−5/3). That scaling holds for incompressible, low Mach number turbulence (M ≪ 1) but is shallower than the apparent scaling in supersonic turbulence, E(k) ∼ k^(−2) (Federrath 2013). In physical turbulence, the dissipation scale is demarcated by a sharp decline from the Kolmogorov slope on the smallest scales.
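For reference, a minimal sketch of a shell-averaged velocity power spectrum from velocities already deposited on a uniform grid follows; the deposition step itself, the contested part of the measurement discussed above, is assumed to have been done already.

```python
# Minimal sketch: spherically averaged E(k) from gridded velocity
# components vx, vy, vz, each of shape (n, n, n), in a periodic box.
import numpy as np

def velocity_power_spectrum(vx, vy, vz, box_size=1.0):
    n = vx.shape[0]
    # Total spectral power of the three components, normalised by n^6.
    power = sum(np.abs(np.fft.fftn(v)) ** 2 for v in (vx, vy, vz)) / n ** 6
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    k = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()
    # Spherical shells of width one fundamental mode.
    k_edges = 2.0 * np.pi / box_size * np.arange(0.5, n // 2)
    shell = np.digitize(k, k_edges)
    e_k = np.bincount(shell, weights=power.ravel(), minlength=k_edges.size + 1)
    k_centres = 0.5 * (k_edges[:-1] + k_edges[1:])
    return k_centres, e_k[1:k_edges.size]  # E(k) summed per shell
```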
In simulations, the resolution scale forces dissipation to occur on a much larger scale than in nature, as the physical dissipation scale is unresolved. If the numerical viscosity of a hydrodynamical method cannot rapidly dissipate that energy, there will be a build-up of kinetic energy near the resolution scale that causes an unphysical representation of turbulence. Eddy-viscosity models introduce additional dissipation in the gas by accounting for the unrepresented scales in the flow or, equivalently, by minimizing the error from the missing terms in the equations of motion. The build-up of kinetic energy is usually observed as a "bump" in the velocity power spectrum where there is artificial correlation in the velocities on small scales. However, before we discuss the impact of eddy viscosity models on the power spectrum, we must first test the convergence of the mesh-free finite mass (MFM) method in simulations without eddy viscosity. Fig. 1 shows the velocity power spectra for our set of simulations with particle counts 64³, 128³, 256³, 512³, and 768³, coloured by lines from lightest to darkest, respectively. The power spectra are compensated for easy comparison to Bauer & Springel (2012). The panels are ordered from lowest to highest Mach number from top to bottom: M ∼ 0.3, 0.7, and 2.1, respectively. In each panel, the dotted line represents the predicted scaling at an arbitrary normalisation. From the top panel of Fig. 1, it is apparent that 64³ and 128³ do not faithfully represent a turbulent gas, as they are dominated by the bump. More precisely, the E(k) scaling is much too shallow compared to Kolmogorov turbulence for a wide range of k. Our 256³ simulation shows an inkling of the inertial range scaling but is slightly too steep below k ∼ 20 and dominated by the bump at k ∼ 30. As we move up in resolution the inertial range grows only slightly. At our highest resolution, the inertial range spans k ∼ 40 to k ∼ 60 and the bump dominates the small scales. We skip discussion of the middle panel as the results are qualitatively equivalent between M ∼ 0.3 and 0.7. In the bottom panel of Fig. 1, we show the compensated power spectra for supersonic turbulence at M ∼ 2.1. It is immediately evident that the simulations converge much more rapidly to the proper scaling than in subsonic turbulence. Still, at 64³ resolution the power spectrum is dominated by the build-up of kinetic energy near the resolution scale, with the inertial range only beginning to appear at 128³ resolution. At our highest resolution, 768³, there is arguably an entire order of magnitude resolved in the inertial range before the build-up of kinetic energy dominates at the smallest scales. Evidently, sub-grid eddy viscosity models are required for the mesh-free finite mass (MFM) method at any resolution that may be used in astrophysical environments. We show the impact of our eddy viscosity models in Fig. 2. The left column shows the compensated power spectra, E(k), as a function of wavenumber k. The rows represent the same three Mach numbers M = 0.3, 0.7, and 2.1 from top to bottom, respectively. The dotted black line shows the Kolmogorov scaling at each Mach number. All of the simulations were run at 256³ resolution, since that is the resolution where, with no eddy viscosity model, we begin to see an extended inertial range and can distinguish the kinetic energy "bump". The coloured curves show the eddy viscosity models, shaded from lightest to darkest: Dyn. Smag. (solid salmon), Smag.
(solid magenta), Dyn. Grad. (dotted purple), and Grad. (dashed purple). To explain the right column, we must first explain the results in the left column. The left column of Fig. 2 shows the velocity power spectra of the turbulent gas in our simulated volumes with the same models presented in Section 2. The black curve shows the control experiment at 256³ resolution, i.e. the simulation with no eddy viscosity model and only numerical dissipation, as in Fig. 1. All Mach numbers show the same trend: the eddy viscosity models have little impact on reducing the build-up of kinetic energy at small scales. Especially important is that the subsonic (M < 1) simulations are much less improved than the supersonic case. However, the new gradient model variants, Grad. and Dyn. Grad., dissipate slightly more rapidly and allow for a steeper slope closer to the Kolmogorov scaling. We did not expect a priori that all of the eddy viscosity implementations would fail to reduce the kinetic energy build-up. The fact that there is not enough dissipation suggests that some physical property in the diffusion tensor was assigned incorrectly. As we outlined in Section 2.2, the diffusion strength for the Smagorinsky and gradient model classes should effectively scale with each other (||D_Smag|| ∼ ||D_Grad||) in isotropic, homogeneous turbulence. Therefore, in both classes of models, there are only two physically-motivated quantities that control the strength of diffusion: the length-scale h and the velocity gradient tensor ∇ ⊗ ṽ. The velocity tensor should not be the issue, since it has been verified through the hydrodynamical tests in Hopkins (2015) and would cause the MFM method to fail drastically if the velocity gradients were incorrect. That leaves h as the issue, suggesting that our estimate of the scale over which the eddy viscosity interactions propagate is underestimated. Therefore, we introduce a boost factor β to the diffusion tensor (i.e., ||D|| → β||D||) in order to get a more reasonable scaling in the inertial range. We determined the boost factor by running a series of driven turbulence tests with discrete values from β = 1 to β = 100. We did not perform a quantitative fit to the Kolmogorov scaling, as our interest is in the approximate offset required to improve the inertial scaling, and the exact β is unimportant for the statistics of the flow. We additionally found that we only need to correct the diffusion strength in particles that are subsonic, M < 1, with the Mach number for each particle derived from the current velocity of the particle divided by its thermal sound speed. The right column of Fig. 2 shows the velocity power spectra of our simulations with a dissipation boost factor of β ∼ 10 applied only to the subsonic particles in each simulation. At all Mach numbers the additional dissipation causes the kinetic energy to convert into thermal energy much more readily, causing the disappearance of the additional power at small scales, k ≳ 40. The gradient models Grad. and Dyn. Grad. perform better than the Smagorinsky variants, but only slightly. That is expected in isotropic homogeneous turbulence, since the dissipation strength is effectively the same, and it confirms that our implementation of the gradient model is a successful eddy viscosity model. It is very important to emphasise that the results we observe in Fig.
1 are very similar to those found in Bauer & Springel (2012) and Hopkins (2015) for the moving-mesh method (MM; as implemented in AREPO) and the mesh-free finite mass method (MFM; as implemented in GIZMO), respectively. The only difference is that we show results for much higher resolutions, where we begin to see an extended inertial range. Both the MFM and MM hydrodynamical methods produce the same kinetic energy bump that exists in grid-based methods and, therefore, we would expect an eddy viscosity model to also solve the problem in the MM method, although we do not test that in this work. To reiterate, it is necessary that all hydrodynamical simulations resolve the inertial range combined with an immediate sharp drop-off in power, or they are not reproducing what we physically observe as turbulence below the scale where the build-up of kinetic energy begins to dominate.

Figure 2. The left column has no additional increase in dissipation, whereas the right column has an order-of-magnitude boost in dissipation on particles with M < 1. The black curve shows the 256³ simulation with no eddy viscosity model. The other solid lines show the Smag. and Dyn. Smag. models, and the dashed and dotted lines show the Grad. and Dyn. Grad. models, respectively. While all of the models improve the inertial scaling, an additional boost factor (β ∼ 10) for subsonic particles is required to reproduce the proper scaling at all Mach numbers.

Metal mixing

After each simulation reached ∼ 4 t_mix, we treated each steady-state volume as new initial conditions for our metal mixing study. In each volume, we gave the densest 50% of particles a metal mass fraction of Z = 1 while keeping the rest at Z = 0. The metals in our simulations act as passive scalars and have no impact on the flow properties. We will test the model with more realistic metal distributions in Section 4. First, we must determine whether our simulations converge toward a solution for the metal distribution as we increase resolution. We ran each of the metal-enriched volumes for an additional 4 t_mix to sample a wide variety of metal distribution states. We expect a priori that by ∼ 2 t_mix the metal-enriched particles should be scattered approximately homogeneously, since a particle with the typical velocity should have crossed the volume twice in that time. Although that is true for all resolutions, how can we compare each resolution on equal footing after it has reached equilibrium? The appropriate comparison involves smoothing the spatial distribution of metals on the same scale in all of our simulations. The main assumption is that our simulations with particle counts ≥ 128³ contain more accurate information on the scales equivalent to our 64³ simulations. Equivalently, if we degrade the resolution of the highest-resolution simulations to the lowest resolution, we should hope to obtain a result similar to the lowest-resolution simulation. To degrade the resolution of each simulation, we first kernel-weight the particle data onto a grid with resolution twice as fine as the minimum smoothing length in the simulation, Δ_sim,i. Next, we smooth the grid data on a physical scale equivalent to our 64³ simulation using a uniform top-hat filter (specifically, the uniform_filter function from the scipy package in Python, with periodicity enabled) with width N = Δ_low/Δ_sim,i, where Δ_low ≡ 1/64 since our box has side length L = 1.
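The degrading step described above amounts to a periodic top-hat convolution; a minimal sketch using the scipy function cited in the text:

```python
# Minimal sketch: smooth a gridded metal field to the scale of the 64^3
# run with a periodic top-hat filter, as described in the text.
import numpy as np
from scipy.ndimage import uniform_filter

def degrade_metal_grid(metal_grid, delta_sim, delta_low=1.0 / 64.0):
    """Return the metal field smoothed on the common comparison scale."""
    width = max(int(round(delta_low / delta_sim)), 1)  # top-hat width in cells
    return uniform_filter(metal_grid, size=width, mode='wrap')  # periodic box
```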
Fig. 3 shows the normalised histograms of the filtered metal field. The panels show Mach numbers M = 0.3, 0.7, and 2.1 from top to bottom, respectively. The black curves with markers show the convergence of the filtered metal field for resolutions 64³, 128³, 256³, 512³, and 768³. The coloured lines show, from lightest to darkest: Dyn. Smag. (solid salmon), Smag. (solid magenta), FIRE (dotted magenta), Dyn. Grad. (dashed purple), and Grad. (solid purple) at 64³ resolution. We obtained all of the information for this figure after approximately two mixing timescales, ∼ 2 t_mix, where t_mix = 1/M. All of the sub-grid models except the FIRE calibration predict a more accurate large-scale metal distribution at lower resolution. Fig. 4 shows the standard deviation σ_Z of the smoothed metal distribution as a function of resolution N_x ∈ {64, 128, 256, 512, 768} in our simulations at ∼ 2 t_mix, for Mach numbers 0.3, 0.7, and 2.1 from top to bottom, respectively. The stars correspond to the simulations without a sub-grid metal mixing model at the resolution given by their labels. The remaining symbols in the legend correspond to the simulations at 64³ with a sub-grid metal mixing model (see Table 1 for a description). The dotted line shows an exponential decay fit to test convergence in the simulations without metal mixing. We expect decreasing σ_Z with increasing resolution, since hydrodynamical mixing is better resolved and the metal value in each grid cell approaches the mean. At M ∼ 0.3, in the top panel of Fig. 4, σ_Z follows an exponentially decreasing trend with resolution in the simulations without metal mixing, as expected. However, there is no evidence for strong convergence at our highest resolution, although the curve appears to be beginning to flatten. The inverted triangle shows the results for the Dyn. Smag. model and the left-pointing triangle shows the result for the Smag. model. Both the dynamic and standard Smagorinsky models predict a more reasonable σ_Z, reproducing a metal distribution closer to a resolution of 512³. The FIRE calibration of the Smagorinsky model, marked as ×, shows little improvement in σ_Z; the result is effectively equivalent to having no model at all. The Dyn. Grad. and Grad. models are marked by a plus sign and diamond, respectively. The gradient model clearly improves σ_Z and can, at 64³ resolution, also reproduce a metal distribution equivalent to a resolution of 512³. The trends are equivalent for M ∼ 0.7 turbulence, so we continue to the next panel. The bottom panel of Fig. 4 shows how the sub-grid metal mixing models impact supersonic turbulence at M ∼ 2.1. There is much better convergence of the metal distribution at ∼ 2 t_mix than in subsonic turbulence at this time. The qualitative trend remains the same for the 64³ simulations with sub-grid metal mixing: the gradient and Smagorinsky models perform equally well. However, the widths σ_Z of the distributions are much larger in M ∼ 2.1 turbulence. With the exception of the FIRE calibration, the metal mixing models at 64³ resolution predict metal distributions similar to 256³. Our turbulence tests with metals demonstrate that sub-grid metal mixing models are necessary in the mesh-free finite mass (MFM) method if one desires more accurate metal distributions. The method we provide for calibrating the metal mixing models is important and, we argue, must be applied whenever one implements a new metal mixing model into any hydrodynamics solver.
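Schematically, the convergence test reduces to tracking σ_Z against resolution and checking for flattening; the following minimal sketch uses an assumed exponential-decay form and placeholder σ_Z values, not measurements from our runs.

```python
# Minimal sketch of the sigma_Z convergence fit; the functional form
# sigma_Z(N) = a * exp(-N / b) + c and the data values are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def decay(n, a, b, c):
    return a * np.exp(-n / b) + c

resolutions = np.array([64.0, 128.0, 256.0, 512.0, 768.0])
sigma_z = np.array([0.30, 0.24, 0.19, 0.16, 0.15])   # hypothetical values

(a, b, c), _ = curve_fit(decay, resolutions, sigma_z, p0=(0.3, 250.0, 0.1))
print(f"asymptotic sigma_Z ~ {c:.3f}")  # the converged mixing level
```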
Particularly, one must calibrate the model parameter at the desired resolution if a dynamic procedure is not applied. That is an important point: the dynamic procedure (Dyn. Smag. and Dyn. Grad.) allows us to approximate the calibrated model parameter for the corresponding model (Smag. and Grad.) without carrying out the calibration. However, the true power of the dynamic procedure is in simulations with mixtures of non-turbulent and turbulent gas at various Mach numbers. In those complex environments the dynamic procedure automatically adjusts the model parameter and, as we showed in Rennehan et al. (2019) for the Dyn. Smag. model, drastically alters the resulting metal distributions. As expected, in pure homogeneous, isotropic turbulence all of the sub-grid metal mixing models improved the accuracy of the metal distributions when using a proper calibration. That is expected in the homogeneous, isotropic case, since all of the models effectively act in the same way on average. The power of the dynamic procedure and the new anisotropic model is in cosmological simulations, where many complex flows interact.

Applicability to other hydrodynamical methods

While our interest lies in the mesh-free finite mass (MFM) method, the models in Section 2 and experiments in Section 3 are applicable to other hydrodynamical solvers. Specifically, our results extrapolate with minor modification to grid-based hydrodynamics. Only slight changes to the filtering method must be implemented, as outlined in Schmidt (2015). More care must be taken when applying the model to smoothed particle hydrodynamics (SPH). However, as we outline below, there is much broader applicability to the moving-mesh (MM) method. First we consider eddy viscosity. In SPH, it is well established that there is a deficit of power near the resolution scale rather than a build-up of kinetic energy (Bauer & Springel 2012; Price 2012b; Hopkins 2013, 2015). That fact suggests that SPH reproduces turbulence better than the MFM or MM methods, but produces results at a much lower effective resolution. The lack of power, rather than the overabundance of power, implies that an eddy viscosity model would only further degrade the resolution of SPH results and not improve the inertial scaling. Therefore, we do not recommend eddy viscosity models for SPH, but rather the work of Di Mascio et al. (2017), who recently provided an SPH equivalent. For the MM method, the build-up of kinetic energy at the resolution scale is present (Bauer & Springel 2012) and of equivalent magnitude to our results in the MFM method. Therefore, we recommend investigation into eddy viscosity models for the MM method, as they could improve the inertial scaling. In terms of implementation, all of the derivations in Section 2 apply to the MM method. Metal mixing using the Smagorinsky model has been studied in cosmological simulations involving SPH but widely ignored in the MM method. However, no calibration technique has been provided by the community for SPH, and the calibrations usually follow the theoretical value for the Smagorinsky model (e.g., Shen et al. 2010; Williamson et al. 2016) or calibrations that depend on sub-grid astrophysics models (e.g., Wadsley et al. 2017; Escala et al. 2018). The metal mixing calibration technique in Section 3 is completely applicable to SPH, since metals are treated equivalently to the MFM method and both are constant-mass methods.
Additionally, our calibration technique does not depend on the uncertainties within astrophysical sub-grid models, only pure hydrodynamics. For the MM method, all of Section 2 is applicable for metal mixing, since the MM method relies on transport equations such as equation (2) for advecting metals throughout the fluid (Springel 2010). In fact, Balarac et al. (2013) find that the gradient model improves the inertial scaling of the power in the metal field through identical transport equations. However, convergence tests such as those in Fig. 4 are necessary in order to determine whether these models are truly required, or whether numerical dissipation is adequate.

COSMOLOGICAL SIMULATIONS

Understanding the evolution of galaxies is a complex enterprise involving highly non-linear, coupled physical processes. Not only do stellar feedback and active galactic nuclei produce powerful outflows that drive turbulence locally in the interstellar media of galaxies, but also in the gas reservoirs surrounding galaxies. Turbulence also appears through the Kelvin-Helmholtz instability in ram-pressure stripping of galaxies moving through a hot medium, and through the stellar winds from stars making their way out from the galaxy into the circumgalactic medium. The physical processes above occur on spatial scales much smaller than currently possible to resolve in the average contemporary cosmological simulation. For that reason, the majority of the astrophysics in cosmological simulations is encoded into parametrised sub-grid models that use the information on the largest scales to predict what occurs below the resolution of the simulation. The resulting calculations usually indicate how much mass and energy should be injected into (or removed from) the large-scale gas and stellar components. However, there is no one correct way to approximate the astrophysics on the sub-grid scale, since it depends strongly on the maximum possible resolution, the hydrodynamical method, and other complex numerical effects. All of the issues with numerics and missing physics usually end up in one or more tunable free parameters in the model. Assuming such a sub-grid astrophysical model is developed, how do we know that it is correct? Or at least approximating reality? Normally, one or more trusted astronomical observations are used to test the validity of all of the sub-grid astrophysics that may exist. A common example would be the galaxy stellar mass function, or the M_BH-M⋆ relationship that links supermassive black hole masses to the stellar masses of their host galaxies. However, two different hydrodynamical methods may require different parameter values for the same sub-grid astrophysical models. Additionally, there may be two completely different approaches to modelling the same physical phenomenon with no clear mapping between free parameters! Calibrating sub-grid astrophysical models is obviously a complicated endeavour and must be built on a strong hydrodynamics base. How can we begin to trust that our understanding of the astrophysics on small scales approximates reality if the hydrodynamics, as we showed in Section 3, does not reproduce reality? Our goal is to determine whether the converged and separately calibrated sub-grid turbulence models we presented in Section 2 have any significant impact over the sub-grid astrophysical models that we use in large-volume cosmological simulations.
As a first step, we only investigate the broad, qualitative impact on the gas properties in gaseous halos with a single set of sub-grid astrophysical models. We stress that we do not intend to reproduce the full galaxy population in a calibrated and predictive sense. Additionally, we note that more testing is required across all of the sub-grid astrophysical models that exist in the literature, as the turbulent mixing models may interact in unexpected ways due to the non-linearity of the problem.

The simulations

For our comparison, we choose to use the SIMBA galaxy formation model. SIMBA includes robust sub-grid models of star formation, cooling, stellar feedback, chemical enrichment, active galactic nuclei feedback, and dust evolution, all evolved in concert with the mesh-free finite mass (MFM) method (Davé et al. 2016, 2019). For this study, we implemented the SIMBA models into the public version of GIZMO as described in Davé et al. (2019), and we point the interested reader to that study for the details of the sub-grid models. We follow the approach of Schaye et al. (2015) and calibrate our implementation of the SIMBA model only to the galaxy stellar mass function and the M_BH-M⋆ relationship at z = 0 for the purposes of this study. We run 6 cosmological-scale volumes of side length L = 25 cMpc h⁻¹ (∼ 37 cMpc) in order to compare our various mixing models. The simulations begin from initial conditions generated with MUSIC (Hahn & Abel 2011) at a redshift of z = 249, with a standard Λ cold dark matter cosmology (see Table 3 for values). The mass resolutions in gas and dark matter follow the SIMBA simulations, with m_gas = 1.26 × 10⁷ M⊙ h⁻¹ and m_dark = 6.88 × 10⁷ M⊙ h⁻¹, respectively. We use adaptive gravitational softening (Hopkins 2015; Hopkins et al. 2018) to compute the softening lengths of all of our particles, and enforce a minimum softening length of ε_soft,min = 0.5 kpc h⁻¹.

Global metal mixing

While sub-grid scale turbulence models maximally impact the smallest scales in a cosmological simulation, their integrated effect impacts the properties of the largest scales, such as global metal distribution functions (Shen et al. 2010; Escala et al. 2018; Rennehan et al. 2019). Therefore, in this section, we examine the impact of the gradient model on the metal distribution functions (MDFs) in the circumgalactic medium (CGM) and warm-hot intergalactic medium (WHIM), both known to be turbulent environments (Iapichino et al. ). Our definition of the CGM and WHIM depends on separating gas that is bound to halos from that which is unbound at a given epoch. A good estimate comes from Davé et al. (2010), who define a bounding density ρ_bound(z) (our equation 28) in terms of Ω_b(z), the baryon fraction as a function of redshift, Ω_m(z), the matter fraction, the critical density ρ_c(z) = 3(H(z))²/(8πG), and the redshift-dependent Hubble function H(z). All gas above ρ_bound(z) we consider bound to halos, and we have confirmed that the approximation holds well. We define the CGM to be all gas in the volume that is above ρ_bound(z) in equation (28) and below the star formation density threshold, ρ_*,crit = 4.4 × 10⁻²⁵ g cm⁻³, at any temperature. That includes gas in the intragroup medium of our most massive halos in the (25 cMpc h⁻¹)³ volumes. The WHIM is all gas that is below ρ_bound(z) and above a temperature of T = 10⁵ K.
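The phase definitions above translate directly into boolean cuts on the particle data; a minimal sketch follows, with ρ_bound(z) taken as a precomputed input since its expression is given in Davé et al. (2010).

```python
# Minimal sketch of the CGM/WHIM phase cuts described in the text.
# rho and temperature are per-particle arrays in cgs units; rho_bound
# must be evaluated at the snapshot redshift from Davé et al. (2010).
import numpy as np

RHO_SF_CRIT = 4.4e-25  # g cm^-3, star formation density threshold

def phase_masks(rho, temperature, rho_bound):
    cgm = (rho > rho_bound) & (rho < RHO_SF_CRIT)      # bound, non-star-forming
    whim = (rho <= rho_bound) & (temperature > 1.0e5)  # unbound and hot
    return cgm, whim
```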
Fig. 5 shows the metal distribution functions (MDFs) for our two gas phases in columns, WHIM (left) and CGM (right), at z = 6, 4, 2, and 0 in rows from top to bottom, respectively. These are probability density functions, constructed by binning the particle metallicities in the range 10⁻⁶ < Z/Z⊙ < 10¹, where Z⊙ = 0.0134 (Asplund et al. 2009). The black curves show the control simulation, None, with no sub-grid metal mixing. The coloured curves show the simulations with sub-grid metal mixing and are, from lightest to darkest: Dyn. Smag. (solid salmon), Smag. (solid magenta), FIRE (dotted magenta), Dyn. Grad. (dotted purple), and Grad. (dashed purple). See Table 1 for more details. First we focus on the WHIM. At z = 6, there are two distinct components across all of our model variants. The peak at ∼ 10⁻¹ Z⊙ is the highly enriched interstellar medium (ISM) gas that recently joined the WHIM via stellar winds from the integrated star formation in the early universe. The lower distribution is gas that has mixed into the WHIM but did not recycle through the ISM, missing the opportunity for further enrichment via supernova feedback. It is important to note that, since there is no mixing in the None case, when a particle leaves the ISM it cannot change its metallicity. The models that include sub-grid mixing show varying spread in the MDFs, with the FIRE and Dyn. Smag. models showing the widest spread. The Dyn. Grad. and Grad. models show the tightest distributions, with the two peaks in the distributions seemingly merging at ∼ 10⁻¹·⁵ Z⊙. The Smag. model matches the Dyn. Smag. model above ∼ 10⁻³ Z⊙, but is biased toward higher metallicities below that threshold. The next 3 panels in the left column of Fig. 5 show the MDFs in the WHIM for z = 4, 2, and 0, from top to bottom, respectively. The trend for all models is to approach a singly-peaked distribution as the simulation evolves through cosmic time. Most of the evolution in the MDFs occurs from z = 6 to z = 2, after which the distributions are mostly stationary. The transition from z = 6 to z = 4 demonstrates how rapidly the WHIM evolves at high redshift, and how each sub-grid metal mixing model impacts the MDFs with a different mixing rate. Specifically, the Dyn. Grad. and Grad. models predict similar distributions at z = 4 and produce the tightest MDFs compared to all of the other models. In fact, the trend at z = 4 is the same as at z = 6: the gradient model variants (Dyn. Grad. and Grad.) predict tighter MDFs, followed by wider distributions in the Smagorinsky models (Smag., Dyn. Smag., and FIRE, respectively). There are similar trends in the CGM, as we see in the right column of Fig. 5. To reiterate, the panels show redshifts z = 6, 4, 2, and 0 from top to bottom, respectively. At z = 6, there is a clear distinction between the distributions below and above ∼ 10⁻² Z⊙ in the None case and the Smagorinsky variants (Smag., Dyn. Smag., and FIRE). Stellar feedback drives the peak at ∼ 10⁻¹ Z⊙, similarly to the WHIM at this redshift, whereas the distribution below ∼ 10⁻² Z⊙ is from the very first generations of stars. By this time the Dyn. Grad. and Grad. models have mixed the gas rapidly enough to create a single broad distribution in their MDFs. All of the models with sub-grid metal mixing have much more gas mass enriched above Z > 10⁻⁶ Z⊙ than the None case, especially compared to the deficit at ∼ 10⁻¹·⁵ Z⊙ in the None case. The Smagorinsky models vary in mixing rate as Smag., Dyn. Smag., and FIRE, from fastest to slowest, respectively. The MDFs in the CGM at redshifts z = 4 to z = 0 demonstrate the same trends as in the WHIM phase at the same redshifts: the gradient models mix much more rapidly at early stages than the Smagorinsky models.
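For concreteness, a minimal sketch of how an MDF like those in Fig. 5 can be constructed from particle metallicities, assuming the binning described above:

```python
# Minimal sketch: build an MDF as a probability density over log(Z/Zsun).
import numpy as np

Z_SUN = 0.0134  # Asplund et al. (2009)

def mdf(metallicity, n_bins=100):
    """Return bin centres and the normalised PDF of log10(Z/Zsun)."""
    logz = np.log10(np.clip(metallicity / Z_SUN, 1e-6, 1e1))  # clamp to range
    pdf, edges = np.histogram(logz, bins=n_bins, range=(-6.0, 1.0), density=True)
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, pdf
```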
At z = 0 the Dyn. Grad., Grad., and Smag. models predict the same distribution in the global CGM phase, whereas the Dyn. Smag. and FIRE models predict slightly less enriched gas.

The MDFs in the turbulent WHIM and CGM show the importance of sub-grid metal mixing models in cosmological simulations, as well as the importance of model choice. In all cases where we include metal mixing, the MDFs are significantly tighter at all redshifts we measure, and significantly tighter for the CGM at z = 0. This is contrary to the study of Su et al. (2017), which found metal mixing to be relatively unimportant on cosmological scales. However, we find that the FIRE calibration is much too low to reproduce the correct converged hydrodynamical mixing of metals (see Section 3.2). With our new calibration of the Smagorinsky model, Smag., and the new gradient models, Dyn. Grad. and Grad., we see significant differences at all redshifts.

A common theme in theoretical galaxy evolution is that equivalent results between different models at z = 0 do not necessarily imply a similar integrated history. The evolutionary paths for each sub-grid metal mixing model all evolve at slightly different rates, as we would expect based on their diffusivities from Section 2. The lesson from our results is that the metal mixing model choice impacts the early development phases of galaxies rather than the long-term equilibrium stages. At higher redshifts, z ≳ 2, gas is collapsing to form galaxies, while stellar feedback and supermassive black holes are driving outflows out of the potential wells and forcing turbulence. Our inclusion of the full diffusion tensor in the Dyn. Grad. and Grad. models allows the gas that is compressing from feedback and infall to further mix its metal mass with nearby neighbours, tightening the MDFs. The Smagorinsky variants also improve the results and allow metal mixing between gas particles, but produce broader distributions, notwithstanding the Dyn. Smag. and Smag. models showing good matches in the convergence tests of Section 3.2. This is an important point: the simple turbulence tests in Section 3.2 showed agreement between the gradient and Smagorinsky models (except for the lower FIRE calibration), but now we see disagreement in complex cosmological environments. Evidently, ignoring the trace of the velocity tensor is not the correct approach for cosmological contexts, and we recommend either the Grad. or Dyn. Grad. models, as we see no difference with the dynamic procedure applied to the constant-coefficient gradient case.

The impact of eddy viscosity

Figure 6. Temperature projections of the most massive halo in three of our cosmological simulations at redshifts z = 2, 1, and 0 in rows from top to bottom, respectively. The columns show our None, Smag., and Grad. models from left to right, respectively. Each of the panels represents a 1 Mpc by 1 Mpc (physical) region centred on the most massive galaxy at each redshift. The Smag. simulation shows the most small-scale structure at all redshifts, and a smoother distribution of temperature at high redshift compared to the None case. The Grad. model produces less small-scale structure than any other model, and much more hot gas at high redshift. Surprisingly, the Grad. model also produces more extended tails from sub-structure moving through the hot halo at lower redshift, suggesting it may impact future studies of jellyfish galaxies.

While the velocity power spectra results in Section 3.1 show that eddy viscosity is required in Lagrangian finite mass methods, we
find no significant impact of eddy viscosity on the average gas and galaxy properties in our cosmological simulations. However, we should expect these results, considering that the eddy viscosity models we tested in Section 3.1 had no impact on the largest scales of the simulation (as they should not). In terms of global gas properties, we investigated the vorticity, temperature, and density distributions of the warm-hot intergalactic medium (WHIM) and circumgalactic medium (CGM) phases (definitions in Section 4.2) and found only minor differences in data binned by galaxy stellar mass. Additionally, there was little difference between the stellar mass distributions in our small-volume, low-resolution tests.

To compare the impact of eddy viscosity between the models we need to investigate the small scales of the simulations in a controlled manner. For that reason, we will use the most massive galaxy in our cosmological simulations as a qualitative case study of the impact of the eddy viscosity model on the halo gas, since this galaxy represents the same system in each simulation we ran. First, we investigate the temperature projections of the most massive (in stellar mass) galaxy at redshifts z = 2, 1, and 0. We confirmed that the most massive galaxy at z = 2 ends up as part of the most massive galaxy at z = 1 and z = 0. We restrict our analysis to a radius of 500 kpc (physical) for the purposes of this introductory study. Additionally, given that we see the Smag., Grad., and Dyn. Grad. models as the best choices from the results in Section 4.2, we restrict our analysis to the None, Smag., and Grad. models. The Grad. model is much less computationally expensive than the Dyn. Grad. model and is, therefore, the better choice for cosmological simulations.

Fig. 6 shows the density-weighted temperature projections of the most massive galaxy at z = 2, 1, and 0 in rows from top to bottom, respectively. The columns show the None, Smag., and Grad. models from left to right, respectively. First, we compare the results at z = 2 across mixing model variants. The dark clumps in the None case are individual cold galaxies that are being fed into the main structure via filaments. (From the caption of Fig. 7: the Smag. model shows more substructure than the None and Grad. models. The Grad. model shows many extended tails from substructure moving through the hot halo compared to the control and Smag. models, in addition to an overall smoother distribution in velocity space.) Ongoing stellar and AGN feedback leads to the temperature increase in the central region, while the extended ~10^6 K halo is a mixture of gravitationally heated gas and gas heated by previous generations of stellar feedback and AGN. There are many filamentary structures visible feeding the main galaxy (centred), and there is some small-scale structure visible surrounding the galaxies. The Smag. case resembles the None case, except there is less cold gas in the upper in-falling structure. Additionally, the satellite galaxies have more small-scale structure in the cold gas surrounding them. Importantly, the distribution of hotter gas appears more smoothly distributed throughout the volume due to an increased conversion of kinetic energy into thermal energy via the eddy viscosity.
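For reference, a density-weighted projection such as those in Fig. 6 can be approximated by depositing particles onto a 2D grid. The sketch below uses nearest-grid-point assignment for brevity (the actual maps would use kernel smoothing), and the argument names are illustrative:

import numpy as np

def projected_temperature(x, y, rho, T, extent=1000.0, npix=512):
    # x, y in kpc, centred on the galaxy; returns sum(rho*T)/sum(rho)
    # per pixel over an extent x extent (kpc) field of view.
    rng = [[-0.5 * extent, 0.5 * extent]] * 2
    num, _, _ = np.histogram2d(x, y, bins=npix, range=rng, weights=rho * T)
    den, _, _ = np.histogram2d(x, y, bins=npix, range=rng, weights=rho)
    return np.where(den > 0.0, num / np.maximum(den, 1e-30), np.nan)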
The temperature projection of the Grad. model simulation is strikingly different at z = 2 from both the Smag. and None cases. While in the None and Smag. cases there appears to be large-scale gas at ~10^5 K at the boundary of the 500 kpc radius, there is no such gas in the Grad. case. There is much less small-scale structure in the Grad. case, and the filaments are much more smoothly distributed in space. We conclude that the stellar feedback, gravitational heating, and AGN feedback are much more effective at heating the gas on small scales in the Grad. simulation, since its eddy viscosity is the strongest.

At z = 1, in the middle row of Fig. 6, a similar picture emerges. The Smag. case has much more small-scale structure than the None case, as is evident in the central region and in the leftmost satellite galaxy structure. There is still a much smoother distribution of cold gas that extends outwards from the satellite galaxies. There are only a few centrally-located cold gas clumps in the Grad. case compared with the Smag. case, and there is effectively no identifiable fragmentation in the satellite galaxy structure to the left of the panel.

The last row of Fig. 6 shows z = 0 across the three model comparisons. By this redshift, there is little structure remaining in the group-sized halo and the temperature distribution of the intragroup gas is very smooth. Similarly to the other redshifts, the Smag. case shows the most fragmentation of cold gas, followed by the None case, and then the Grad. case. All three show a satellite being stripped of gas in the upper right of the panels, but the Smag. and Grad. models each differ in an important, unique way compared to the None case. It is much easier to see the differences in ram pressure stripping in velocity space rather than temperature space.

Fig. 7 shows the density-weighted velocity magnitude projections of the most massive galaxy at z = 0 for the three eddy viscosity models None, Smag., and Grad. from left to right, respectively. It is apparent across all three cases that the substructure is moving at least a factor of ≈5 faster than the background gas, yet the Grad. case shows the most clearly defined long stripping tails from the cold gas in the galaxies. The Smag. case does not have the cold gas in the satellite galaxy marked by the arrows in the None and Grad. cases, as the gas has been completely stripped away. The Grad. case retains the cold gas in that satellite galaxy, and the tail is much more extended than in the None case. In fact, it is evident upon close inspection of the cold gas structures in the halo that the Grad. case produces tails from the cold gas in the galaxies more readily than the None or Smag. cases. That has implications for the study of ram pressure stripping in general, and should be further investigated in the future.

CONCLUSIONS

Turbulence is a key physical process in the study of galaxy evolution and one of many highly complex non-linear interactions that must be understood to advance our knowledge of the Universe. The complexity demands the use of simulations that combine astrophysical sub-grid models with hydrodynamics and gravity in an expanding universe. All hydrodynamical simulations are known to require additional sub-grid models to accurately treat the impact of sub-grid turbulence, yet these models have been widely ignored in the astrophysical community. We have, for the first time in Lagrangian hydrodynamics, implemented and studied the gradient model (Clark et al. 1979) - an anisotropic sub-grid turbulence model for viscosity and metal mixing (Hu & Chiang 2020). The model is based on directly modelling the error terms that arise from discretisation of the fluid field via a Taylor series expansion, including the compression terms that are missing from the standard Smagorinsky model.
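A minimal sketch of the two closures contrasted above, written for a single particle with resolved velocity-gradient tensor gradv and smoothing length h; the prefactors and norm conventions are assumptions for illustration, not the precise GIZMO implementation:

import numpy as np

def smagorinsky_D(gradv, h, C_S=0.1):
    # Isotropic diffusivity (C_S h)^2 |S|, where S is the symmetrised,
    # trace-free shear tensor: the compressive (trace) part is discarded.
    S = 0.5 * (gradv + gradv.T)
    S -= np.eye(3) * np.trace(S) / 3.0
    return (C_S * h) ** 2 * np.sqrt(2.0 * np.sum(S * S))

def gradient_D(gradv, h, C_G=1.0 / 12.0):
    # Anisotropic diffusion tensor C_G h^2 dv_i/dx_j: the full velocity
    # gradient is retained, including the compressive terms.
    return C_G * h ** 2 * gradv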
We additionally implemented a dynamic procedure that computes the model parameter on-the-fly for the gradient model, following the approach of Rennehan et al. (2019). We used the mesh-free finite mass method in the GIZMO code (Hopkins 2015) as our numerical hydrodynamics solver for all of our experiments.

We ran driven turbulence simulations at Mach numbers M = 0.3, 0.7, and 2.1 to validate the gradient model and compare with the popular Smagorinsky model. Hu & Chiang (2020) recently showed, by post-processing their driven turbulence simulations, that the gradient model should be able to reduce the build-up of kinetic energy near the resolution scale in isotropic, homogeneous turbulence and better reproduce the sub-grid metal flux. We confirmed these results in Section 3 by using the gradient model at simulation time.

Our analysis of the velocity power spectra in driven turbulence produced unexpected results. We found that the gradient and Smagorinsky models, and their dynamic variants, predicted insufficient dissipation to reduce the artificial build-up of kinetic energy near the grid scale in our 256^3 simulations. For that reason we introduced a boost factor β for the dissipation strength (i.e. ‖S‖ → β‖S‖) and experimented with various values in the range β ∈ [1, 100]. We found that a factor of β ~ 10 is sufficient to reduce the build-up of kinetic energy and is required for all of the Smagorinsky and gradient model variants. Additionally, we found that the boost factor only needs to be applied to the subsonic (M < 1) particles in our simulations to produce the correct statistics in supersonic turbulence (our M ~ 2.1 test). The true boost factor is higher, since we used the maximum interaction distance between neighbouring gas particles to calculate the diffusion tensor, leading to a ~4 times additional boost (total ~40) over the default implementation in GIZMO (Hopkins et al. 2018).

Our converged metal mixing simulations in Section 3.2 show that when we use the gradient and Smagorinsky models at lower resolution (64^3 particles) we are able to produce MDFs that are equivalent to 4 to 12 times the resolution. That is true for both the constant-coefficient and dynamic variants of the gradient and Smagorinsky models with standard parameter values, with the additional factor of ~4 boost from using the maximum interaction distance in the kernel. However, lower calibrations such as those from the FIRE simulations do not produce the correct rate of mixing, as they are at least a factor of ~20 too low. We posit that this is because of the common calibration approach in cosmological simulations: calibration in tandem with the full suite of astrophysical sub-grid models. We argue that calibration of fundamental hydrodynamics models, such as the metal mixing model here, must be done in the absence of sub-grid astrophysical models. There must be a strong hydrodynamics base before the complexity of astrophysics is built on top. Note that our dynamic Smagorinsky and dynamic gradient models do not require calibration and produce accurate predictions for the rate of metal mixing in isotropic, homogeneous turbulence.

As a first application of the new gradient model, we investigated a set of cosmological simulations to determine if there is any dominant impact on the galaxy and gas properties. We investigated the metal mixing and eddy viscosity separately in Sections 4.2 & 4.3, respectively.
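A minimal sketch of the Mach-gated boost described above, with illustrative array names; the gating threshold and β = 10 follow the values quoted in the text:

import numpy as np

def apply_boost(D, mach, beta=10.0):
    # Boost the dissipation of subsonic (M < 1) particles only; leave
    # supersonic gas untouched, as in the M ~ 2.1 test described above.
    return np.where(mach < 1.0, beta * D, D)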
We found that the choice of sub-grid metal mixing model strongly impacts the MDF evolution in the warm-hot intergalactic medium (WHIM) and circumgalactic medium (CGM). We found that the gradient and dynamic gradient models mix metals much more rapidly than the Smagorinsky variants and produce tighter MDFs up until z ~ 1, when they approach a similar distribution down to z = 0. In our simulation without sub-grid metal mixing, the MDFs in the WHIM and CGM are significantly broader than in any of the simulations with sub-grid metal mixing, demonstrating that, at the very minimum, including any sub-grid metal mixing model is an improvement. The most important result we discovered is that the metal mixing models are most impactful during the tempestuous early stages of galaxy evolution. On very long timescales, the equilibrium distributions match quite closely across the models.

Including eddy viscosity in our cosmological simulations did not significantly impact the galaxy properties we investigated when averaging in bins of stellar mass after z = 2. The galaxy stellar mass function was relatively unchanged, along with only slight variations in the stellar mass to halo mass function. We also found that including eddy viscosity did not significantly impact the averaged gas distributions of vorticity, temperature, and density across galaxies of similar stellar mass. We expected a priori that the large-scale properties of galaxies would be unaffected, as the sub-grid eddy viscosity mainly impacts the small scales. For that reason, we investigated a single system that could be linked across all of our cosmological simulations to gain a qualitative view of the impact.

In Section 4.3 we showed the temperature projections of the most massive halo traced from z = 2, 1, and 0 in our cosmological simulations for three eddy viscosity models: no model, the standard Smagorinsky model, and the new constant-coefficient gradient model. We found that the Smagorinsky model produced much more fragmentation in the halo gas of the most massive galaxy on small scales compared to having no eddy viscosity, at all redshifts. We also found that the spatial temperature distribution was much smoother at z = 2, when stellar and active galactic nuclei feedback was much stronger, showing that the small-scale kinetic energy was being efficiently converted into thermal energy. Although the constant-coefficient gradient model seemingly dissipates faster based on our results in Section 3.1, we observed that its inherent anisotropy does not lead to the same fragmentation we saw in the Smagorinsky model. At high redshift, z = 2, the gradient model produced much more widespread hot gas. The filamentary structure at all redshifts was much smoother and, after z = 1, the satellite galaxies in the halo had many more clearly defined tails due to an improved treatment of ram-pressure stripping.

Sub-grid metal mixing and eddy viscosity models have a strong impact on galaxy evolution simulations. In this work, we showed in the simplest case of isotropic, homogeneous turbulence that all of the models tested here improved the accuracy of metal mixing and turbulent kinetic energy dissipation in the mesh-free finite mass method. The most significant differences between model choices appeared at high redshift in the early stages of galaxy evolution, before any equilibrium is reached.
Given that contemporary cosmological simulations have resolutions of less than ~50^3 particles in a typical galaxy, we recommend that future studies at least use the constant-coefficient gradient model, as it (a) is computationally inexpensive compared to the dynamic version, while producing similar results, and (b) includes the full velocity tensor in the diffusion tensor to give the most accurate solution for sub-grid turbulence. Our recommendation is especially pertinent given the recent push to study higher-redshift systems driven by the upcoming launch of the James Webb Space Telescope, as our theoretical understanding of enrichment and thermodynamic histories will depend directly on sub-grid turbulence model choice.
A systematic review and meta-analysis in the effectiveness of mobile phone interventions used to improve adherence to antiretroviral therapy in HIV infection

Background
Antiretroviral therapy is effective in preventing the progression of HIV to AIDS, but adherence to HIV medication is lower than ideal. A previous Cochrane review concluded that SMS interventions increased adherence to HIV medication, but more recent trials have reported mixed results. Our review aims to provide an up-to-date synthesis of the effects of interventions delivered by mobile phone on adherence.

Methods
We searched Cochrane, Medline, CINAHL, EMBASE and Global Health for randomised controlled trials (RCTs) of interventions delivered by mobile phone, designed to increase adherence to antiretroviral medication. Risk of bias was assessed using the Cochrane risk of bias tool. We calculated relative risk ratios (RR) or standardised mean differences (SMD) with 95% confidence intervals (CI). Trials were analysed depending on delivery mechanism and intervention characteristics. We conducted meta-analysis for primary objective outcome measures.

Results
We identified 19 trials. No trials were at low risk of bias. Interventions were delivered as follows: nine via text message, five via mobile phone call, one via mobile phone imagery and four via mixed interventions. There was no effect when interventions delivered by text message were pooled, with RR 1.25 (95% CI 0.97 to 1.61), P = 0.08. The SMD of 0.42 (0.03 to 0.81), p = 0.04, showed a moderate effect on adherence. There was mixed evidence of the effect of text messages delivered daily, weekly, at scheduled or at triggered times; however, messages with a link to support, interactivity and three or more behaviour change techniques (BCTs) all improved adherence. Of the five trials delivered by mobile phone call, one reported a reduction in HIV viral load. One trial using mobile phone imagery reported a reduction in HIV viral load. Three trials that delivered interventions by text message and mobile phone counselling reported improved biological outcomes.

Conclusion
Specific interventions of proven effectiveness should be considered for implementation, rather than mobile phone-based interventions in general. Interventions targeting a wider range of barriers to adherence may be more effective than existing interventions. The effects and cost-effectiveness of such interventions should be evaluated in a randomised controlled trial alongside long-term objective and clinically important outcomes.

Background
There are currently over 36 million people worldwide living with HIV [1], with the majority from middle- and low-income countries. Almost 70% of the global HIV disease burden is in Sub-Saharan Africa [2]. Treatment with antiretroviral therapy (ART) enables people living with HIV (PLWH) to lead healthier and longer lives, since the life expectancy of someone who responds to treatment is the same as that of the general population [3]. Currently, 59% of PLWH have access to ART [1]. High adherence to ART is required to suppress viral replication, to slow the progression of HIV, and to further reduce transmission [4]. Poor adherence can also lead to drug resistance [5].
UNAIDS aims to ensure that 73% of PLWH achieve viral suppression, which is thought to be 2 to 3 times higher than current levels of viral suppression [6], and the World Health Organisation (WHO) estimates that only one-third of the population adhere appropriately [7]. Factors influencing adherence include individual factors such as lack of knowledge, misunderstandings about administering medicines, lack of skills in developing regular medicine-taking habits (remembering), concerns about side effects, and social support for medicine taking [8-10]. Medicine-related factors which affect adherence include pill burden (number of tablets, intense dosing schedule, meal-time restrictions, medicine side effects) [11-13]. Service and structural factors also play a role, such as the availability of medicine and the cost of training health care providers [14, 15]. Interventions that have demonstrated efficacy in increasing adherence to HIV treatment have had multiple components, including providing education, counselling, social support, feedback and additional supervision [11-13]. However, in most settings these have proven too costly or unfeasible to deliver in routine service settings.

Mobile phones are a potentially useful, low-cost platform for delivering health interventions [16]. The World Bank estimates that 93 in every 100 people are subscribed to mobile phones [17], with low-income countries being the fastest growing sector [18]. Interventions delivered by mobile phones have the potential to target many of the factors influencing adherence, such as knowledge, attitudes, concerns about medicines and difficulties in developing regular medicine-taking habits [8-10]. They could enhance links to services so that participants can obtain support and advice when needed, such as when they are experiencing medicine side effects [19]. Where mobile phones are owned and used by individuals, privacy can be maintained, which is vital for stigmatised diseases like HIV. Mobile phones are carried with people wherever they go, so advice and support can be provided in real time in the patient's environment [20]. Mobile phones also have the potential to provide support and training to health care providers, and to allow remote monitoring of medicine taking and of drug supplies, with the potential to reduce drug stock-outs.

A 2011 Cochrane systematic review of trials conducted between 1980 and 2011 included two trials of interventions delivered by mobile phone and concluded that weekly messaging is effective in increasing adherence to ART [19]. Other systematic reviews that have looked at text messaging also support these findings; Finitsis et al. [21], Mayer et al. [22] and Thakkar et al. [23] suggest text messaging improves adherence. Wald et al. [24] looked at the difference between one-way and two-way messaging and found the latter improved adherence. Our review aims to provide an updated synthesis of RCTs of interventions designed to increase adherence to ART medication delivered to patients via mobile phone. We aim to describe the effectiveness of interventions which employ different delivery mechanisms (SMS, voice calls, application software) and different intervention content or frequencies of contact (weekly, daily contact). We will explore whether the effects of interventions vary according to whether they employ interactivity, links to support or three or more BCTs.

Methods
This review was conducted in accordance with the PRISMA guidance. The flowchart can be found in Additional file 1.
Inclusion criteria
Participants: men and women of any age infected with HIV who are on, or due to start, ART. There was no restriction on age or stage of treatment.
Intervention: all controlled trials employing any mobile technology to deliver interventions to improve adherence to antiretroviral medication.
Study design: randomised controlled trials.
Outcomes: primary outcomes were objective measures, which include the Medication Event Monitoring System (MEMS), pill count and biological outcomes (CD4 count and viral load). Secondary outcomes were subjective measures (self-reported adherence).
A review protocol does not exist. There were no language, geographical or publication status restrictions. We excluded trials that included more than one disease and all non-randomised trials, including observational and cross-sectional study designs.

Search strategy
We searched the Cochrane, CINAHL, MEDLINE, EMBASE, and Global Health databases from 1990 to October 2017. The EMBASE search strategy of medical subject headings and text-words can be found in Additional file 2. These terms were combined with the Cochrane pre-set search terms for controlled trials. RS searched the reference lists of included papers to identify additional studies for this review. Two reviewers independently scanned the electronic records to identify potentially eligible trials.

Data extraction
Two reviewers independently extracted data on the intervention delivery mechanism (e.g. text message, phone call), intervention characteristics, trial quality and measures of effect. Sensitivity analyses were run on intervention characteristics, defined as:
(i) Link to support: an intervention that was linked to a health professional was considered as support, e.g. the provision of a telephone number.
(ii) Interactivity: when an intervention required the participant to respond once the intervention had been received, e.g. sending a text message back that "everything is okay". Also referred to as two-way text messaging.
(iii) Behaviour change technique: authors' descriptions of interventions coded according to Abraham and Michie's taxonomy of BCTs [25], Additional file 3. An arbitrary cut-off of three or more techniques was used, as we estimate that this number is indicative of interventions which have considered behaviour change and a wider range of factors influencing adherence.
All discrepancies were resolved by discussion with a third reviewer. Risk of bias was assessed according to the criteria outlined by the International Cochrane Collaboration [26]. A cut-off of 90% complete follow-up was used to determine low risk of bias for attrition.

Data analysis and synthesis
All analyses, including meta-analysis, were conducted in Cochrane Review Manager [27]. All outcomes have been analysed as intention-to-treat. All loss to follow-up has been treated as non-adherence. We calculated risk ratios (RR) and standardised mean differences (SMD). We used random-effects meta-analysis as appropriate to give pooled estimates of primary outcomes where there were two or more trials using the same mobile phone delivery mechanism (e.g. SMS messages) and the same measures of adherence. We examined heterogeneity visually by examining the forest plots and statistically using both the χ² test and the I² statistic. We assessed evidence of publication bias using funnel plots.
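As an illustration of the pooling just described, a minimal DerSimonian-Laird random-effects sketch for log risk ratios, including Cochran's Q and the I² statistic (not the Review Manager implementation; zero cells and continuity corrections are ignored for brevity):

import numpy as np

def pool_log_rr(e_i, n_i, e_c, n_c):
    # e_*: events, n_*: totals, per trial (intervention / control arms).
    e_i, n_i, e_c, n_c = (np.asarray(a, float) for a in (e_i, n_i, e_c, n_c))
    log_rr = np.log((e_i / n_i) / (e_c / n_c))
    var = 1/e_i - 1/n_i + 1/e_c - 1/n_c            # variance of each log RR
    w = 1.0 / var                                  # fixed-effect weights
    mu_fe = np.sum(w * log_rr) / np.sum(w)
    Q = np.sum(w * (log_rr - mu_fe) ** 2)          # Cochran's Q
    k = len(log_rr)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)                      # random-effects weights
    mu = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    I2 = 100.0 * max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
    return np.exp(mu), (np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se)), I2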
Results
The combined search strategies identified 511 electronic records. These were screened for eligibility and the full texts of 46 potentially eligible reports were obtained for further assessment. Nineteen reports met the review inclusion criteria and represented 19 trials; the PRISMA flow diagram can be seen in Additional file 1. Trials that were excluded from this review can be found in Additional file 7. Three trials were excluded from meta-analysis: Nsagha [28] and Kebaya [29] only reported subjective outcomes, and Hardy [30] had an intervention as part of the control group. Details of the interventions, as described by the authors, are given in Table 1.

Outcomes
Fifteen different adherence outcome measures were reported. Primary objective outcome measures reported include six trials recording MEMS, seven viral measures (5 reported viral load and 2 viral failure), and three trials measuring pill count and CD4 count. The most frequently reported measure of adherence was the subjective secondary outcome, self-reported adherence, in 14 trials. A complete list is found in Additional file 4.

Trial quality
A risk of bias summary for each trial is presented in Table 2, with comments in Additional file 5 [48]. No trials had a low risk of bias for all criteria assessed. A funnel plot to show publication bias can be found in Additional file 6.

Interventions delivered by text message
Nine trials evaluated interventions delivered by text message [28, 30-37], which reported a total of 26 outcomes. An improvement in adherence was measured in seven of the 19 objective primary outcomes and two of the seven subjective secondary outcomes. There was substantial variation across all the text message interventions. The frequency of text messaging was examined in the Pop-Eleches trial [32], which compared weekly and daily messages crossed with the length of the message (long/short). Of these arms, only weekly text messages showed a significant result, which was not replicated in the Mbuagbaw trial [33], where weekly text messages were also delivered. Three trials looked at daily text messages [30, 32, 35], of which two improved adherence [30, 35]. Some text message interventions were coordinated with the ART regimen (scheduled) [30, 31] and others used real-time monitoring, which only sent a text reminder if the participant failed to open the medication device (triggered) [34, 36, 37]. Haberer [36] specifically examined this function and split participants into scheduled and triggered arms. In this trial the scheduled arm showed an effect, which was also supported by Hardy [30]. Triggered interventions showed an effect in two trials [34, 37]. Some trials only included participants with poor baseline adherence [30, 34, 35], and all of these trials reported that the intervention improved adherence. For biological measures, neither Orrell [37] nor Haberer [36] reported statistically significant HIV RNA suppression; however, the Orrell trial [37] did report a statistically significant odds ratio for virological failure, which has been asterisked in Table 3. Interactivity was identified in 2 trials [30, 35] and both reported statistically significant improvements in adherence. Three or more BCTs were identified in 3 trials [34-36], all of which reported an improvement in adherence. Mbuagbaw [33] and Ingersoll [35] stated a behaviour change model that underpinned the intervention. A link to support was reported in one trial [33], which had a statistically significant effect when the adherence threshold was reduced from 95 to 90% for the Visual Analogue Scale (VAS).
Interventions delivered by mobile phone call
Five trials evaluated interventions delivered by mobile phone call, which reported a total of 12 outcomes [29, 38-41]. An improvement in adherence was measured in one of the 6 objective primary outcomes and in three of the 6 secondary subjective outcomes reported. One trial [38] showed a reduction in HIV viral load. The Huang [39] trial split the group into treatment-naïve and treatment-experienced participants. Two trials looked at participants who were not adherent [38, 41]; only Belzer [38] improved adherence. No trials were similar enough to pool (Table 4). A link to support was identified in 3 trials [38, 39, 41] and three or more BCTs were found in 3 trials [38, 40, 41]. Of these, only Belzer [38] showed an improvement in adherence. Only Kalichman [41] explicitly stated a behaviour change model that underpinned the intervention. Mobile phone calls were by nature interactive.

Interventions delivered by mobile phone imagery
One trial [42] reported four objective outcome results of an intervention delivered by mobile phone imagery. One outcome showed a statistically significant improvement in HIV viral load (Table 5).

Interventions delivered by mixed intervention
Four trials evaluated interventions delivered by mixed mechanisms [43-46] and reported a total of 10 outcomes. (Table footnote: clinically significant results (P < 0.05) are highlighted in bold; * refers to an odds ratio.) There were six primary outcomes and four secondary outcomes. All of the outcomes except for the Shet trial [43] reported improvements in adherence. The Abdulrahman trial [46] reported the difference in mean adherence in the intervention group as statistically significant (p = 0.035). They also reported significant biological differences between the control and intervention groups: a significantly higher rise in CD4 count (p = 0.017) in the intervention group and higher viral load in the control group (p = 0.001). In both Abdulrahman [46] and Maduka's trial [45] we were unable to calculate RR or MD based on the data provided. Maduka et al. report a statistically significant (P = 0.007) improvement in median CD4 count [45]. The Lester trial [44], which also involved text messaging and telephone follow-up (for those requesting it or not responding), showed a statistically significant improvement in adherence (Table 6). Interactivity was identified in three trials [43-45], of which two showed an effect. A link to support was identified in two trials [44, 45]; both showed statistically significant improvements in adherence. Maduka [45] reported improvements in adherence in both objective and subjective measures (CD4 improvement and SRA). None of the mixed interventions used behaviour change models or reported having three or more BCTs as part of their interventions.

Discussion
We identified 19 trials that investigated the effect of different mobile phone mechanisms on adherence to HIV medication. This review used a systematic approach, a replicable search strategy and standard systematic review methods [48], and is the first to include interventions delivered by mobile phone call. Previous reviews of mobile phone interventions designed to increase ART adherence have grouped all "mobile phone interventions that used any text messages" together without differentiating between interventions delivered via text message and mobile phone call, and the BCTs used in the interventions were not described.
We present pooled analyses of objective outcomes, and our review is the first to differentiate between objective and subjective adherence measures. Self-reported adherence outcomes may differentially overestimate benefits in the intervention group [49] due to lack of participant blinding, recall bias and the desire to please the provider [7]. We only pooled objective measures of the same outcome, as such analyses allow the clinical benefits achieved for patients to be more clearly interpreted.

We found no effect when interventions delivered by text message were pooled as RR; however, there was a moderate effect in SMD. There was substantial heterogeneity across the trials, and individual trials reported objective improvements in adherence. It was unclear if the delivery mechanism (daily, weekly, scheduled or triggered text messages) had an effect since, individually, the results were of mixed statistical significance. Text message interventions described as 'interactive' and those using more than three BCTs all showed improvements in adherence. None of the trials had a low risk of bias.

Previous reviews have found that text messaging is effective in increasing adherence to ART [19, 22, 23]. Finitsis et al. [21] reported a pooled OR of 1.48 (1.09 to 2.01) on any HIV outcome; however, objective and subjective outcomes were pooled across all types of intervention provided they included some text messaging. Although pooling in this way affords greater statistical power, the use of subjective outcomes in trials where participants cannot be blinded may have resulted in over-estimated effects, and it is difficult to identify which intervention components were effective. A similar methodology was used in Thakkar et al. [23], which concluded that mobile phone text messaging approximately doubles medication adherence in chronic disease. This review included trials [44, 45] combining text messaging with counselling, which may have inflated results [23]. Mayer et al. [22] also reported a larger SMD than the SMD we calculate in our review (SMD 0.87 vs. SMD 0.42); however, the authors included trials with a pre-post study design, converted all outcomes to SMD, and pooled all trials that included any text messaging.

We find the effect of text message-delivered daily prompts to take medicines to be inconclusive. This is consistent with the findings of other trials of text message-delivered daily prompts designed to increase adherence to oral contraception, TB medication, malaria prophylaxis or antibiotics, pooled RR 1.0 (CI 0.77-1.3) [31, 33, 50]. Intervention fatigue may explain the ineffectiveness of daily medication prompts.

All text message interventions with interactivity included in this review improved adherence; however, we were unable to pool results (differing outcome measures, and Hardy et al. [30] used an intervention as the control). This finding supports the conclusions of the Wald et al. [24] systematic review, which explored the effects of two-way communication and interactivity in mobile phone-delivered interventions targeting adherence to any medication and concluded that interventions involving two-way text messaging improved medication adherence [24]. Mbuagbaw et al. also showed that interactivity improved adherence to ART [51], and Finitsis reports that interventions which include interactivity are more effective [21]. In our review, we distinguish between interactivity and a specific link to support from a person.
These characteristics were heterogeneous, which is unsurprising given that the nature of the interactivity varied (e.g. texting back to confirm you have taken medicine rather than texting back if you would like to speak to a health care provider) and the nature of the link to support could require passive or active involvement (a phone call from a health care provider because you requested one or because you did not respond, or a telephone number to call if further advice was needed). Trials of interventions that involve sending a text message and providing phone follow-up from a health care provider report increased uptake of long-acting contraception and increased adherence to preventative medication for cardiovascular disease, as well as increased adherence to antiretroviral medication and reduced viral load [52, 53].

Among the five trials of interventions delivered by mobile phone call included in this review, only one reported a statistically significant reduction in viral load post intervention [38]. One trial using mobile phone imagery reported a reduction in HIV viral load. It is likely that the effect of interventions delivered by mobile phone call would be similar to the effect of adherence interventions delivered by landline (SMD in pooled behavioural outcomes 0.49 (-1.12 to 2.11), I² = 40% [54]). The content of calls in both our review and the Cochrane review of phone calls was generally poorly described and is likely to be variable, resulting in different effects across trials [54]. In the one trial in our review which reported beneficial effects, the intervention was well described and involved confirming whether medications were taken, providing problem-solving support, and referral to services to address adherence barriers if needed [38].

Of the mixed trials in our review, one trial delivered by automated mobile phone voice messaging showed no benefit; however, the other three mixed trials reported benefit, either in increasing CD4 count or in reducing viral load [44-46]. All of the mixed interventions which included a link to support improved adherence; however, the time and costs involved require clarification.

A wide range of other factors influence adherence to ART but have not been targeted in interventions to date. These factors include information about how medicines work, why they are important and how to take them, how to develop regular medicine-taking habits, reassurance regarding common minor side effects and information about side effects for which help should be sought. The interventions in this review contained few BCTs (median 2 and maximum 6). In other areas, such as smoking cessation, effective behaviour change interventions delivered by text message have included 19 BCTs [55].

Our review has some important limitations. With no existing gold-standard objective measure of adherence [56, 57], trials included in this review used 15 different adherence measures, limiting our ability to conduct pooled analyses of the same outcomes. There were also too few trials to conduct a meta-regression exploring all the factors which could influence heterogeneity of outcomes, including: allocation concealment, blinding of outcome assessors, types of participant (treatment experienced/naïve), factors influencing adherence targeted, BCTs employed, mode of delivery, and duration of follow-up.
Adherence measures may be at risk of the 'Hawthorne effect', where participants alter their behaviour due to awareness of being observed, especially if there is considerable contact at mid-trial follow-up points [58]. Self-reported measures, and measures that can be manipulated in the short term such as pill count, will be more susceptible to this effect. It is also important to consider that in RCTs the control group may have higher adherence levels by virtue of trial participation and increased surveillance, which may reduce the ability to detect true differences in the trial and thus underestimate intervention effects. In pragmatic trials there may be a trade-off between maintaining internal validity by achieving high follow-up and achieving generalisability for "real world" purposes.

We coded the BCTs using an established taxonomy [25]; however, coding was dependent on the authors' descriptions of the interventions, which often lacked detail. More comprehensive descriptions were requested but responses were limited, especially for transcripts of mobile phone call interventions. It is likely that the content of mobile phone calls differed between trials, which may influence the outcome of mobile phone counselling interventions. Many of the trials had small sample sizes and were therefore underpowered to detect changes in the outcomes collected. As mentioned before, we did not pool analyses across different outcomes, also resulting in reduced statistical power. The median follow-up time across trials was 4 months, which is insufficient to determine the long-term impact of the interventions; some studies suggest that adherence slowly decreases with time due to pill fatigue [59].

WHO's current advice is that there is high-quality evidence for weekly text messages and that they are effective in enhancing adherence [60]. The evidence is more nuanced than this advice suggests, and new recommendations based on the updated evidence can now be made which recommend only specific interventions that have been shown to be effective. Cost analyses of existing effective interventions are needed prior to considering widespread implementation. Further clarification regarding the aims or content of phone calls would be helpful for services considering implementation. Future trials should include an exploration of the mechanism of action of interventions. The evidence base would be enhanced if a gold-standard measure of ART adherence were agreed internationally. Interventions targeting a wider range of factors influencing adherence might have greater effects than existing interventions and should be evaluated by randomised controlled trial.

Conclusions
Our review demonstrates that text messaging improves adherence when measured as SMD but not as RR. Interventions delivered by text message combined with health care provider mobile phone calls have benefits on clinically important outcomes, and text message interventions that include a link to a health care professional, interactivity and three or more BCTs all showed improvements in objective adherence measures. The evidence supports consideration of specific interventions shown to be effective for implementation, rather than mobile phone-based interventions in general. Interventions targeting a wider range of barriers to adherence and exploring other mechanisms may be more effective than existing interventions and may reduce the amount of health care provider input needed.
Such interventions should be evaluated in a randomised controlled trial with long-term objective and clinically important outcomes, alongside associated cost-effectiveness analyses.
Epigenetic variation during the adult lifespan: cross-sectional and longitudinal data on monozygotic twin pairs

The accumulation of epigenetic changes has been proposed to contribute to the age-related increase in the risk of most common diseases. In this study on 230 monozygotic twin pairs (MZ pairs), aged 18-89 years, we investigated the occurrence of epigenetic changes over the adult lifespan. Using mass spectrometry, we investigated variation in global (LINE1) DNA methylation and in DNA methylation at INS, KCNQ1OT1, IGF2, GNASAS, ABCA1, LEP, and CRH, candidate loci for common diseases. Except for KCNQ1OT1, interindividual variation in locus-specific DNA methylation was larger in old individuals than in young individuals, ranging from 1.2-fold larger at ABCA1 (P = 0.010) to 1.6-fold larger at INS (P = 3.7 × 10^-7). Similarly, there was more within-MZ-pair discordance in old as compared with young MZ pairs, except at GNASAS, ranging from an 8% increase in discordance each decade at CRH (P = 8.9 × 10^-6) to a 16% increase each decade at LEP (P = 2.0 × 10^-8). Still, old MZ pairs with strikingly similar DNA methylation were also observed at these loci. After 10-year follow-up in elderly twins, the variation in DNA methylation showed a similar pattern of change as observed cross-sectionally. The age-related increase in methylation variation was generally attributable to unique environmental factors, except for CRH, for which familial factors may play a more important role. In conclusion, sustained epigenetic differences arise from early adulthood to old age and contribute to an increasing discordance of MZ twins during aging.

Introduction
The risk of most common diseases increases with age. A lifetime of accumulated epigenetic changes has been proposed to contribute to the development of such diseases (Bjornsson et al., 2004). Epigenetic mechanisms determine the expression potential of genes without changing the DNA sequence (Jaenisch & Bird, 2003). The molecular basis includes the methylation of cytosines in CpG dinucleotides, which, together with histone modifications, noncoding RNAs, and localization, influence the accessibility of a genomic locus to the transcriptional machinery (Bernstein et al., 2007; Cedar & Bergman, 2009). DNA methylation can be measured on DNA samples that are commonly available in biobanks (Talens et al., 2010).

Various studies have investigated whether DNA methylation can change with increasing calendar age. A cross-sectional study of limited sample size reported the genome-wide absence of changes in mean DNA methylation between young (26 years) and old (68 years) individuals (Eckhardt et al., 2006). However, cross-sectional studies that focus on changes in mean DNA methylation can only detect age-related changes that are in the same direction for most individuals. A cross-sectional study that focussed on DNA methylation at the COX7A1 locus reported greater interindividual variation in 20 elderly individuals (> 60 years old) compared with 20 young individuals (< 30 years old; Ronn et al., 2008), indicating that DNA methylation can indeed change with age in a direction that differs per individual. Longitudinal studies are even better suited to investigate this type of age-related methylation change, even though most of them rarely span more than a period of two decades.
A study on global DNA methylation, a measure of the average methylation level of (a representative portion of) CpG sites across the genome, observed changes with a direction that was individual-specific in whole-blood samples of 111 individuals (59-86 years) followed over 11 years and 127 individuals (5-72 years) followed over 16 years (Bjornsson et al., 2008). Yet, two smaller studies with 10-20 years of follow-up demonstrated that at specific genomic loci, DNA methylation in blood and buccal swab samples can remain remarkably stable (Feinberg et al., 2010; Talens et al., 2010).

A particularly powerful design to investigate the accumulation of epigenetic changes with age is the study of monozygotic twins (Bell & Spector, 2011). MZ co-twins have the same age and a virtually identical genotype, thus controlling for their effect on DNA methylation (Heijmans et al., 2007), while they may differ in their lifetime exposure to environmental factors and can develop small phenotypic differences with age (Martin et al., 1997). To our knowledge, only one study has as yet adopted this design to study age-related changes into adulthood. This study on 40 MZ pairs aged 3-74 years reported that older MZ pairs (> 28 years old) showed larger within-pair epigenetic differences than younger MZ pairs (< 28 years old) in total DNA methylation and total histone acetylation levels. Although the functional relevance of such measures is uncertain, analysis of smaller subsets of MZ twins indicated similar trends at sites throughout the genome, most of which were repetitive sequences, but which also included single-copy genes (Fraga et al., 2005). Also, it remains unclear at what age such changes start to arise in the population and at what rate they subsequently progress with increasing age. The studies performed thus far were generally relatively small, focused on measures of average methylation of the genome, and/or could investigate limited periods of the adult lifespan only.

Here, we report on age-related changes in locus-specific DNA methylation in a combined cross-sectional and longitudinal study on 460 individuals comprising 230 MZ pairs aged 18-89 years. The long age range investigated allowed us to study whether epigenetic changes accumulate linearly, exponentially, or in bursts during the full adult lifespan. Furthermore, we evaluated the influence of familial versus individual factors on the age-related increase in discordance. We assessed both global DNA methylation with the LINE1 assay (Wang et al., 2010) and locus-specific DNA methylation close to genes implicated in various age-related diseases, namely the nonimprinted loci LEP, ABCA1, and CRH and the imprinted loci IGF2, INS (alternate symbol INSIGF), KCNQ1OT1 (alternate symbol KVDMR), and GNASAS (alternate symbol NESPAS). These loci were selected on the basis of their previously shown features of epigenetic regulation as observed in human, animal, or cell culture experiments (Mitsuya et al., 1999; Melzner et al., 2002; Cui et al., 2003; Probst et al., 2004; McGill et al., 2006; Williamson et al., 2006; Kuroda et al., 2009; Table 1).

DNA methylation in young and old MZ twins

Means and interindividual variation
Global DNA methylation and methylation status at nine specific loci were compared between young twins (n = 132 individuals) and old twins (n = 134 individuals).
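A minimal sketch of such a group comparison, testing differences in means and in interindividual spread (the published analysis may have used a different variance test; array names are illustrative):

import numpy as np
from scipy import stats

def compare_age_groups(young, old):
    # young, old: per-individual methylation fractions for one locus.
    _, p_mean = stats.ttest_ind(young, old)                 # difference in means
    _, p_var = stats.levene(young, old, center='median')    # difference in spread
    fold_sd = np.std(old, ddof=1) / np.std(young, ddof=1)   # e.g. 1.6 at INS
    return p_mean, p_var, fold_sd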
Old individuals had slightly lower mean global DNA methylation than young individuals (Table 2). Larger differences were observed at specific loci. Old individuals had lower mean DNA methylation at five of nine loci (INS, KCNQ1OT1, and the three adjacent IGF2 loci; Table 2) and showed higher mean DNA methylation at three loci (LEP, ABCA1, and GNASAS). No difference was observed for CRH.

The interindividual variation in global DNA methylation, expressed as the standard deviation (SD), was small in both age groups (SD_young = 1.2% and SD_old = 1.8%; Table 2). At specific loci, it ranged from small in both age groups at KCNQ1OT1 (SD = 2.3% in both age groups) to large in both age groups at ABCA1 (SD_young = 7.8% and SD_old = 9.6%; Table 2). With the exception of KCNQ1OT1, methylation variation was always larger in old individuals than in young individuals, irrespective of age-related differences in mean DNA methylation. The SD of global DNA methylation was 1.5-fold larger in old individuals (P = 2.3 × 10^-5). At specific loci, the age-related difference ranged from a 1.2-fold larger SD at ABCA1 (P = 0.010) to a 1.6-fold larger SD at INS (P = 3.7 × 10^-7; Table 2).

Within-pair discordance
The extent of within-pair methylation discordance was also compared between the young and old twins. Similar to the interindividual variation, a small within-pair discordance in global methylation was observed in both age groups. At specific loci, it ranged from small in both age groups at KCNQ1OT1 to large in both age groups at CRH. Furthermore, the absolute within-pair discordance in old MZ pairs was always greater than in young MZ pairs (Fig. 1A). Notwithstanding this overall increase, the old age group still contained pairs who had strikingly similar DNA methylation. With the exception of GNASAS, the SD of the within-pair differences, quantifying group discordance, was significantly higher in old as compared with young MZ pairs (Table 3). For global DNA methylation, methylation discordance in the old MZ pairs was almost double that of the young MZ pairs (P = 9.8 × 10^-5). At specific loci, the increase in discordance ranged from 1.4-fold greater in old MZ pairs at KCNQ1OT1 (P = 0.005) to 2.7-fold greater at ABCA1 (P = 3.8 × 10^-7).

DNA methylation across the complete adult lifespan

Changes in within-pair methylation discordance between different age categories
The timing of the occurrence of age-related changes in methylation discordance during adult life was investigated in 219 MZ pairs aged 18-89 years, including an additional 61 middle-aged MZ pairs (30-65 years old) and 25 old MZ pairs (> 65 years old; Table 5). In this extended set of MZ twins, global DNA methylation and methylation at five specific loci, representative of the nine loci studied in the young and old MZ pairs as described before, were measured (Table 1). (Table 1 footnotes: loci are given in alphabetical order; CpG-site methylation previously reported to associate with gene expression is marked by +, and ± means that this association is hinted at; the numbers of CpG units and CpG sites are as measured in this study.) The observed absolute within-pair discordance again increased across the age categories (Fig. 1B). To quantify these observed changes, the proportional increase in discordance was also estimated per decade. For global DNA methylation, within-pair discordance was 9% greater each decade (P = 3.4 × 10^-6, Table 4). The greatest increase in within-pair discordance was found at LEP, with 16% greater discordance each decade (P = 2.0 × 10^-8); at IGF2 and CRH, within-pair discordance increased each decade by 11% (P = 3.8 × 10^-9) and 8% (P = 8.9 × 10^-6), respectively.
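One way to read these per-decade estimates, assuming the growth is multiplicative in age as the percentages suggest, is as the worked relation

SD(age) = SD_0 × (1 + r)^(age/10),

so that, for example, r = 0.16 at LEP implies that within-pair discordance roughly doubles over five decades, since 1.16^5 ≈ 2.1.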
At KCNQ1OT1, although the discordance was small at all ages, it did increase by 11% each decade (P = 0.002; Table 4). These observed relative increases in discordance were confirmed with absolute differences in the amount of discordance when testing the homogeneity of variance in discordance across the age groups at individual CpG units (Table S7). Dutch and Danish MZ pairs were investigated in this study. The observation that the increase in within-pair twin discordance was not exclusive to the old age group (predominantly Danish twins) but was already apparent at younger age (Dutch twins) indicated that the geographic origin of the MZ pairs did not contribute to our findings. To further exclude the influence of origin, we tested whether age-related changes in methylation variation or discordance were different between old Dutch (n = 25 pairs) and Danish (n = 67 pairs) twins. No significant influence of country on age-related changes was observed for either variation or discordance (Table S2). The influence of cellular heterogeneity on methylation variation, discordance, and their age-related increases Methylation was measured on genomic DNA extracted from whole blood, and variation in cellular heterogeneity could induce differences in DNA methylation. In the young twins, neither variation in global DNA methylation (n = 132 individuals) nor within-pair differences in global methylation (n = 66 pairs) were associated with cellular heterogeneity, approximated by percentage neutrophils, as recently described (Talens et al., 2010). The same was true for the majority of loci (at six of nine loci for DNA methylation and at five of nine loci for within-pair differences; Table S3). The strongest influence of cellular heterogeneity was observed for LEP (P = 1.2 × 10⁻¹⁰ and P = 1.0 × 10⁻⁶ for the two tests, respectively). Even in this case, 85% of interindividual variation in LEP methylation and 80% of within-pair differences were independent of cellular heterogeneity. The composition of the leukocyte population changes with age, and its contribution to our findings was investigated in all Dutch twins. Age-related changes in interindividual variation (n = 304 individuals) at three of five loci and in within-pair discordance (n = 152 pairs) at four of five loci were not associated with changes in cellular heterogeneity (Table S4). Although an association with cellular heterogeneity was observed (P = 0.006 and P = 8.2 × 10⁻⁴ for the two tests, respectively), 90% of the age-related changes in either global methylation variation or global methylation discordance were not attributable to it. The strongest influence was observed at LEP (P = 4.1 × 10⁻¹⁸ and P = 4.2 × 10⁻¹² for the two tests, respectively), yet 90% of the change in interindividual variation and 85% of the change in within-pair discordance were not attributable to cellular heterogeneity. Thus, the age-related changes in DNA methylation observed cannot be explained by changes in leukocyte population composition. Longitudinal changes in DNA methylation in old age Interindividual and within-pair epigenetic variations, global and at the same five loci, were also investigated in 19 elderly twin pairs during 10 years of follow-up (DNA samples obtained in 1997 and 2007). Global and locus-specific interindividual methylation variation was modestly larger after the 10-year follow-up, except at CRH (Table S5). Changes in within-pair methylation discordance showed a similar pattern as observed in comparing changes from young to old adults (Fig.
1: compare C with A and B). Global discordance and that at three loci (IGF2DMR, LEP, and CRH) had increased during the follow-up period, whereas no change was observed at the remaining two loci (KCNQ1OT1 and GNASAS). Familial and unique environmental factors The study of MZ twin pairs enables separation of the effects of familial (i.e., genetic and common environment) and unique (individual) environment on the accumulation of DNA methylation differences with age. For global methylation, the total variation was relatively small and mainly attributable to unique environment. The increase in variation with age was likewise mainly attributable to the unique environment (Table S6). The total variation increased significantly with age at all loci in line with the previous analyses (Table S6). This increase could be mainly attributed to unique environmental factors except for the age-related increase in variation at CRH methylation, which had a familial component (P = 0.007). Discussion In this study, we report sustained age-related increases in variation of DNA methylation, in an analysis of 460 individuals, comprising 230 MZ pairs, aged from 18 to 89 years. Previously, this question was investigated using cross-sectional and longitudinal study designs on smaller sample sizes with narrower age ranges (Fraga et al., 2005; Feinberg et al., 2010; Gronniger et al., 2010; Talens et al., 2010). Our study extends their findings over the full adult lifespan, supporting the notion that a gradual accumulation of epigenetic changes, globally and at imprinted and nonimprinted loci, occurs up to very old ages. How such changes affect gene expression remains unclear, although some evidence suggests that small differences in DNA methylation may cause an amplified effect on gene expression (Lillycrop et al., 2008). The increase in epigenetic variation was mainly attributable to unique individual factors that cover both stochastic processes and environmental exposures, between which the design of our study cannot distinguish. This may lead to age-related epigenetic dysregulation and may contribute to the age dependency of common diseases (Jaenisch & Bird, 2003; Bjornsson et al., 2004). However, studies that can appropriately address the latter hypothesis will be complex in their design and execution because of the relatively small effect sizes involved and the tissue- and cell-specific nature of age-related changes (Heijmans & Mill, 2012). We measured LINE1 methylation to assess global DNA methylation (Wang et al., 2010) and found that the variation in global DNA methylation, both interindividual and within-pair, was small at all ages but increased proportionally with age, in accordance with a longitudinal study over 10 years (Bjornsson et al., 2008). The small amount of global variation observed may relate to the fact that global DNA methylation assays measure methylation at a multitude of similar loci distributed throughout the genome (Bollati et al., 2009; Wang et al., 2010), while the introduction of stochastic changes occurs at individual loci.
We observed the most prominent age-related changes in variation at specific candidate loci for common age-related diseases, such as IGF2, LEP, and CRH, whose expression has been shown to be influenced by DNA methylation (Melzner et al., 2002; Cui et al., 2003; McGill et al., 2006). We observed a substantial difference between the loci in their vulnerability to epigenetic drift, which was consistently found in our various analyses (interindividual variation, within-pair discordance, young versus old individuals, the adult lifespan, or a longitudinal analysis in old subjects). Of interest, the greatest age-related changes were observed for nonimprinted loci, whereas imprinted loci appeared more stable. MZ twin pairs share characteristics such as age, sex, genotype, and their developmental and childhood environment (e.g., upbringing/education). They may acquire more unique characteristics as they grow older, because different choices on, for instance, lifestyle and occupation can increasingly change their living environments. The shared characteristics, namely their genotype and shared environment, and their individual characteristics, commonly named the unique environment, may both contribute to epigenetic variation (Martin, 2005; Bell & Spector, 2011). We used the MZ twin study design to investigate how much of the age-related increase in variation is contributed by familial factors and by the unique individual environment (Purcell, 2002; Bell & Spector, 2011). We observed that most of the variation in young adults could be attributed to the individual environment, indicating that DNA methylation is at least partly independent of familial (i.e., genetic) factors. Interestingly, studies on neonate and infant MZ pairs indicated that such differences are already present and indeed increase at a very young age (Ollikainen et al., 2010; Wong et al., 2010). At most loci, we found that the age-related changes in variation were also mostly attributable to the unique individual environment, supporting the idea that age-related changes in DNA methylation may be mostly independent from familial factors (Jaenisch & Bird, 2003; Bjornsson et al., 2004; Martin, 2009). However, at CRH, the increase in variation was mostly attributed to familial factors. Residual batch effects seem an unlikely explanation for this observation, considering that the design of this study involved a batch allocation scheme that took age and sex into account and that the statistical models contained a factorial variable to adjust for its potential influence. A previous study reported familial clustering of variation over time in global DNA methylation using the LUMA assay (Bjornsson et al., 2008). As we studied MZ twins only, we cannot further distinguish whether this familial component is related to genetic or shared environmental factors, both of which may influence the susceptibility to stochastic or environmentally driven changes in DNA methylation (Sandovici et al., 2003). In this study, we observed that differences in mean methylation between the young and the old age groups were relatively small or even absent, while increases in variation were generally more substantial. This finding indicates that epigenetic changes accumulating with age are generally nondirectional or are the outcome of many smaller directed changes with, in part, opposite direction. Changes in DNA methylation can be stochastic or environmentally driven; the relative contribution of each source of variation cannot be investigated in our study.
Stochastic epigenetic changes can occur without any environmental influence and may be related to imperfect DNA methylation maintenance mechanisms (Petronis, 2006). We observed that the age-related increase in variation of DNA methylation was gradual from early adulthood to old age, which is compatible with stochastic effects (Bjornsson et al., 2004). Environmentally driven epigenetic changes may occur as a consequence of environmental exposures related to, for instance, lifestyle and occupation (Bollati et al., 2007; Gronniger et al., 2010; Breitling et al., 2011). It is even conceivable that stochastic epigenetic changes occur more often under certain environmental conditions, parallel to stochastic genetic mutations that occur after exposure to high UV irradiation. The power of cross-sectional studies in MZ twins is that large age ranges can be studied, in contrast to what is practically possible in longitudinal studies on unrelated individuals. We investigated changes in epigenetic variation, in terms of both interindividual variation and within-pair difference, over a large age range. We observed similar increases in either measure of epigenetic variation throughout the adult lifespan. These results are in line with longitudinal analyses over relatively short time spans in old age, as we report here, middle age (Feinberg et al., 2010; Talens et al., 2010), and childhood (Wong et al., 2010). In view of the consistency of these findings, it is unlikely that generation effects significantly contributed to our findings. Importantly, we were also able to exclude age-related changes in the cellular heterogeneity of whole blood as an explanation for our observations, which was previously proposed as a major concern for the interpretation of such studies (Martin, 2005). Whether increased methylation variation in blood has any biological (phenotypic) relevance is an important and as yet unresolved question, although it is likely that tissues other than blood are similarly affected by epigenetic drift (Ronn et al., 2008; Gronniger et al., 2010; Wong et al., 2010). In general, tissues with a high rate of cell division may display more age-related epigenetic variation through stochastic errors in maintaining and transmitting epigenetic information than tissues with lower rates of cell division (Thompson et al., 2010). In this study, we investigated DNA methylation of MZ twin pairs with ages distributed across the full adult lifespan. In the first phase of the study, we explored differences by comparing young adult twins (18-30 years) with old twins (> 74 years) to generate sufficient contrast between groups. Such age groups at either extreme of the adult lifespan could not be selected from a single twin register. This required careful consideration of potential biases created by selecting young twins from the Netherlands and old twins from Denmark, and additional measurements to confirm the validity of our findings. Genetic differences between the populations were unlikely to play a role, because a genome-wide analysis of SNPs of Northern European countries reported a similar structure of genetic variation in the two countries (McEvoy et al., 2009). Further, similar procedures were used for drawing blood (Christensen et al., 2008; Willemsen et al., 2010), DNA was extracted using standard protocols, and there was no indication of differences in DNA quality [including OD 260/280 measurements, bisulfite (BS) conversion rate, and success rate of DNA methylation assays].
Moreover, if DNA quality was different between the populations, one would expect a similar systematic effect on all assays, which could be taken into account in our statistical analysis. In contrast, the loci we studied showed a substantial difference in the degree to which the variation in DNA methylation was higher in old twins, which was absent for KCNQ1OT1. More importantly, we experimentally validated the finding from the first phase for six loci. In this second phase, we compared the old Danish twins with a subset of Dutch twins specifically selected for a maximum age overlap [within the limitations of the availability of old twins in the Netherlands Twin Register (NTR)]. We found no indication of DNA methylation differences between Dutch and Danish twins of a similar old age. Furthermore, the locus-specific associations of DNA methylation variation with age originally observed in the young-old comparison were confirmed both in an investigation of intermediate age ranges selected from the NTR and in a longitudinal investigation of old Danish twins. Taken together, differences in geographic origin or technical variability between populations are unlikely explanations for our observations in phase one of our study, and the second phase yielded further evidence for the occurrence of sustained epigenetic changes during the adult lifespan. In this study, we demonstrate that epigenetic variation in the population, used as a proxy for stochastic and environmentally driven epigenetic changes in individuals, increases gradually with age up to old age. The rate at which changes are introduced differs between loci and can be considerable at loci regulating transcription of nearby genes. The observed increase was mostly driven by the unique environment. Our results have practical implications for study design in epigenetic studies investigating populations with a large age distribution or a long follow-up time (Foley et al., 2009; Talens et al., 2010). Future research should aim to investigate the relative contribution of stochastic and environmental factors to age-related epigenetic changes and the consequences of these changes for the development of common age-related diseases. Experimental procedures Study population The samples in this study were taken from the Longitudinal Study of Aging Danish Twins (LSADT) of the Danish Twin Registry (DTR; McGue & Christensen, 2007) and from the Biobank project of the NTR (Willemsen et al., 2010). In the LSADT study, DNA was extracted using the salting-out method, and in the Biobank project, the QIAamp DNA Blood Maxi kit (Qiagen, Düsseldorf, Germany) was used. DNA from both sources was of high quality (260/280 BIOBANK = 1.80; 260/280 LSADT = 1.90). Selection from the Danish Twin Registry (DTR) The LSADT study, based on the DTR, is a cohort sequential study of elderly Danish twins. LSADT began in 1995 with an assessment of all members of like-sex twin pairs born in Denmark before 1920. The surviving members were followed up every 2 years, and additional cohorts were added at the 1997, 1999, and 2001 assessments and subsequently followed at 2-year intervals. During a home visit in 1997, blood was drawn from 689 individuals, from which DNA was isolated (Skytthe et al., 2006). The LSADT project has been approved by The Danish National Committee on Biomedical Research Ethics (journal VF 20040241). Details on design and data collection were described previously (Christensen et al., 2008).
This study focuses on the monozygotic twin pairs (MZ pairs) of whom DNA was available from the 1997 assessment (73 years or older; n = 108 pairs). To investigate differences in epigenetic variation between young and old MZ pairs, all 36 male MZ pairs and 37 randomly selected female MZ pairs formed a study population named 'old Danish twins' (Table 5). For two male MZ pairs and four female MZ pairs, there was insufficient DNA of both co-twins. These MZ pairs were excluded, and the remaining 67 MZ pairs were investigated. For 19 of the LSADT MZ pairs (eight male pairs), a second DNA sample was available from a 10-year follow-up in 2007 for both co-twins. Longitudinal epigenetic changes in the elderly were investigated in these MZ pairs, who were named 'follow-up Danish twins' (Table 5). Selection from the Netherlands Twin Register (NTR) In 2004, the NTR started a large-scale biological sample collection in twin families to create a resource for genetic studies on health, lifestyle, and personality. Between January 2004 and July 2008, adult participants of the NTR (18 years and above) were invited to the project. During a home visit, fasting blood was drawn, from which DNA was extracted and a hematological profile was obtained, consisting of percentages and numbers of neutrophils, lymphocytes, monocytes, eosinophils, and basophils. The study protocol was approved by the Central Ethics Committee on Research Involving Human Subjects of the VU University Medical Center, Amsterdam, an Institutional Review Board certified by the US Office of Human Research Protections (IRB number IRB-2991 under Federal-wide Assurance-3703; IRB/institute codes, NTR 03-180). Details on design, biological sampling, and data collection were described previously (Willemsen et al., 2010). To investigate differences in epigenetic variation between young and old MZ pairs, 37 MZ pairs of each sex were randomly selected from all NTR MZ pairs who were under 30 years old at sampling (n = 98 pairs, 44 male pairs). For three male and five female MZ pairs, there was insufficient DNA of both co-twins. These MZ pairs were excluded, and the remaining 66 MZ pairs were named 'young Dutch twins' (Table 5). To investigate differences in epigenetic discordance over the full adult lifespan, 37 MZ pairs were selected from all NTR MZ pairs between 30 and 50 years of age (135 pairs, 34 male pairs) using a block random selection procedure to guarantee an even distribution over the age range. They were combined with all NTR MZ pairs who were above 50 years of age (49 pairs, 16 male pairs). The MZ pairs between 30 and 65 years of age were named 'middle-aged Dutch twins' (n = 61; 15 male pairs), and the MZ pairs above 65 years of age were named 'old Dutch twins' (n = 25; eight male pairs; Table 5). DNA methylation Assays and measurement DNA methylation was measured using a quantitatively accurate mass spectrometry-based method (Epityper version 1.05, Sequenom, San Diego, CA, USA; Coolen et al., 2007; Ehrich et al., 2008). A total of 10 DNA methylation assays were measured in this study: the LINE1 assay for global DNA methylation (Wang et al., 2010) and nine assays for DNA methylation at seven specific genomic loci [IGF2 (3 assays), LEP, CRH, ABCA1, INS, KCNQ1OT1, and GNASAS; Table 1]. Two novel assays were designed to assess methylation of the CpG sites directly telomeric, named IGF2_pter, and centromeric, named IGF2_qter, of the assay at the IGF2 locus' DMR (Heijmans et al., 2007), named IGF2DMR for clarity.
The primers of each assay were designed to create a PCR bias for completely BS-converted DNA (Li & Dahiya, 2002). More details on the design, features, and measurement of the other eight methylation assays were described in detail previously (Heijmans et al., 2007; Talens et al., 2010; Wang et al., 2010). Briefly, BS conversion of 0.5 µg of genomic DNA using the EZ 96-DNA methylation kit (Zymo Research, Orange, CA, USA) was followed by PCR amplification (primers are given in Table S1A), fragmentation after reverse transcription, and analysis on a mass spectrometer. Fragments that contain one or more CpG sites are called CpG units. Randomization and quality control All methylation assays were measured in triplicate on the same BS-converted DNA sample. DNA samples of both co-twins of an MZ pair were always allocated to the same batch for BS conversion (on a 96-well plate) and PCR amplification (384-well plate, 3 × 124 DNA samples). Each batch contained equal proportions of the age groups measured and the sexes, and each PCR batch contained equal proportions of the BS conversion batches. There were two phases of methylation measurement in this study. First, all ten methylation assays were measured in the young Dutch and the old Danish twins, who were randomly divided over the measurement batches. The ten assays contained a total of 74 measurable CpG units, over which 102 CpG sites were distributed (Table 1). After quality control (Talens et al., 2010), 65 CpG units, containing 93 CpG sites, remained, with a mean call rate of 96.5%. In the second phase, six methylation assays representing observations on the ten assays were measured in the middle-aged and old Dutch twins, who were randomly divided over the measurement batches, and in the follow-up Danish twins, who were all allocated to a single measurement batch. The six loci contained a total of 47 CpG units, over which 66 CpG sites were distributed. After quality control, 42 CpG units, containing 61 CpG sites, remained both for the Dutch twins (average call rate = 96.4%) and for the follow-up Danish twins (average call rate = 95.7%). These CpG units were the same that passed quality control in the first phase. Table S1B gives the CpG units and CpG sites that passed quality control for each assay and the call rates per assay in both phases. Bisulfite conversion was assessed using the MassArray R package (Thompson et al., 2009), which identifies CpG-less fragments containing a TpG and a cytosine in the assay's original genomic sequence. It analyzes the mass spectra, treating these fragments as hypothetical CpG sites, because incomplete BS conversion would result in the same mass shift as cytosine methylation at a CpG site. For both Danish and Dutch twins, this analysis qualified BS conversion as complete within the technical limitations of the method. Statistical analysis Definitions Methylation variation: the standard deviation (SD) of the interindividual differences within a group. Within-pair methylation difference: the within-pair DNA methylation difference at each CpG unit, with DNA methylation of co-twin 1 as the reference: difference = Twin1 − Twin2.
Methylation discordance: the range of the within-pair differences in a group. To quantify age-related changes in discordance, the SD of the within-pair differences in a group is used. In figures, the absolute within-pair differences (absolute discordance) are used. Days: a continuous variable for the time between the drawing of blood from each co-twin of a twin pair, computed in days, with co-twin 1 as the reference. Batch: a categorical variable with a distinct designation for each combination of PCR and BS batch. Linear mixed models, description of basic models Linear mixed models were used to test for age-related changes in DNA methylation of the assays, its variation, and its discordance, as previously described (Tobi et al., 2009; Talens et al., 2010). More details on the linear mixed models are given in the Data S1 (Supporting information). In all the linear mixed models used for testing age-related changes in interindividual methylation variation, DNA methylation was entered as a dependent variable. Individual was the subject variable. Necessary adjustments were made by entering age, sex, twin designation (T1 or T2, to account for nonindependence), batch, and CpG unit as fixed effects. In all the linear mixed models used for testing age-related changes in within-pair methylation discordance, the within-pair difference was entered as a dependent variable. Family was the subject variable. Necessary adjustments were made by entering the age of Twin 1, days, sex, batch, and CpG unit as fixed effects. Both basic models were adapted to suit each specific test as described below. Adaptation of basic models for each specific test DNA methylation, methylation variation, and methylation discordance were compared between young Dutch (n = 66 pairs) and old Danish twins (n = 67 pairs). Age group (young or old) was added to the models as a random effect to test for differences in variation or discordance and as an extra fixed effect, replacing age, to estimate adjusted group mean methylation or mean within-pair difference and its SD [using the standard error (SE) of the mean] and to test for group differences. Changes in methylation discordance over the full adult lifespan were tested in all Dutch and the old Danish twins (n = 219 pairs), and age was entered as a random effect. Longitudinal methylation variation was investigated in the follow-up Danish twins (n = 19 pairs), and the model was adapted as follows: DNA sample (i.e., individual per year of sampling) was the subject variable. Year of sampling (1997 or 2007) was entered as an extra random effect and as an extra fixed effect. Adjustment for age was made using age at first sampling, and no adjustment for batch was required. Adjusted mean DNA methylation, the differences in means, interindividual variation, and within-pair discordance are all expressed as percentage DNA methylation. The fold change in methylation variation and discordance between groups is expressed as a proportion by dividing the SD in the older group by the SD in the younger group (SD_older/SD_younger). The change in discordance over the adult lifespan is expressed as the proportional increase each decade as a percentage of the discordance of the previous decade. Significance of the age-related changes in variation or discordance was tested with a one-sided Z-test applied on the random effect estimate of age group, age, or sampling time divided by its SE, which adds up to a Wald test.
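To make these quantities concrete, below is a minimal Python sketch, under toy inputs, of the fold change (SD_older/SD_younger) and the one-sided Wald Z-test described above. The numbers and variable names are illustrative assumptions, not values from this study, and the full mixed-model adjustment for sex, batch, and CpG unit is not reproduced here.

```python
import numpy as np
from scipy import stats

# Hypothetical within-pair methylation differences (Twin1 - Twin2, in
# percentage methylation) for a young and an old group of MZ pairs.
diff_young = np.array([0.5, -1.2, 0.8, -0.3, 1.1, -0.7, 0.4, -0.9])
diff_old = np.array([2.4, -3.1, 1.9, -2.2, 3.5, -1.8, 2.8, -2.6])

# Group discordance is quantified as the SD of the within-pair differences.
sd_young = diff_young.std(ddof=1)
sd_old = diff_old.std(ddof=1)

# Fold change in discordance between groups: SD_older / SD_younger.
fold_change = sd_old / sd_young

# One-sided Wald Z-test: an effect estimate divided by its standard error;
# here a hypothetical age effect of 0.09 per decade with SE 0.02.
age_effect, age_se = 0.09, 0.02
z = age_effect / age_se
p_one_sided = stats.norm.sf(z)  # P(Z > z), testing for an increase with age

print(f"fold change = {fold_change:.2f}, Z = {z:.1f}, p = {p_one_sided:.2g}")
```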
Adaptation of basic models for subsidiary tests To test whether age effects were similar between Dutch and Danish individuals, methylation variation and discordance were compared between old Dutch twins (n = 25 pairs) and old Danish twins (n = 67 pairs). Age was entered as an extra random effect. An interaction term age*country was entered as an extra fixed effect, the nonsignificance of which would establish that Dutch and Danes represent the same population. Nested linear mixed models were used to investigate confounding by leukocyte population heterogeneity, approximated by percentage neutrophils, as recently described (Talens et al., 2010). Confounding of methylation variation and discordance was tested on the young Dutch twins (n = 66 pairs). Confounding of age-related changes was tested on all Dutch twins (n = 152 pairs). The basic models are as described above, with age also entered as a random effect when testing age-related changes. Nested models had percentage neutrophils, for testing its influence on methylation variation, and the within-pair difference in percentage neutrophils (Twin 1 − Twin 2), for testing its influence on methylation discordance, added to their corresponding basic model as an extra fixed effect. The amount of confounding is determined by the change in residual variance, or the change in the random effect estimate of age, in the nested model with respect to the basic model, as described previously (Talens et al., 2010). Variance component models for twin analysis In this study, interindividual methylation variation and within-pair methylation discordance have been investigated separately. In the statistical models commonly used in twin research, both aspects of variation can also be investigated simultaneously, thus correcting each component for the other. The classical twin model for MZ twins (Purcell, 2002) postulates that the methylation values (y) at a given locus for co-twins 1 and 2 of twin pair i are defined by an overall mean (μ) that may depend on age, sex, batch, and CpG unit, by a random twin-pair effect (b_i), the familial environment, which stands for the shared factors of the twin pair, including common environment and genotype, and by a residual error (e_ij), the individual environment, which stands for the factors that are unique to each co-twin (in formula: y_ij = μ + b_i + e_ij). However, this classical twin model is not able to capture that the methylation variance increases with age. We therefore used the following extension of the classical twin model to allow for such age variation: y_ij = μ + b_i + e_ij + age_ij · a_i + age_ij · c_ij, with μ, b_i, and e_ij as before and a_i and c_ij quantifying shared (familial) and unique (individual) age effects, independently from each other and from b_i and e_ij. More details on these MZ twin variance component models (Purcell, 2002) are given in the Data S1 (Supporting information). In the linear mixed models used to test the individual and familial components of variation in DNA methylation over the adult age range (n = 219 pairs), DNA methylation was entered as a dependent variable. Individual and family were both subject variables. Age was entered as a random effect around family (with the intercept) and as a random effect around individual, thereby adjusting each variance component for the other. For necessary adjustments, age, sex, batch, and CpG unit were entered as fixed effects.
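As an aside, the extended twin model above can be illustrated with a short simulation. This is a minimal sketch with made-up variance components (the values of sd_b, sd_e, sd_a, and sd_c are invented, not estimates from this study); it shows why the within-pair difference isolates the individual components: everything shared by the co-twins (μ, b_i, and age·a_i) cancels out of Twin1 − Twin2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical variance components (illustrative, not study estimates).
mu = 50.0                 # overall mean methylation (%)
sd_b, sd_e = 2.0, 1.5     # familial / individual baseline SDs
sd_a, sd_c = 0.02, 0.05   # familial / individual age-slope SDs

n_pairs = 100_000
age = rng.uniform(18, 89, n_pairs)

b = rng.normal(0, sd_b, n_pairs)       # shared intercept per pair
a = rng.normal(0, sd_a, n_pairs)       # shared age slope per pair
e = rng.normal(0, sd_e, (n_pairs, 2))  # unique residual per co-twin
c = rng.normal(0, sd_c, (n_pairs, 2))  # unique age slope per co-twin

# y_ij = mu + b_i + e_ij + age_i * a_i + age_i * c_ij
y = mu + b[:, None] + e + age[:, None] * (a[:, None] + c)

# Twin1 - Twin2 = (e_i1 - e_i2) + age_i * (c_i1 - c_i2): only the
# individual components survive, so discordance grows with age via c.
diff = y[:, 0] - y[:, 1]
for label, mask in [("young (<30)", age < 30), ("old (>65)", age > 65)]:
    print(label, "SD of discordance:", round(diff[mask].std(ddof=1), 2))
```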
Significance of the age-related increases in total, familial, and individual variation was tested with a one-sided Z-test applied on the random effect estimates of age. The square root of the resulting random effect estimates represents an estimation of the SD, which, expressed as percentage DNA methylation, is easier to interpret. The total of all variation (residual variance, intercept, and familial and individual age-related estimates) and the individual variation (residual variance and individual age-related estimate) were plotted against age to visualize the familial and individual age-related increases in variation, because the total variance minus the individual variance represents the familial variance. Supporting information Additional supporting information may be found in the online version of this article: Data S1 Statistical models. Fig. S1 The absolute within MZ twin pair difference in % DNA methylation (y-axis) plotted against age (x-axis) per CpG unit of the LEPTIN locus. Fig. S2 The absolute within MZ twin pair difference in % DNA methylation (y-axis) plotted against age (x-axis) per CpG unit of the CRH locus. Fig. S3 The absolute within MZ twin pair difference in % DNA methylation (y-axis) plotted against age (x-axis) per CpG unit of the IGF2DMR locus. Fig. S4 The absolute within MZ twin pair difference in % DNA methylation (y-axis) plotted against age (x-axis) per CpG unit of the KCNQ1OT1 locus. Fig. S5 The absolute within MZ twin pair difference in % DNA methylation (y-axis) plotted against age (x-axis) per CpG unit of the GNASAS locus. Fig. S6 The absolute within MZ twin pair difference in % global DNA methylation (y-axis) plotted against age (x-axis) per CpG unit. Table S1 (A) Primers used for the BS PCR. (B) CpG sites per CpG unit of each assay and assay call rates (CR) after quality control for the two phases of this study. Table S2 Significance of the test for interaction between country of origin and the observed age-related epigenetic effects. Table S3 Influence of percentage neutrophils on methylation variation and discordance in young MZ twins. Table S4 Influence of percentage neutrophils on age-related changes in methylation variation and discordance. Table S6 Significance test for the increase in total, familial and individual variation in DNA methylation. Table S7 Homogeneity of variance test of the within-pair methylation differences across the age groups per CpG unit.
2016-05-04T20:20:58.661Z
2012-08-01T00:00:00.000
{ "year": 2012, "sha1": "e58dc112a095f29acac13bbffba1511aeaa4ad5e", "oa_license": "CCBY", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3399918", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "f4606b7a9551b5855329bc3536f9bf24a6bb6a6a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
53873007
pes2o/s2orc
v3-fos-license
Isolated Soy Protein Supplementation and Exercise Improve Fatigue-Related Biomarker Levels and Bone Strength in Ovariectomized Mice Isolated soy protein (ISP) is a well-known supplement and has been reported to improve health, exercise performance, body composition, and energy utilization. ISP exhibits multifunctional bioactivities and also contains branched-chain amino acids (BCAAs), which have been confirmed to positively affect body weight (BW) regulation and muscle protein synthesis. The combined effects of BCAA supplements and exercise in older postmenopausal women with osteoporosis, sarcopenia, and obesity have been inadequately investigated. Therefore, in this study, we evaluated the potential beneficial effects of soy protein supplementation and exercise training on postmenopausal mice. Forty mice (14 weeks old) with ovariectomy-induced osteosarcopenic obesity were divided into five groups (n = 8), namely sham ovariectomy (OVX, control), OVX, OVX with ISP supplementation (OVX+ISP), OVX with exercise training (ET, OVX+ET), and OVX with ISP and ET (OVX+ISP+ET). The mice received a vehicle or soy protein (3.8 g/kg BW) by oral gavage for four weeks, and the exercise performance (forelimb grip strength and exhaustive swimming time) was evaluated. In the biochemical profiles, we evaluated the serum glucose level and tissue damage markers, such as lactate, ammonia, glucose, blood urea nitrogen (BUN), and creatine phosphate kinase (CPK). The body composition was determined by evaluating bone stiffness and muscle mass. All data were analyzed using one-way repeated measures analysis of variance. The physical performance of the OVX+ISP+ET group did not differ from that of the other groups. The OVX+ISP+ET group exhibited lower levels of serum lactate, ammonia, CPK, and BUN as well as economized glucose metabolism after an acute exercise challenge. The OVX+ISP+ET group also exhibited higher muscle mass and bone strength than the OVX group. Our study demonstrated that a combination of ISP supplementation and exercise reduced fatigue and improved bone function in OVX mice. Introduction In women, aging causes menopause-related hormonal changes that result in body fat redistribution and loss of muscle mass and strength [1]. Age-related muscle mass loss and decreased strength are termed sarcopenia, which is caused by multiple contributing factors such as changes in endocrine function, chronic diseases, disuse, inflammation, and nutritional deficiencies [2]. Sarcopenia causes physical performance impairment and disability as well as an increased risk of falls and fractures [3]. Similarly, aging is usually accompanied by a progressive decrease in bone mineral density (BMD) and bone mass, which is termed osteoporosis if the BMD T score lies more than 2.5 standard deviations below the normal mean of young adults [4]. The presence of both sarcopenia and osteoporosis in elderly women is termed osteosarcopenia [5]. A previous study reported that muscle mass loss caused by sarcopenia can reduce the mechanical loading of gravitational forces as well as BMD [6]. The risk of falls and fractures in individuals with a combination of sarcopenia and osteoporosis is higher than in those with either condition alone [5]. Sarcopenia is the age-related progressive loss of muscle mass and impairment of physical performance [7,8]. Providing an effective intervention for patients with sarcopenia is crucial.
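Relatedly, the osteoporosis threshold cited above can be expressed as a one-line computation. This is a minimal sketch in which the reference mean and SD values are invented for illustration; only the −2.5 cutoff comes from the definition in the text.

```python
def bmd_t_score(bmd, young_adult_mean, young_adult_sd):
    """T score: how many SDs a BMD value lies from the young-adult mean."""
    return (bmd - young_adult_mean) / young_adult_sd

# Hypothetical reference values (g/cm^2); a T score below -2.5 would be
# classified as osteoporosis under the definition cited in the text.
t = bmd_t_score(bmd=0.72, young_adult_mean=1.00, young_adult_sd=0.10)
print(round(t, 1), "-> osteoporosis" if t <= -2.5 else "-> not osteoporosis")
```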
An earlier study reported that resistance training was used as the primary exercise-based intervention to prevent the progression of muscle mass loss and strength impairment in elderly patients [9]. In addition to resistance training, protein intake is necessary for the maintenance of muscle mass. Protein supplementation is generally considered essential to maximize the skeletal muscle response caused by resistance exercise. Despite discrepancies in the results of studies on the benefits of protein supplementation during resistance exercise interventions, Cermak et al. performed a meta-analysis and reported that protein supplementation increased the lean body mass and muscle strength among both younger and older people [10]. Particularly in frail, older people, protein supplementation can increase the muscle mass during prolonged resistance training [11]. In addition to reducing frailty, protein supplementation combined with resistance training also attenuated the negative effects of sarcopenia and aging on body composition and physical function [12]. Branched-chain amino acids (BCAAs) are obtained from isolated soy protein (ISP) and account for approximately 35% of the essential amino acids needed for skeletal muscle formation [13]. BCAAs can shift the net balance of protein metabolism from catabolism to anabolism; the result is therefore increased protein formation instead of waste [14]. Moreover, BCAAs, consisting of the amino acids leucine, isoleucine, and valine, have been demonstrated to increase protein anabolism and the synthesis of skeletal muscle protein; therefore, BCAAs are consumed as nutritional supplements for improving sports performance and preventing the loss of muscle mass caused by aging and illness [15][16][17]. However, other studies reported that BCAA-enriched diets were ineffective in producing anabolic effects, which may be caused by an altered distribution and availability of various amino acids resulting from excessive consumption of one or more of the BCAAs [18,19]. In addition, some of the positive effects of BCAAs on protein balance are mediated by branched-chain keto acids, glutamine, and beta-hydroxy-beta-methylbutyrate [20]. As a comparison to soy protein, diets supplemented with other types of protein have been studied; in one study, a casein-enriched diet was shown to have no positive effects on protein balance [21]. The anabolic effects of BCAA supplements increase muscle protein synthesis and reduce the degradation of muscle protein. We hypothesized that ISP, which contains BCAAs, combined with resistance exercise training (ET) retards the progression of physical disability and improves muscle mass and bone strength in older postmenopausal women with osteosarcopenia and obesity. Therefore, we conducted this study to investigate the synergistic effects of ISP supplementation and resistance exercise training (ET) on the biochemical profiles, exercise performance, and body composition of ovariectomized (OVX) mice with osteosarcopenia and obesity. Materials, Animals, and Experimental Design The BCAA-rich ISP supplement used in this study was purchased from Best Jet/Gogodone Co. Ltd. (New Taipei City, Taiwan). The nutrients and amino acid categories present in the ISP product were analyzed by SGS Taiwan, Ltd. (New Taipei City, Taiwan) (Table 1). Forty 14-week-old female Institute of Cancer Research (ICR) strain mice raised in specific pathogen-free conditions were purchased (A Charles River Licensee Corp., Yi-Lan, Taiwan).
Before the study started, an environment and diet acclimation program was implemented for 1 week. A standard laboratory diet (No. 5001; PMI Nutrition International, Brentwood, MO, USA) and distilled water were provided ad libitum. The mice were entrained to a 12 h light/12 h dark cycle at room temperature (24 ± 1 °C) and 50-60% humidity in this study. The Institutional Animal Care and Use Committee (IACUC) of National Taiwan Sport University reviewed the animal study protocols, and the ethics committee of the IACUC approved this study (protocol number 10602). The mice were randomly distributed into five groups (n = 8), namely sham OVX (control), OVX, OVX with BCAA-rich ISP supplement (OVX+ISP), OVX with progressive ET (OVX+ET), and OVX with BCAA-rich ISP supplement and ET (OVX+ISP+ET). The OVX procedure was performed within 1 day of purchase by an experienced veterinarian on the mice at 14 weeks of age. ISP Supplementation The OVX+ISP+ET group was given ISP 30 min after ET, whereas the ISP group received only the ISP supplement. The recommended dose of ISP for humans is approximately 18.5 g per intake with a normal diet and exercise program. The murine ISP dose (3.8 g/kg) used in this study was determined as a human equivalent dose, calculated using body surface area according to the US Food and Drug Administration formula: assuming a human body weight of 60 kg, the human dose of 18.5 g corresponds to 18.5 g/60 kg = 0.308 g/kg. Next, the conversion coefficient 12.3, which accounts for the differences in body surface area between mice and human beings, was applied, giving a murine dose of 3.8 g/kg (a worked sketch of this conversion is given in the code example after this section). Resistance Exercise Training (ET) Protocol The OVX mice in the ET and ET+ISP groups were subjected to a standardized protocol, which was modified from previous studies [22][23][24]. The mice were placed in a plastic container (height: 65 cm; diameter: 20 cm) filled with tap water (at 30 ± 1 °C) up to a height of 14-18 cm. The training program consisted of three parts, namely the adaptation, muscle growth, and maximum muscle strength phases. At the beginning of the first week, in the adaptation phase, we subjected the mice to 3 min of rest and 2 min of forced swimming (at a 14-cm depth) with 3-6% body weight (BW) loading for six cycles. The muscle growth phase took place in the second and third weeks. In this phase, we subjected the OVX mice to 1 min of rest and 1 min of forced swimming (at a 14-cm depth) with 10% to 14% BW loading for five to seven cycles. In the third week, we subjected the mice to 1 min of rest and 1 min of forced swimming (at a 16-cm depth) with 14-17% BW loading for five cycles. In the maximum muscle strength phase, we subjected the mice to 3 min of rest and 0.5 min of forced swimming (at an 18-cm depth) with 22% BW loading for 10 cycles. This training protocol was conducted five times each week. Forelimb Grip Strength Test We adopted a low-force testing system (Model-RX-5, Aikoh Engineering, Nagoya, Japan) to evaluate the forelimb grip strength of all the mice in this study. The tensile force data of the mice were recorded using a force transducer equipped with a metal bar (diameter: 2 mm, length: 7.5 cm). The peak tension during each trial, equivalent to 10 times the grip strength, was recorded using an attached force gauge. The maximal force (grams) recorded by this low-force system was used to indicate grip strength. The detailed procedures of evaluation have been provided in our previous studies [25,26].
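Picking up the dose calculation from the ISP Supplementation section above, here is a minimal Python sketch of the body-surface-area conversion. The 18.5 g human dose, 60 kg body weight, and coefficient 12.3 come from the text, while the function and argument names are illustrative.

```python
def human_to_mouse_dose(human_dose_g, human_weight_kg=60.0, km_ratio=12.3):
    """Convert a human dose (g) to a mouse dose (g/kg) using the FDA
    body-surface-area scaling coefficient described in the text."""
    human_dose_per_kg = human_dose_g / human_weight_kg  # 18.5/60 = 0.308 g/kg
    return human_dose_per_kg * km_ratio                 # scale to mice

# 18.5 g per intake for a 60 kg human gives roughly 3.8 g/kg in mice.
print(round(human_to_mouse_dose(18.5), 1))  # -> 3.8
```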
The forelimb grip strength test was performed after administering the ET intervention for 4 weeks. Swimming Exercise Performance Test After 4 weeks of intervention, we conducted a swimming exercise performance test to assess the exercise endurance of the mice in this study. A lead sheet (weighing as much as 5% of the average BW of a mouse) was attached to the tail of the mice in this exhaustive swimming challenge test. The test was performed in a plastic container (height: 65 cm; radius: 20 cm). The depth of the water was 40 cm, and the temperature was maintained at 27 ± 1 °C. When a mouse failed to rise to the water surface to breathe for >7 s, we considered the mouse to be exhausted, and the duration of swimming was recorded as the exercise endurance. Clinical Biochemical Profiles At the end of the experimental period, all the mice were euthanized using 95% CO2, and blood was immediately collected during the rest status. Serum was obtained by centrifuging the blood samples, and clinical biochemical variables, such as the levels of aspartate transaminase (AST), alanine transaminase (ALT), alkaline-P, albumin, total protein, blood urea nitrogen (BUN), creatinine, creatine phosphate kinase (CPK), uric acid (UA), total cholesterol (TC), triglycerides (TG), and glucose, were measured using an autoanalyzer (Hitachi 7060, Hitachi, Tokyo, Japan). Tissue Glycogen and Weight Determination About 1 h after the last treatment, mice were sacrificed by CO2 inhalation. The important visceral organs, including the liver, kidney, heart, lung, muscle mass (gastrocnemius and soleus muscles in the posterior part of the lower legs), OPF (ovarian fat pad), and BAT (brown adipose tissue), were accurately excised and weighed after sacrifice. Part of the muscle samples was kept in liquid nitrogen for glycogen content analysis. Because the liver and skeletal muscles are the two major glycogen storage tissues, we selected these two tissues for glycogen content analysis through the method mentioned in our previous study [26]. At the end of the study, the weights of the related visceral organs and muscles were also recorded for body composition analysis. Measurement of Bone Strength The bone strength, stiffness, and energy were assessed in terms of failure load (FL). The FL of the midshaft regions of the right femurs was assessed using a three-point bending test to determine failure (in N) using a computerized testing machine (SV-H1000, Japan Instrumentation System Co., Tokyo, Japan) [27]. Statistical Analysis Data are presented as means and standard deviations (SD), and one-way analysis of variance was applied to analyze the differences between groups. Statistical analysis was performed using SAS 9.0 (SAS Inst., Cary, NC, USA), and values of p < 0.05 indicated statistical significance. Effects of BCAA-rich ISP Supplementation and ET on BW and Organ Weights All the OVX mice had significantly higher BW than the sham OVX (control) mice throughout the study. The BW changes in all the groups in this study are presented in Figure 1. A comparison of the differences in BWs between the OVX groups showed that the initial and final BWs in the OVX, OVX+ISP, OVX+ET, and OVX+ISP+ET groups did not differ significantly. The BW, food consumption, and body composition are presented in Table 2. All the OVX groups exhibited a lower intake of food and water than the sham OVX group.
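As a minimal sketch of the group comparison described in the Statistical Analysis section above (one-way ANOVA at p < 0.05), using Python instead of SAS and made-up grip-strength data rather than the study's measurements (the group means echo the results reported below; the spread is invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical forelimb grip strength (g) for five groups of n = 8 mice.
group_means = {"sham OVX": 133, "OVX": 136, "OVX+ISP": 135,
               "OVX+ET": 140, "OVX+ISP+ET": 143}
samples = [rng.normal(mean, 12, 8) for mean in group_means.values()]

# One-way ANOVA across the five groups; in the study, no significant
# grip-strength difference was found at the p < 0.05 level.
f_stat, p_value = stats.f_oneway(*samples)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```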
Effects of ISP Supplementation and ET on Performance on the Forelimb Grip Strength and Exhaustive Swimming Tests As shown in Table 3, the mean values of forelimb grip strength in the sham OVX, OVX, OVX+ISP, OVX+ET, and OVX+ISP+ET groups were 133, 136, 135, 140, and 143 g, respectively; these groups did not differ significantly in forelimb grip strength. When the grip strength was calibrated using BWs, no difference was observed among the groups. Furthermore, the exercise endurance did not differ significantly among the sham OVX, OVX, OVX+ISP, OVX+ET, and OVX+ISP+ET groups (Table 3). Effects of ISP Supplementation and ET on Fatigue-Related Indicators after Acute Exercise Lactic acid, ammonia, glucose, BUN, and CPK levels can indicate fatigue status after exercise. Lactic acid levels were the lowest and highest in the OVX+ISP+ET and sham OVX groups, respectively. Serum ammonia is usually considered a metabolic product; the serum ammonia levels were lower in the OVX+ISP+ET group than in the other groups. The sham OVX group exhibited higher glucose levels than the other experimental groups. The BUN levels were lower in the sham OVX and OVX groups than in the other experimental groups (p < 0.0001). The OVX group exhibited higher CPK levels than the sham OVX (p = 0.0291), OVX+ISP (p = 0.0347), OVX+ET (p = 0.0378), and OVX+ISP+ET (p = 0.0062) groups (Table 4). Effect of ISP Supplementation and ET on Hepatic and Muscular Glycogen Levels During exercise, glycogen is used as an energy source; therefore, glycogen storage in the liver is associated with physical endurance. However, liver glycogen levels in the OVX group did not differ significantly from those in the other experimental groups. Furthermore, the sham OVX and OVX+ISP groups exhibited higher levels of muscle glycogen than the other experimental groups (Table 4). Biochemical Analyses at the End of the Experiment To investigate the effects of the experimental interventions, we evaluated the biochemical markers at the end of the study. The levels of TG, glucose, UA, and alkaline-P did not differ significantly among the groups in this study (Table 5). The OVX group exhibited higher ALT, AST, creatinine, and CPK levels than the other groups (Table 5). The TC level was higher in the OVX group than in the sham OVX (p < 0.0001), OVX+ISP (p = 0.0124), and OVX+ISP+ET (p = 0.0129) groups but was similar to that in the OVX+ET group (p = 0.0536). The OVX+ISP and OVX+ISP+ET groups exhibited higher BUN levels than the sham OVX, OVX, and OVX+ET groups. The OVX group exhibited higher lactate dehydrogenase (LDH) levels than the sham OVX (p = 0.0256), OVX+ISP (p = 0.0173), and OVX+ISP+ET (p = 0.0089) groups.
The high-density lipoprotein (HDL-c) levels in the OVX+ISP, OVX+ET, and OVX+ISP+ET groups were higher than those in the sham OVX and OVX groups. The low-density lipoprotein (LDL-c) levels in the sham OVX group were higher than those in the OVX (p = 0.0399), OVX+ISP (p = 0.0013), OVX+ET (p = 0.0246), and OVX+ISP+ET (p = 0.0283) groups (Table 5). Table 5. Effect of ISP supplementation and four weeks of ET on biochemical serum levels at the end of the experiment. Bone Strength at the End of the Experiment Our study investigated bone strength and stiffness at the end of the experiment. The bone energy values of the sham OVX, OVX+ISP, OVX+ET, and OVX+ISP+ET groups did not differ significantly. However, the bone energy value in the OVX group was lower than those in the sham OVX (p = 0.0299) and OVX+ISP+ET (p = 0.0299) groups. The sham OVX group exhibited higher levels of bone stiffness than the OVX (p = 0.0116), OVX+ISP (p = 0.0183), OVX+ET (p = 0.0071), and OVX+ISP+ET (p = 0.0433) groups. Furthermore, no statistical difference was observed among any of the OVX groups with or without intervention in this study. The OVX+ISP+ET and sham OVX groups exhibited higher bone strength than the OVX, OVX+ISP, and OVX+ET groups (Table 6). Discussion In this study, we found that four weeks of ISP supplementation and ET increased the bone strength in the OVX mice. However, the improvements in muscle mass and forelimb grip strength in the intervention groups were not significantly higher than those in the OVX group. The laboratory data (fatigue-related biomarkers) of the OVX+ISP+ET group indicated lower fatigue levels than those of the other groups. The OVX+ISP+ET group did not exhibit higher endurance than the other groups in the swimming tests, but the fatigue biomarkers indicated lower levels of fatigue in the OVX+ISP+ET group than in the other groups. Our study demonstrated that OVX increased BW and body fat accumulation in the experimental groups. Estrogen deficiency caused by OVX increased the food intake of the mice. Estradiol controls food intake through feedback signal regulation; therefore, it affects BW and the amount of food consumed [28]. This finding is compatible with our study, which presented significantly higher weight gain in the OVX groups than in the sham OVX group. The OVX group did not show an increase in the amount of food consumed; however, the mice in the OVX group showed an increase in BW. The increase in weight could be related to the menopausal transition in the OVX mice; a previous study had mentioned that the menopausal transition is accompanied by a decrease in food intake [29]. The OVX mice exhibited greater muscle mass than the other groups; however, the amount of brown adipose tissue did not differ among the groups. The OVX+ISP and OVX+ET groups exhibited less muscle mass than the OVX group. The difference in muscle mass can be explained by the relationship between BW load and the maintenance of muscle mass; moreover, a previous study had mentioned that weight loss accelerates age-related muscle decline [30]. Contrary to our expectation, the effects of exercise and the BCAA-rich ISP supplement on increasing muscle mass were not significant in this four-week intervention. Findings from previous studies have shown that both resistance training and protein supplementation are less effective in older adults than in younger adults; this is known as a chronically blunted responsiveness in older people [31][32][33].
Furthermore, a recent study indicated that supplementation with protein or essential amino acids did not augment the effect of progressive resistance ET on body composition, muscle strength, size, or functional ability among older adults [34]. This blunted response could have caused the lack of difference in muscle mass and body composition among all the OVX groups with or without ISP supplementation or ET intervention. The forelimb grip strength test measures maximal and explosive force production, and the swimming test evaluates the aerobic capacity of the mice. In our study, grip strength and swimming endurance did not improve in any of the OVX groups. We considered that the aforementioned blunted response in the older mice could have caused a low-level response to ISP supplementation and ET intervention for four weeks. We suppose that the exercise intensity and the amount of nutritional supplement provided were insufficient to improve the physical performance of the OVX mice. The status of muscle fatigue after exercise can be assessed using levels of lactate, ammonia, glucose, CPK, and BUN [35]. Lactate is an oxidized substrate in the skeletal muscles, a precursor for gluconeogenesis in the muscles, and is produced through glycolysis. Lactic acid is formed from lactate; high levels of lactic acid after energy utilization indicate poor exercise endurance. Our study showed that the OVX+ISP and OVX+ET groups accumulated lower levels of lactic acid than the OVX and sham OVX groups. Moreover, because of the synergistic effects of the ISP supplements with ET, the OVX+ISP+ET group exhibited the lowest lactic acid levels. However, the swimming endurance results of the intervention groups were not superior to those of the OVX group. This finding supports our view that the amount of ISP supplement used or the exercises selected were inadequate for improvement in physical performance. A previous study had mentioned that peripheral and central fatigue levels are related to increased ammonia levels during exercise [36]. The OVX+ISP+ET group exhibited slightly, though not significantly, lower levels of ammonia than the OVX and OVX+ISP groups. We suppose that ET reduced the ammonia level in the OVX+ISP+ET group. CPK levels indicate muscle injuries, which are caused by muscular dystrophy, severe muscle breakdown, myocardial infarction, autoimmune myositis, and acute renal failure. Our study showed that the OVX+ISP, OVX+ET, and OVX+ISP+ET groups exhibited lower CPK levels than the OVX group. Both ISP supplementation and ET can improve the laboratory data of CPK. However, the combination of ISP supplementation and ET did not show an additive effect on the CPK levels in the OVX group. Aging aggravates bone loss in menopausal women because of the loss of estrogen. We simulated this condition through bilateral OVX. The combination of muscle loss and osteoporosis is called osteosarcopenia. Our study demonstrated that bone strength was higher in the OVX+ISP+ET group than in the OVX group and similar to that in the sham OVX group. However, the bone stiffness did not differ among the groups. ET is considered a method for increasing bone mass using stress induced by mechanical loading, inhibiting bone resorption, and increasing bone formation [37]. However, contrary to our expectations, the ET intervention did not significantly improve bone strength or stiffness in the OVX mice.
The effect of ET on bone strength in the OVX mice might have been attenuated because the intensity and dosage progression were insufficient to change bone stiffness. Our results revealed an additive effect of ISP supplementation and ET intervention on bone strength. Thus far, no direct evidence is available that ISP supplementation influences bone strength and structure. The mechanism maintaining bone strength in OVX-induced menopausal osteoporosis in mice requires detailed investigation; consequently, additional studies are needed in the future.

Conclusions

In conclusion, our study demonstrated that BCAA-rich ISP supplementation and ET improved bone strength in OVX mice. Although the grip strength and exercise endurance of the treated mice did not improve significantly, the levels of exercise-induced fatigue-related biomarkers, such as lactic acid, ammonia, and CPK, improved with concurrent ISP+ET intervention in OVX mice. The lipid profiles of the OVX+ISP and OVX+ET groups were superior to those of the OVX group. OVX increased the BW of the mice, but muscle mass was not higher in the OVX+ISP, OVX+ET, or OVX+ISP+ET groups than in the OVX group. Although improvements in strength and endurance after ET and ISP supplementation were not observed in this study, we found that the combination of interventions reduced the levels of fatigue-related biomarkers in the OVX mice. Our study revealed the potential additive effects of BCAA-rich ISP and ET on improving bone strength and fatigue-related parameters in OVX mice. Additional studies on ISP supplementation and ET for older postmenopausal women are recommended.
2018-12-02T14:53:35.308Z
2018-11-01T00:00:00.000
{ "year": 2018, "sha1": "35751160f1921a933d80b4856d131ce0afaf4d06", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/10/11/1792/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "35751160f1921a933d80b4856d131ce0afaf4d06", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
62822696
pes2o/s2orc
v3-fos-license
Variation of photosynthetic tolerance of rice cultivars (Oryza sativa L.) to chilling temperature in the light

Forty-two genotypes from the rice germplasm (Oryza sativa L.) were screened under chilling temperature in the light at the bud, seedling and booting stages and divided into three basic types: cultivars tolerant to chilling in the light, such as japonica; cultivars sensitive to chilling in the light, such as indica; and cultivars with intermediate tolerance to chilling in the light, such as hybrid rice cultivars. The photosynthetic characteristics of two tolerant cultivars (c.v. Taipei309 and Wuyujing3), two sensitive cultivars (c.v. CA212 and Pusa) and two intermediately tolerant cultivars (c.v. Liangyoupeijiu and Shanyou63) under the chilling treatment in the light were compared. The results showed that, compared with the chilling-tolerant rice varieties, the sensitive indica ones exhibited a significant inhibition of the maximum photosynthetic rate (Pm) and a decrease in the photochemical efficiency of photosystem 2 (PS2) (Fv/Fm), which led to the accumulation of AOS and a decrease of Chl content. Interestingly, the ratios of ASA/DHA and GSH/GSSG showed changes similar to the performance of chilling tolerance, which indicated that the ASA/DHA cycle might be an important protective strategy in chilling tolerance, especially for the intermediately tolerant cultivars. We describe a simple and effective screening method and a physiological basis for breeding crops with enhanced tolerance to chilling temperature in the light.

INTRODUCTION

Indica hybrid rice makes up about 55% of the total rice-growing area in China. Its yield is about 20% higher than that of conventional rice cultivars in China (9-10 t/ha, or 0.9-1.0 kg/m²) (Cheng and Min, 2000; Cheng and Zhai, 2000). However, indica hybrid and indica-japonica hybrid rice are more likely to suffer from low temperature throughout their developmental stages, according to observations recorded over many years. In particular, rice that encounters low temperature during the late developmental stages often shows early aging, which seriously restricts the potential for heterotic vigor in the field. Studies on chilling tolerance in rice began in Japan, followed by reports in China, mostly from the Yungui Highland of China (Li et al., 2006). Because these studies used seed emptiness or poor seed-setting rate as indexes for chilling identification, the whole growth period was needed to complete the identification. Furthermore, the seed-empty rate of rice is easily influenced by both genotype and environment. As rice covers many ecological regions of the world, the chilling identification of ecotypes from different ecological regions is often inconsistent, even for a single rice variety.
It has been found that photoinhibition, photooxidation and early aging in rice under low temperature and high light intensity at late developmental stages are closely related (Jiao et al., 2003). On sunny days at temperatures above 25°C, the reaction center of photosystem 2 (PS2) exhibits dynamic, reversible inactivation and down-regulation in order to reduce photoinhibition damage under the intense irradiance at noon; under low temperature, the photodamage and early aging mediated by PS2 were related to the degradation of the PS2-D1 protein and the inhibition of endogenous protection systems such as the xanthophyll cycle and enzymes scavenging active oxygen (Jiao and Ji, 2001). To complement the dissipation mechanisms mentioned above and counteract the oxidative pressure imposed by ROS formation, plants possess a multi-level antioxidant system, consisting of small antioxidants such as ascorbate, α-tocopherol and glutathione as well as a multitude of ROS-scavenging enzymes (Asada, 2000). Unfortunately, the role of leaf antioxidant metabolism in cultivar differences in the chilling tolerance of rice has not yet been reported (Conklin, 2001).

In this study, 42 rice cultivars were used to identify chilling tolerance and to study the protection of photosynthesis under low temperature in the light. Our study focused on the role of antioxidants in chilling tolerance. It provides a photosynthetic basis for genetic approaches to breeding chilling-tolerant rice.

Plant material

The rice seeds were from the Jiangsu Academy of Agricultural Sciences. The 13 japonica rice cultivars included 9516, H45, Wuyujing 3, PEPC transgenic rice, Kitaake and Suhuxiangjing. The 7 indica rice cultivars included Yangdao6, Xiangxian, IR64 and Peiai64S. The 6 japonica hybrid rice cultivars included SZ601, and the 16 indica hybrid rice cultivars included Yueyou 938, Shanyou63, X07s/zihui100 and Liangyoupeijiu. These cultivars were used as materials in Nanjing, China, between 2003 and 2008. The rice seeds were sterilized in 5% H2O2 for 5 min, soaked in water for 24 h, incubated at 35°C for 48 h and finally sown in stages. Seedlings at similar developmental stages were transplanted into pots (5 hills per pot, 1 seedling per hill) and grown in an outdoor net-room. A completely randomized design with five replicates was used. The average temperature varied from 21°C to 27°C, with daily temperature differences from 7.1°C to 8.7°C. Chemical fertilizer was applied as a combination of 2.0 g N, 1.6 g P2O5 and 1.4 g K2O per plot as basal dressing and 1.0 g N as top dressing at the tillering and booting stages. The soil type was paddy soil.

The treatment of chilling temperature on rice at different stages

According to the method developed by Li and Cheng (2005), uniform sprouting seeds (bud length 0.05 m, root length 0.1 m) at the bud stage were placed on a white plastic tray (0.23 × 0.17 m) with filter paper soaked with water. Each tray was sown with 50 buds of one variety. All the cultivars were treated at below 4°C for 5 d. The livability of the treated buds was calculated. At the seedling stage (6 weeks old), uniform buds of the rice cultivars were grown in growth chambers at 28°C (light)/26°C (dark).
The six-week-old seedlings of each cultivar were then placed at 8°C with a PPFD of 600 µmol/m²s for 2 d. The livability of the treated seedlings was again calculated. At the booting stage, the rice plants were placed at 15°C with a PPFD of 600 µmol/m²s for 5 d. After that, the treated plants were transported outdoors to a screen house. Rice plants were watered and fertilized regularly. The seed-setting rate of the treated plants was measured after harvesting all the plants.

Photosynthetic rate (P)

The photosynthetic rate (P) of intact rice leaves was monitored with a Li-Cor 6400 (Lincoln, Nebraska, USA) at 25°C under varying irradiance according to the method of Li et al. (2002a). The gas source was compressed air (CO2 concentration 350 µmol/mol). The light source was a halogen lamp. Varying irradiances at the leaf surface were obtained by regulating the distance between the light source and the leaf chamber. A layer of circulating water between the leaves and the illumination source was maintained for heat insulation (kept at 25°C and 60% relative humidity). P was measured at irradiances of 0, 50, 100, 150, 200, 400, 600, 800, 1000 and 1200 µmol/m²s, with 4 to 6 repetitions at each PPFD. The photosynthetic light response curves were obtained by measuring the steady-state rates under different PPFD. The photosynthetic CO2 response curves were obtained by measuring the steady-state rates at CO2 concentrations in the range of 0-1000 µmol/mol.

Chlorophyll fluorescence parameters

Chlorophyll fluorescence parameters were measured using an FMS-2 fluorescence meter (Hansatech, UK) and calculated according to Genty et al. (1989). The rice leaves were exposed to a modulated measuring beam (0.12 µmol/m²s) to determine the initial fluorescence yield (Fo). The maximum fluorescence yield (Fm) was determined during a saturating photon pulse (4000 µmol/m²s). Variable chlorophyll fluorescence (Fv) was calculated as Fv = Fm − Fo. Primary PS2 photochemical efficiency was expressed as Fv/Fm (a short numerical sketch is given below). Chlorophyll content in leaves was measured according to the method of Arnon (1949).

Determination of O2-

The content of O2- was measured according to the method of Wang and Luo (1990). Leaf segments (about 5 g fresh mass) were immediately homogenized using a chilled pestle and mortar with acid-washed quartz sand in 65 mM phosphate buffer (pH 7.8). The homogenate was filtered through 4 layers of Miracloth. The filtrate was then centrifuged at 5000 × g for 10 min at 0-4°C. Phosphate buffer (0.9 ml) and 10 mM hydroxylamine hydrochloride (0.1 ml) were added to 1 ml of supernatant. This mixture was incubated at 25°C for 20 min. Half a milliliter of the incubated mixture was added to 0.5 cm³ of 17 mM p-aminobenzoic acid and 0.5 cm³ of 17 mM α-naphthylamine at 25°C for 20 min. The developing solution was shaken with an equal volume of n-butanol and subsequently separated into two phases. The n-butanol phase was taken out and its absorbance measured at 530 nm. Phosphate buffer without sample was used as the control. If there was a large quantity of chlorophyll in the sample, ethyl ether was used in place of n-butanol and the mixture was centrifuged at 1500 × g for 5 min, after which the absorbance of the water phase at 530 nm was measured. The production of O2- was quantified against a standard curve of the NO2- developing reaction.
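The fluorescence arithmetic defined above (Fv = Fm − Fo, efficiency Fv/Fm) is simple enough to state compactly. A minimal sketch in Python, with illustrative values that are not measurements from the study:

```python
def ps2_photochemical_efficiency(fo: float, fm: float) -> float:
    """Fv/Fm, the primary PS2 photochemical efficiency, with Fv = Fm - Fo."""
    fv = fm - fo              # variable chlorophyll fluorescence
    return fv / fm

# Illustrative values only: a healthy dark-adapted leaf with Fo = 300 and
# Fm = 1500 gives Fv/Fm = 0.8, a typical value for unstressed leaves.
print(ps2_photochemical_efficiency(300.0, 1500.0))  # 0.8
```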
Measurement of malondialdehyde (MDA)

Membrane lipid peroxidation was determined by the accumulation of the membrane lipid peroxidation product MDA, according to the method of Heath and Packer (1968). The reaction between MDA (1 mol) and thiobarbituric acid (TBA, 2 mol) forms a red-brown trimethine that can be detected quantitatively with a spectrophotometer. Leaf discs (0.5 g) were ground in a solution containing 5 ml of 10% trichloroacetic acid (TCA) and some quartz sand. The homogenate was centrifuged for 10 min at 3,000 × g to remove cell debris. Then 2 ml of supernatant was collected and mixed with 2 ml of 0.67% TBA (w/v). After being kept in boiling water for 20 min and cooled fully, the mixture was centrifuged at 3,000 × g again. Finally, the absorbance of the supernatant was measured at 532 nm and 600 nm with a spectrophotometer.

Measurement of H2O2

According to the method of Patterson et al. (1984), one gram of leaf blades was homogenized in 3 ml of cold acetone. The homogenate was centrifuged for 10 min at 16,000 × g. To 1 ml of the supernatant, 0.1 ml of 20% TiCl2 in concentrated HCl and 0.2 ml of concentrated ammonia solution were added. The peroxide-Ti product was washed five times with acetone, drained and dissolved in 3 ml of 1 M H2SO4. The absorbance of the solution was measured at 410 nm and quantified against a standard curve of H2O2 produced by a similar procedure.

Fatty acids

Fatty acids were analyzed according to the method of Yu and Su (1996). Lipids were methyl-esterified in a solution of 0.4 M KOH and benzene-petroleum ether (1:1, v/v). The fatty acid methyl esters were separated by gas chromatography (Shimadzu GC-17A, Japan) equipped with a hydrogen flame detector and a capillary column SP-2330 (15 m in length, 0.32 mm i.d.). The column was run isothermally at 165°C and the detector was held at 250°C. The fatty acid standards were purchased from Sigma (US).

Total glutathione and glutathione disulphide content

Glutathione was determined by enzymatic assay using 0.1 g of ground frozen leaf material in 1 ml of extraction buffer containing 6% HClO4 and 0.2 mM DTPA (Luwe et al., 1993). After centrifugation at 14,000 × g for 10 min, the supernatant was used to measure the content of glutathione (as GSH) and GSSG by determining the rate of absorbance increase at 412 nm for 120 s (n = 7 from two independent experiments). A supernatant aliquot of 0.4 ml was neutralized with 0.6 ml of 0.5 M phosphate buffer (pH 7.5). For the GSSG assay, GSH was masked by adding 20 µl of 2-vinylpyridine to the neutralized supernatant, whereas 20 µl of water was added to the aliquots used for the total glutathione pool (GSH+GSSG) assay. Tubes were mixed until an emulsion formed. Glutathione content was measured using 1 ml of reaction mixture containing 0.2 mM NADPH, 100 mM phosphate buffer (pH 7.5), 5 mM EDTA, 0.6 mM 5,5′-dithiobis(2-nitrobenzoic acid) and 0.1 ml of sample obtained as described above. The reaction was started by adding 3 U of glutathione reductase (GR) and monitored by measuring the change in absorbance at 412 nm for 1 min. The amount of GSH was estimated as the difference between the amount of total glutathione and that of GSSG. A standard curve for GSH in the range of 0-30 µmol/ml was prepared for the calculation.

Extraction and analysis of ascorbate

Leaf samples were homogenized in 5% metaphosphoric acid. The homogenate was centrifuged at 18,000 × g and the supernatant was then used. The ASC and DHA contents were determined based on the methods developed by Kampfenkel et al. (1995) and Foyer et al.
(1995), with some modifications.

Statistical analysis

All results reported here are means of replicates. Data were subjected to analysis of variance (ANOVA) using STATGRAPHICS Plus 5.1 statistical software (Statistical Graphics Corp., Princeton, NJ).

Identification of chilling-tolerant rice cultivars at different growth stages

Forty-two rice cultivars were screened for sensitivity or tolerance to chilling in the light at the bud, seedling and booting stages (Table 1). The livability of buds at 4°C after 5 d, the livability of seedlings at 8°C after 2 d and the seed-setting rate at 15°C after 5 d were used as the respective indexes of chilling tolerance. The results showed that the identification at the bud or seedling stage was consistent with that at the booting stage. The chilling identification at the booting stage was correlated with those at the bud (R² = 0.7980) and seedling stages (R² = 0.8873) (Figure 1). A similar degree of chilling performance was observed for a given rice variety at different stages. Furthermore, a plot of the chilling tolerance index at different stages (Figure 2) showed that these 42 rice cultivars may be divided into different groups: (1) japonica cultivars tolerant to chilling temperature, including Taipei309, Kitaake and Wuyujing 3; (2) indica cultivars sensitive to chilling temperature, including CA212, Xiangxian and Pusa; (3) hybrid rice cultivars with intermediate chilling tolerance, including Liangyoupeijiu and Shanyou 63. Japonica hybrid rice cultivars were usually more tolerant to chilling than indica hybrid rice cultivars. Therefore, two tolerant cultivars (Wuyujing 3 and Taipei309), two sensitive cultivars (CA212 and Pusa) and two intermediately tolerant cultivars (Shanyou63 and Liangyoupeijiu) were chosen to further examine the physiological and biochemical factors underlying their responses to chilling temperature in the light.

Figure 3. Changes in the photosynthetic rate in response to PAR in leaves of six rice varieties after chilling treatment.

Table 2. Apparent quantum yield and carboxylation efficiency of six rice cultivars after treatment with low temperature in the light. Measurements were on the top two leaves. Means ± SE (n = 6). For light response curves, photosynthetic rates of attached leaves were measured at 25°C, 340 µmol/mol CO2 and 21% O2; for CO2 response curves, at 25°C, PFD 600 µmol/m²s and 21% O2.

As shown in Figure 3, obvious changes in photosynthetic rate were observed in the leaves of the six rice varieties after low-temperature treatment. Compared with the plants without chilling treatment, the tolerant cultivars Wuyujing 3 and Taipei 309 retained higher activities than the sensitive cultivars CA212 and Pusa. The sensitive cultivars decreased in both Pm and apparent quantum yield at low light intensity, resulting in a greater inhibition of photosynthetic rate under high light intensity after chilling treatment than in the tolerant ones. These results are consistent with those obtained in the initial screening (Table 1).
At a given CO2 concentration, the photosynthetic rate of plants increased with increasing CO2 concentration. Figure 4 shows the photosynthesis-CO2 response in the leaves of different rice varieties after chilling treatment. Compared with the photosynthesis-light response curves, the photosynthesis-CO2 response curves of the different rice varieties changed less after chilling treatment. The carboxylation efficiencies of Wuyujing3, Taipei309, Shanyou63, Liangyoupeijiu, Pusa and CA212 after chilling treatment were 89.50, 98.48, 96.30, 90.80, 60.90 and 91.6%, respectively, of those of the corresponding cultivars without chilling treatment (Table 2). The results demonstrated that the Rubisco carboxylation capacity for CO2 was relatively stable and not easily changed under chilling conditions.

Changes in Chl fluorescence parameters, chlorophyll content, O2- formation rates and H2O2 contents in six rice cultivars after low-temperature treatment in the light

The Chl fluorescence parameters are good indicators of the physiological state of PS2. As shown in Figure 5, the primary photochemical efficiency of PS2 (Fv/Fm) in the various rice cultivars decreased to different extents after low-temperature treatment in the light, compared with untreated plants. The decreases in Fv/Fm of the japonica cultivars Taipei309 and Wuyujing3 were smaller than those of the indica cultivars Pusa and CA212, while those of the indica hybrids Shanyou 63 and Liangyoupeijiu were intermediate. Similar to Fv/Fm, low-temperature treatment reduced the Chl content of the chilling-susceptible cultivars, such as CA212 and Pusa, more significantly than that of the tolerant ones, while the indica hybrids Shanyou63 and Liangyoupeijiu were between the two types. As shown in Figure 6, the changes in Chl content and Fv/Fm in the leaves of the six rice varieties were consistent. As shown in Figure 5, the O2- formation rates, MDA content and H2O2 content accumulated more in the chilling-susceptible cultivars than in the tolerant ones, demonstrating the chilling damage. These results demonstrated that the stable PS2 activity (Fv/Fm) after chilling treatment in tolerant rice, such as the japonica subspecies, provided the physiological basis for protecting the photosynthetic rate from chilling damage.

Changes in fatty acids in six rice cultivars after low-temperature treatment in the light

To better understand the sensitivity to chilling, we investigated the fatty acid content in the leaves of the six rice cultivars (Figure 7). The content of unsaturated fatty acids (UFA) in the leaves of tolerant cultivars such as Wuyujing3 and Taipei309 was generally greater, and the index of unsaturated fatty acids (IUFA) higher, than in the chilling-susceptible cultivars such as Pusa and CA212 under normal conditions. After low-temperature treatment in the light, the index of saturated fatty acids (ISFA) and the IUFA in the tolerant cultivars decreased by 27.1 and 7.1%, respectively, compared with 34.2 and 9.9% in the susceptible cultivars. These results suggested that differences in fatty acid content, and their changes under chilling temperature, might be one structural basis for the tolerance of chilling temperature in rice.
Changes in the antioxidant content of the different rice varieties after low-temperature treatment

Our results showed that, after the low-temperature treatment in the light, total glutathione and ascorbate contents in the leaves of the six rice varieties were generally enhanced, but the different chilling-tolerance types showed different degrees of increase (Figure 8). For example, the total ascorbate contents of Wuyujing3, Taipei309, Liangyoupeijiu, Shanyou63, Pusa and CA212 under chilling temperature increased by 106.0, 102.0, 210.0, 136.0, 250.0 and 192.7%, respectively, while the total glutathione contents in these varieties were elevated by 115.7, 110.0, 128.8, 136.4, 259.0 and 195.7%, respectively. However, there appeared to be no difference among these varieties in total antioxidant content under chilling temperature. Further analysis of glutathione disulphide (GSSG) in the glutathione pool after chilling treatment (Figure 8) indicated that the GSSG contents in the chilling-susceptible cultivars Pusa and CA212 increased by 1226.0 and 928.0%, while those in the tolerant cultivars increased by only 140.0 and 118.0%. The changes in dehydroascorbate (DHA) content in the rice cultivars were consistent with the changes in GSSG. Furthermore, the changes in the ASA/DHA and GSH/GSSG ratios differed among the types: the sensitive cultivars decreased more than the tolerant ones, while the indica hybrid rice cultivars were intermediate. This implied that rice tolerance to chilling in the light might be closely related to the ratio of the reduced to the oxidized forms of the antioxidant pool, and especially to the reduced antioxidants. The antioxidant molecules in rice cultivars under chilling temperature may orchestrate different processes through the generation of appropriate signals (H2O2) and the balance between the oxidized and reduced states. The Chl content in rice leaves was significantly and negatively correlated with the total contents of ascorbate, DHA and H2O2 (p < 0.01) and significantly and positively correlated with the ASA/DHA and GSH/GSSG ratios (p < 0.01). These results suggested that the ASA/DHA and GSH/GSSG ratios had a greater impact on chilling sensitivity in rice than the other indexes examined in this study.

DISCUSSION

The combination of low temperature and high light intensity resulted in irreversible inhibition of photosynthesis, mainly due to modifications of membrane lipid composition and of the activities of antioxidant enzymes (Lyons et al., 1979; Noctor et al., 2000;
Li et al., 2002b; Viswanathan, 2006). More recently, studies have shown that electron transport is also a main process influenced by chilling treatment, accompanied by a decrease in the efficiency of light energy conversion in PS2 (Fv/Fm). In this paper, besides the factors mentioned above, antioxidants, reflected in the ASA/DHA ratio (r² = 0.811) and the GSH/GSSG ratio (r² = 0.728), also played an important role in chilling tolerance in rice. On the whole, we think the mechanism of tolerance to chilling temperature in the light in rice can be explained as follows: when rice plants encounter chilling stress, photosynthesis is heavily inhibited (Ben, 1987), and even more severely under weak light intensity (Murata, 1989). Light energy conversion in PS2 (Fv/Fm) is affected first; the repair capacity is weak and membrane proteins such as the PS2-D1 protein decompose easily. The activities of VDE (violaxanthin de-epoxidase) and SOD (superoxide dismutase) are thus depressed (Jiao and Ji, 2001). Consequently, the main increase in the glutathione pool appeared to be GSSG. As the GSH that is used for ASA regeneration did not increase, the regeneration of ASA was inhibited and H2O2 was not scavenged efficiently. Therefore, the accumulated O2- and H2O2 might attack the photosynthetic membranes, resulting in membrane damage. We therefore suggest that antioxidants such as ASA and GSH constitute another barrier, acting ahead of the modification of membrane lipid composition, under chilling treatment in the light.

Rice plants often encounter chilling stress in the field. Chilling stress decreases not only photosynthetic capacity but also yield in rice at late developmental stages, so breeding chilling-tolerant rice varieties is very important, and chilling identification is an important first step in evaluation. Previous identification methods relied on observation throughout the whole growth period, with seed-setting rate as the chilling index, which was time-consuming and limited the materials that could be screened. In this paper, 42 rice varieties were studied at three different stages. The results showed that the chilling identification at the booting stage was correlated with those at the bud (R² = 0.7980) and seedling stages (R² = 0.8873) (Figure 1), suggesting that chilling tolerance at earlier developmental stages could provide effective and simple indexes in the future. The japonica rice subspecies was usually more tolerant to low temperature, as indicated by a higher regeneration capacity for ASA and GSH and a higher index of unsaturated fatty acids under chilling in the light; the sensitive indica rice subspecies showed the opposite pattern, and the indica hybrid rice cultivars were intermediate. The strong ASA and GSH regeneration capacity of tolerant cultivars such as Wuyujing3 and Taipei309 might help to remove ROS efficiently.

In fact, long-term cultivation has created large differences between domesticated and wild rice, and differences in low-temperature tolerance among cultivars are not as pronounced as in wild rice. Cultivated rice with intermediate tolerance to low temperature might depend more on the regeneration of antioxidants such as ASA and GSH. Therefore, further investigations are needed to determine how GSH/GSSG and ASA/DHA exert their roles in the cells of cultivars with intermediate chilling tolerance.
Figure 1. The correlation between seed-setting rate and the livability of buds and seedlings after chilling treatment for the rice cultivars.

Figure 4. Changes in photosynthesis-CO2 response curves in the leaves of different rice varieties after chilling treatment.

Figure 5. Changes in Fv/Fm, Chl content, O2- generation rate, MDA content and H2O2 content in leaves of six rice varieties after treatment with low temperature in the light. Measurements were on the top two leaves. Means ± SE (n = 6).

Figure 6. Comparison of Chl content and Fv/Fm in leaves of six rice cultivars after treatment with low temperature in the light.

Table 1. The chilling tolerance identification of different rice cultivars at the bud, seedling and booting stages.
2018-12-22T02:09:39.530Z
2010-03-01T00:00:00.000
{ "year": 2010, "sha1": "d5ce6343e5da32d61dca35810558250f7245a4c3", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/AJB/article-full-text-pdf/E37A64417411.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "d5ce6343e5da32d61dca35810558250f7245a4c3", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
215749673
pes2o/s2orc
v3-fos-license
Identifying genetic variants underlying phenotypic variation in plants without complete genomes

Structural variants and presence/absence polymorphisms are common in plant genomes, yet they are routinely overlooked in genome-wide association studies (GWAS). Here, we expand the type of genetic variants detected in GWAS to include major deletions, insertions, and rearrangements. We first use raw sequencing data directly to derive short sequences, k-mers, that mark a broad range of polymorphisms independently of a reference genome. We then link k-mers associated with phenotypes to specific genomic regions. Using this approach, we re-analyzed 2,000 traits in Arabidopsis thaliana, tomato, and maize populations. Associations identified with k-mers recapitulate those found with single-nucleotide polymorphisms (SNPs), but with stronger statistical support. Importantly, we discovered new associations with structural variants and with regions missing from reference genomes. Our results demonstrate the power of performing GWAS before linking sequence reads to specific genomic regions, which allows detection of a wider range of genetic variants responsible for phenotypic variation.

Sequencing reads are cut into subsequences that are shorter than the original reads, termed k-mers. After k-mers have been extracted from all reads, k-mer sets from different samples can be compared against each other. Importantly, k-mers present in some samples, but missing from others, can identify a broad range of genetic variants. For example, two genomes differing in a SNP (Fig. 1A, Extended Data Fig. 1,2) will have k k-mers unique to each genome; this is true even if the SNP is found in a repeated region or a region not found in the reference genome. SVs, such as large deletions, inversions, and translocations will also result in k-mer differences. Therefore, instead of defining genetic variants in a population relative to a reference genome, a k-mer presence/absence pattern in raw sequencing data can be directly associated with phenotypes to enlarge the set of genetic variants tagged in GWAS 5 (a sketch of this first step is given below).

Reference-free GWAS based on k-mers has been used with bacteria, which have many dispensable genes 5-7. It has also been applied to human genomes, which are much larger and have many more unique k-mers 3,8, but this was restricted to case-control situations, and due to the high computational load, population structure was not corrected for all k-mers. While k-mer-based approaches are likely to be especially appropriate for plants, their large genomes, highly structured populations, and extensive genetic variation 9-11 make the use of existing k-mer methods difficult. An attempt with k-mers in plants was limited to a small subset of the genome, and also accounted for population structure only for a small subset of k-mers 12. Here, we present an efficient method for k-mer-based GWAS and compare it directly to the conventional SNP-based approach on more than 2,000 phenotypes from three species with different genome and population characteristics: A. thaliana, maize and tomato. In brief, we have inverted the conventional approach of building a genome, using it to find population variants, and finally associating variants with phenotypes. In contrast, we begin by associating sequencing reads with phenotypes, and only then infer the genomic context of associated sequences. We posit that this change of order is especially effective in plants, for which defining the full population-level genetic variation based on reference genomes remains highly challenging.
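To make the first step concrete, here is a minimal sketch of deriving canonical k-mers from reads and building a presence/absence table across accessions. It is written in Python for illustration; the study's actual pipeline uses KMC and C++ code and applies additional per-library count and strand filters. The cutoff of five accessions follows the text; everything else (data layout, function names) is illustrative.

```python
from collections import defaultdict

COMP = str.maketrans("ACGT", "TGCA")

def canonical_kmers(read: str, k: int = 31):
    """Yield each k-mer of the read in canonical form (the lexicographic
    minimum of the k-mer and its reverse complement), so that both strands
    of the same locus collapse to one entry."""
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        rc = kmer.translate(COMP)[::-1]
        yield min(kmer, rc)

def presence_absence_table(samples, min_accessions=5):
    """samples: dict mapping accession name -> iterable of reads.
    Returns a dict mapping k-mer -> set of accessions it appears in,
    keeping only k-mers present in at least min_accessions accessions."""
    table = defaultdict(set)
    for accession, reads in samples.items():
        for read in reads:
            for kmer in canonical_kmers(read):
                table[kmer].add(accession)
    return {km: accs for km, accs in table.items()
            if len(accs) >= min_accessions}
```

Each retained k-mer then behaves like a biallelic marker whose two "alleles" are presence and absence across the population.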
To assess statistical significance, we defined thresholds using permutations of the phenotype 15. This is computationally challenging, as the full GWA analysis has to be run many times. We therefore implemented a LMM-based GWA specifically optimized for the k-mers application (Extended Data Fig. 3C) 16,17. We calculated the p-value thresholds for SNPs and k-mers with a 5% chance of one false positive. The threshold for k-mers was higher than that for SNPs (35-fold), but lower than the increase in test number (140-fold), owing to the higher dependency among k-mers (Fig. 1A). Twenty-eight SNPs and 105 k-mers passed their corresponding thresholds. Using LD, we linked SNPs to k-mers directly, without locating the k-mers in the genome. Four families of linked genetic variants were identified with both methods (Fig. 1C). As expected, the k-mers tagged the same genomic loci as the corresponding SNPs (Fig. 1D; for 25 bp k-mers, Extended Data Fig. 5E). Therefore, k-mers identified the same genotype-flowering time associations as SNPs.

To increase the chances of discovering new associations, we evaluated 1,582 phenotypes from 104 A. thaliana studies (Supplementary Table 1, Fig. 2A). There was substantial overlap in significant SNP and k-mer associations (Fig. 2B), with the numbers of k-mer and SNP hits for each phenotype being highly correlated (Fig. 2C, Extended Data Fig. 6A). For 137 phenotypes, only a significant SNP could be identified, likely due to the more stringent thresholds for k-mers, as the most significant SNPs rarely passed the k-mer threshold in these cases (Fig. 2D). Moreover, often a k-mer passing the SNP threshold was in high LD with the top SNP (Fig. 2E). Although the k-mer thresholds were more stringent than the SNP thresholds (Extended Data Fig. 6B), for 129 phenotypes only k-mer but no SNP associations were identified. The p-values of top SNPs and k-mers were highly correlated (Fig. 2F), with top k-mers having a lower p-value in almost 9 out of 10 cases (Fig. 2G). In addition, we found that associated k-mers were on average closer to top SNPs than the other way around: 29% of top SNPs were in complete LD with associated k-mers vs. 13% the other way around, and 73% vs. 67% at LD ≥ 0.5 (Fig. 2H), consistent with k-mers often containing the top SNP, but SNPs in many cases only being linked to the causal variant identified by k-mers.

Case studies of k-mer superiority

In addition to simply improving the strength of associations (Extended Data Fig. 7A), we sought to identify cases where k-mers provided a conceptual improvement. We first looked at the fraction of dihydroxybenzoic acid (DHBA) xylosides among total DHBA glycosides 18 (red circle, Fig. 2F). In this case, all significant k-mers mapped uniquely near AT5G03490, encoding a UDP glycosyltransferase already identified in the original study (Fig. 3A, Extended Data Fig. 7C). The stronger k-mer associations could be traced back to two nonsynonymous SNPs, 4 bp apart, in the gene's coding region. Due to their proximity, one k-mer holds the state of both SNPs, and their combined information is more predictive of the phenotype than either SNP alone (Fig. 3B). Our next case study was seedling growth in the presence of a flg22 variant 19, for which we could map to the reference genome only three of the 10 significant k-mers, in the proximity of significant SNPs in AT1G23050 (Fig. 3C, Extended Data Fig. 7D). To identify the genomic source of the remaining seven k-mers, we assembled the short reads from which they originated. The resulting 962 bp fragment also included the three mappable k-mers (Fig. 3D),
but did not contain an 892 bp helitron TE 20 present in the reference genome. While the k-mer method did not identify a new locus, it revealed an SV as the likely cause of differences in flg22 sensitivity. Finally, we looked for phenotypes with only significant k-mers identified. One was germination in darkness under low nutrient supply 21, for which none of the 11 identified k-mers (Fig. 3E, Extended Data Fig. 7E,F) could be traced back to the reference genome. The reads containing these k-mers assembled into a 458 bp fragment that had a hit in the genome of Ler-0, a non-reference accession 22. The flanking sequences were syntenic with the reference genome, with a 2 kb SV that included the assembled 458 bp fragment (Fig. 3F). This variant affected the 3' UTR of the bZIP67 transcription factor gene. Accumulation of bZIP67 protein but not bZIP67 mRNA appears to mediate environmental regulation of germination 23; an SV in the 3' UTR is consistent with translational regulation of bZIP67. This case demonstrates the ability of our k-mer method to reveal associations with SVs not tagged by SNPs.

k-mer-based GWAS in maize

To demonstrate the usefulness of our approach with larger, more complex genomes, we turned to maize 24, a species with a ~2.5 Gb genome and extensive presence/absence variation of genes 10,25,26. We applied our approach to 252, mostly morphological, traits 27 in 150 inbred lines with short-read sequence coverage of at least 6x 28. A total of 2.3 billion k-mers were present in at least five accessions (Extended Data Fig. 8A). For 89 traits, significant associations were identified by at least one of the methods, and for 37 by both (Fig. 4A). As in A. thaliana, statistically significant variants as well as top associations were well correlated between the two methods.

A major challenge for maize is the high fraction of short reads that do not map uniquely to the genome. Previously, additional information had to be used to find the genomic position of SNPs, including population LD and genetic map position 28. We therefore compared SNPs and k-mers using LD, without locating k-mers in the genome. In several cases, a k-mer marked a common allele in the population with strong phenotypic effects, without the allele having been identified with SNPs. For example, for days-to-tassel, one clear SNP hit was also tagged by k-mers (Fig. 4D,E), but a second variant was only identified with k-mers. Another example was ear weight, for which no SNP (Extended Data Fig. 8F) but several unlinked k-mer-tagged variants were identified (Fig. 4F). Thus, new alleles with high predictive power for maize traits can be revealed using k-mers.

As with SNPs, the difficulty of unique short-read mapping also undermined the ability to identify the source of k-mers associated with specific traits. For example, we tried to locate the genomic position of the k-mer corresponding to the SNP associated with days-to-tassel on chromosome 3 (Fig. 4D). Only about 1% of the reads from which the k-mers originated could be mapped uniquely to the reference genome. However, when we assembled all these reads into a 924 bp contig, we could place it at the same position as the identified SNPs. This fragment had two single-base-pair differences relative to the reference genome, and was not in the proximity of any gene. Thus, we could use the richness of combining reads from several accessions to more precisely locate the variant origin.
k-mer-based GWAS in tomato

At ~900 Mb, the tomato genome is smaller than that of maize, but it presents its own challenges, as there is a complex history of introgressions from wild relatives into domesticated tomatoes 29,30. Starting with 981 million k-mers from 246 accessions (Extended Data Fig. 9A), we performed GWA on 96 metabolite measurements 31,32. For many metabolites, an association was identified by both methods, but three had only SNP hits and 13 only k-mer hits (Fig. 5A). Similar to the other species, the number of identified variants as well as the top p-values were correlated between methods (Fig. 5B,C). Top k-mer associations were also stronger than top SNP associations (Extended Data Fig. 9D), even more so than in A. thaliana or maize.

As a case study, we examined the concentration of guaiacol, responsible for a strong off-flavor in tomato 31. Associated SNPs were found on chromosome 9 and on what is called "chromosome 0" (Fig. 5D), which contains sequence scaffolds not assigned to the 12 nuclear chromosomes. Of the 293 guaiacol-associated k-mers, 180 could be mapped uniquely to the genome, all close to significant SNPs. Among the remaining k-mers, of particular interest was a group of 35 k-mers in high LD and with especially low p-values (Fig. 5E). Assembly of the corresponding short reads resulted in a 1,172 bp fragment, of which the first 574 bp aligned near significant SNPs on chromosome 0 (Fig. 5F), and the remainder matched the non-reference NON-SMOKY GLYCOSYLTRANSFERASE 1 (NSGT1) gene, which had been originally pinpointed as causal for variation in guaiacol 33. The 35 significant k-mers covered the junction between these two mappable regions. Most of the NSGT1 coding sequence is absent from the reference genome, but present in other accessions. The significant SNPs identified on chromosomes 0 and 9 apparently represent the same region in other accessions, connected by the fragment we assembled (Fig. 5F). Thus, we identified an association outside the reference genome, and linked the SNPs on chromosome 0 to chromosome 9.

k-mer-based kinship estimates

We have shown that one can assemble short fragments from k-mer-containing short reads and find hits not only in the reference genome, but also in other published sequences. This opens the possibility of applying our method to species without a high-quality reference genome, since contigs that include multiple genes can be generated relatively easily and cheaply 34. The major question with such an approach is then how to correct for population structure in the GWA step without kinship information from SNPs, which is determined by mapping to a reference genome. To learn kinship directly from k-mers, we estimated relatedness using k-mers, with presence/absence as the two alleles. We calculated the relatedness matrices for A. thaliana, maize, and tomato and compared them to the SNP-based relatedness. In all three species there was agreement between the two methods, although initial results were clearly better for A. thaliana and maize than for tomato (Fig. 6). The inferior performance in tomato was due to 21 accessions (Extended Data Fig. 10) that appeared to be more distantly related to the other accessions based on k-mers than had been estimated with SNPs. This is likely due to these accessions containing diverged genomic regions that perform poorly in SNP calling, resulting in inaccurate relatedness estimates.
In conclusion, k-mers can be used to calculate relatedness between individuals, thus paving the way for GWAS in organisms without high-quality reference genomes.

Discussion

The complexity of plant genomes can make SNP-based identification of genotype-phenotype associations challenging. We have shown that k-mers can not only identify almost all associations found by SNPs and short indels, but also SVs and variants in sequences not present in reference genomes. The expansion of variant types detected by the k-mer method complements SNP-based approaches, and increases opportunities for finding and exploiting complex genetic variants driving phenotypic differences in plants, regardless of reference genome quality.

k-mers mark genetic polymorphisms in the population, but the types and genomic positions of these polymorphisms are initially not known. While one can also use k-mers for predictive models without knowing their genomic context, in many cases the genomic context of associated k-mers is of interest. The simplest solution is to align k-mers or the corresponding short reads to a reference genome 35. More interesting are cases where k-mers cannot be placed on the reference genome. For these, one can first identify the originating short reads and assemble them into larger fragments, which is a very effective path to uncovering the genomic context of k-mers. The resulting fragment also captures phased haplotype information. Combining reads from multiple accessions can provide high local coverage around k-mers of interest, increasing the chances that sizeable fragments can be assembled and located.

A further improvement will be the use of k-mers to tag heterozygous variants. In our current implementation, which relies on presence/absence of k-mers, one of the homozygous states has to be clearly differentiated not only from the alternative homozygous state, but also from the heterozygous state. This did not affect comparisons between SNPs and k-mers in this study, as we only looked at inbred populations, where only homozygous, binary states are expected. Another improvement will be the use of k-mers to detect causal copy number variations. So far, we can only tag copy number variants if the junctions produce unique k-mers, but it would be desirable to also use k-mers inside copy number variants. Normalized k-mer counts would create a framework that could, at least in principle, detect almost any kind of genomic variation.

The comparison of k-mer- and SNP-based GWAS provides an interesting view of tradeoffs in the characterization of genetic variability. The lower top p-values obtained with k-mers where a SNP is the underlying variant suggest incomplete use of existing information in SNP calling. On the other hand, our analysis likely included some k-mers that represent only sequencing errors. While requiring k-mers to appear multiple times in a sequencing library and in multiple individuals removes most sequencing errors, this can also lead to some k-mers being labeled erroneously as absent. Finally, there is the increase in test load, an inevitable result of increasing the search space to tag more genetic variants. k-mers invert how GWAS is usually done: instead of first locating sequence variations in the genome, we begin with sequence-phenotype associations and only then find the genomic context of associated sequences.
Technological improvements in short- and long-read sequencing, as well as methods to integrate them into a population-level genetic variation data structure, will expand the genetic variants covered 36,37. While traditional GWAS methods will benefit from these technological improvements, so will k-mer-based approaches, which will be able to use tags spanning larger genomic distances. Therefore, we posit that for GWAS purposes, k-mer-based approaches are ideal because they minimize arbitrary choices when classifying alleles and because they capture more, almost optimal, information from raw sequencing data.

Curation of an A. thaliana phenotype compendium

Studies containing phenotypic data on A. thaliana accessions were located by searching NCBI PubMed using a set of general terms. For most studies, relevant data were obtained from the supplementary information; otherwise, requests were sent to the corresponding authors. Data already uploaded to the AraPheno dataset 38 were downloaded from there. Phenotypic data in PDF format were extracted using Tabula software. Different accession naming schemes were converted to accession indices. If an index for an accession could not be located, we omitted the corresponding data point. If an accession could potentially be assigned to different indices, we first checked whether it was part of the 1001 Genomes project; if so, we used the 1001 Genomes index. If the accession was not part of it, one of the possible indices was assigned at random. Metabolite phenotypes from two studies 39,40 were filtered to a reduced set by the following procedure: take the first phenotype, then sequentially retain each phenotype if its correlation with all previously retained phenotypes is lower than 0.7 (a sketch of this filter is given below). Data from the second study 40 were further filtered for phenotypes with a title. Categories were assigned to each phenotype manually (Supplementary Table 1). All processed phenotypic data can be found at https://zenodo.org/record/3701176#.XmX9u5NKhhE

Whole genome sequencing data and variant calls of A. thaliana

Whole genome short reads for 1,135 A. thaliana accessions were downloaded from NCBI SRA (accession SRP056687). Accessions with fewer than 10⁸ unique k-mers, a proxy for low effective coverage, were removed, resulting in a set of 1,008 accessions. The 1001 Genomes project VCF file with SNPs and short indels was downloaded from http://1001genomes.org/data/GMI-MPI/releases/v3.1 and condensed to these 1,008 accessions using vcftools v0.1.15 41. We required a minor allele count (MAC) of 5 individuals, resulting in 5,649,128 genetic variants. The VCF file was then converted to a PLINK binary file using PLINK v1.9 42. The TAIR10 reference genome was used for short read and k-mer alignments. Coordinates for genes in figures were taken from Araport11 43.

Whole genome sequencing data and variant calls of maize

Whole genome short reads of maize accessions corresponded to the "282 set" part of the maize HapMap3.2.1 project 28. Sequencing libraries "x2" and "x4" were downloaded from NCBI SRA (accession PRJNA389800) and combined. Coverage per accession was calculated as the number of reads multiplied by the read length and divided by the genome size; data for 150 accessions with coverage >6x were used. Phenotypic data for 252 traits measured for these accessions were downloaded from Panzea 27.
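The greedy correlation filter described above can be written in a few lines. A sketch assuming the phenotypes are columns of a complete numeric matrix; the missing-data handling of the actual curation is not reproduced here:

```python
import numpy as np

def greedy_correlation_filter(phenos: np.ndarray, max_corr: float = 0.7) -> list:
    """Take the first phenotype; sequentially retain each subsequent one only
    if its Pearson correlation with every previously retained phenotype is
    lower than max_corr. phenos has shape (n_samples, n_phenotypes);
    returns the indices of the retained columns."""
    kept = [0]
    for j in range(1, phenos.shape[1]):
        corrs = (np.corrcoef(phenos[:, j], phenos[:, k])[0, 1] for k in kept)
        if all(c < max_corr for c in corrs):
            kept.append(j)
    return kept
```

Note that the result depends on the ordering of the phenotypes, which is why the text specifies that metabolites were kept in their originally reported order.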
Two of these phenotypes were constant over more than 90% of the 150 accessions and were removed from further analysis ("NumberofTilleringPlants_env_07A", "TilleringIndex-BorderPlant_env_07A").

Whole genome sequencing data and variant calls of tomato

Tomato metabolite phenotypes 32 were filtered to a reduced set by the following procedure: take the first phenotype, then sequentially retain each phenotype if its correlation with all previously retained phenotypes is lower than 0.7. Metabolites were ordered as reported originally 32. Only one repeat, the one with more data points, requiring at least 40 data points, was retained. The VCF file with SNPs and short indels 31 was obtained from the authors and filtered for the relevant 246 accessions. Variants were further filtered for a MAC of ≥5, resulting in a final set of 2,076,690 variants. The reference genome SL2.5 29 (https://www.ncbi.nlm.nih.gov/assembly/GCF_000188115.3/) used to create the VCF file was used for alignments.

Calculation and comparison of kinship matrices

The kinship matrix of relatedness between accessions was calculated as in EMMA 46, with default parameters. The algorithm was re-coded in C++ to read PLINK binary files directly. For k-mer-based relatedness the same algorithm was used, coding presence/absence as two alleles (a code sketch is given below). For the comparison of k-mer- to SNP-based relatedness, we correlated (Pearson) the values over all pairs of the n accessions. For tomato, 3,492 pairs had a relatedness more than 0.15 lower for k-mers than for SNPs; 3,298 (94.4%) of these pairs were between a set of 21 accessions and all other 225 accessions. We calculated the correlation twice: for all pairs, and only between pairs of these 225 accessions.

GWA on SNPs and short indels or on the full k-mers table

Genome-wide association on the full set of SNPs and short indels was conducted using linear mixed models with the kinship matrix, using GEMMA version 0.96 14. The minor allele frequency (MAF) was set to 5% and the MAC to 5, with a maximum of 50% missing values (-miss 0.5). To run GWA on the full set of k-mers (e.g. in Fig. 1B), k-mers were first filtered to those with a unique presence/absence pattern in the relevant set of accessions, a MAF of at least 5%, and a MAC of at least 5. Presence/absence patterns were then condensed to only the relevant accessions and output directly as a PLINK binary file. GEMMA was then run using the same parameters as for the SNP GWA described above.

Phenotype covariance matrix estimation and phenotype permutations

EMMA (the emma.REMLE function) was used to calculate the variance components, which were used to calculate the phenotypic covariance matrix 46. We then calculated 100 permutations of the phenotype using the mvnpermute R package 15. The n% (e.g. n = 5 gives 5%) family-wise error rate threshold was defined by taking the n-th top p-value from the 100 top p-values of running GWA on each permutation. In all cases, unless indicated otherwise, the 5% threshold was used.

Scoring p-values from GWA for similarity to a uniform distribution and filtering phenotypes

Each SNP-based GWA run was scored for a general bias in p-value distribution, similar to Atwell et al. 47. All SNP p-values were collected, the highest 99% of p-values were tested against the uniform distribution using a Kolmogorov-Smirnov test, and the test statistic was used to filter phenotypes whose distribution deviated significantly from the uniform distribution. A threshold of 0.05 was used, filtering 89, 0, and 295 phenotypes for A. thaliana, maize and tomato, respectively.
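The k-mer-based relatedness computation referred to above — presence/absence treated as the two alleles of a variant and fed into the same centered-genotype estimator used for SNPs — can be sketched as follows. This is an approximation of an EMMA-style kinship, not the authors' C++ implementation:

```python
import numpy as np

def kinship_from_presence_absence(X: np.ndarray) -> np.ndarray:
    """X: (n_accessions, n_kmers) 0/1 presence/absence matrix.
    Returns an (n_accessions, n_accessions) relatedness matrix obtained by
    centering each variant and averaging the cross-products, mirroring a
    centered SNP kinship; EMMA's exact estimator differs in details."""
    Xc = X - X.mean(axis=0, keepdims=True)   # center each k-mer column
    return (Xc @ Xc.T) / X.shape[1]

def kinship_agreement(K1: np.ndarray, K2: np.ndarray) -> float:
    """Pearson correlation of the off-diagonal entries of two kinship
    matrices, as used to compare k-mer- and SNP-based relatedness."""
    iu = np.triu_indices_from(K1, k=1)
    return float(np.corrcoef(K1[iu], K2[iu])[0, 1])
```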
k-mer GWA

Association of k-mers was done in two steps, with the aim of obtaining the most significant k-mer p-values. The first step was based on the approach used in BOLT-LMM-inf and GRAMMAR-Gamma 16,17. For phenotypes y, genotypes g, and a covariance matrix Ω, the k-mer score is

$$T_{\mathrm{score}}^2 = \frac{1}{\gamma}\,\frac{(\tilde{g}^{T}\,\Omega^{-1}\,\tilde{y})^2}{\tilde{g}^{T}\tilde{g}},$$

where $\tilde{g} = g - E(g)$ and $\tilde{y} = y - E(y)$. The first step was used only to filter a fixed number of top k-mers; thus we could use any score monotonic in $T_{\mathrm{score}}^2$, specifically $(\tilde{g}^{T}\Omega^{-1}\tilde{y})^2 / (\tilde{g}^{T}\tilde{g})$, which is independent of γ (see the supplementary note on calculation optimization and Supplementary Table 3; a sketch is given below). In the second step, the best k-mers were run through GEMMA to calculate likelihood-ratio-test p-values 14. The number of k-mers filtered in the first step was set to 10,000 for A. thaliana and 100,000 for maize and tomato. Both steps associate k-mers while accounting for population structure; the first step uses an approximation, while the second uses an exact model. Therefore, true top k-mers might be lost if they do not pass the first filtering step. To control for this, we first defined the 5% family-wise error-rate threshold based on the phenotype permutations, and then identified all the k-mers that passed the threshold. Next, we used the following criterion to minimize the chance of losing k-mers: we checked whether all identified k-mers were in the top N/2 k-mers from the ordering of the first step (N = 10,000 or 100,000, depending on species). For example, in maize all k-mers passing the threshold in the second step should be in the top 50,000 k-mers from the first step. The probability that this happens randomly is 2^-m, where m is the number of identified k-mers; for most phenotypes this is very unlikely. For 8.5% of the A. thaliana phenotypes the criterion was not fulfilled; for these phenotypes we re-ran the two steps with 100× more k-mers filtered in the first step, that is, 1,000,000 k-mers. For 6 phenotypes the criterion still did not hold; these phenotypes were not used in further analysis. In tomato, 33% of phenotypes did not fulfill the criterion; in these cases we re-ran the first step with 100× more k-mers filtered (10,000,000). Seventeen phenotypes still did not fulfill the criterion and were omitted from further analysis. The permutations were not re-run, and the threshold defined using 100,000 k-mers was used, as the top k-mers used to define the threshold tended to be high in the list. For maize, all phenotypes passed the criterion and no re-running was needed.

SNP-based GWAS on phenotype permutations

To calculate thresholds for SNP-based GWAS we used the two-step approach used for k-mers. The permuted phenotypes were run in two steps, as we were only interested in the top p-value to define thresholds. We filtered 10,000 variants in the first step, which were then run through GEMMA to get exact scores 14. The non-permuted phenotypes were run using GEMMA on all variants.
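The efficiency of the first step comes from the fact that Ω⁻¹ỹ is fixed for a given phenotype, so scoring each k-mer costs only one dot product over the accessions. A sketch of the γ-independent score used for the initial ranking (illustrative, not the authors' implementation):

```python
import numpy as np

def first_step_score(g: np.ndarray, omega_inv_y: np.ndarray) -> float:
    """Score proportional to T^2: (g~^T Omega^{-1} y~)^2 / (g~^T g~).

    g: 0/1 presence/absence vector of one k-mer over accessions.
    omega_inv_y: Omega^{-1} (y - mean(y)), precomputed once per phenotype."""
    gc = g - g.mean()                       # center the genotype vector
    return float(gc @ omega_inv_y) ** 2 / float(gc @ gc)

# Per phenotype, solve once:
#   omega_inv_y = np.linalg.solve(Omega, y - y.mean())
# then rank all k-mers by first_step_score and pass the top N (10,000 or
# 100,000, as in the text) to the exact LMM (GEMMA) for p-values.
```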
Calculation of linkage disequilibrium (LD)

For two variants x and y, each of which can be a k-mer or a SNP, LD was calculated using the r² measure 48. For a k-mer, the variant was coded as 0/1 for absent or present, respectively. For SNPs, one allele was coded as 0 and the other as 1. If one of the variants had a missing or heterozygous value in a position, this position was not used in the analysis. The LD value was calculated using the formula (a code sketch is given below):

$$r^2 = \frac{\left(p(x{=}1,\,y{=}1) - p(x{=}1)\,p(y{=}1)\right)^2}{p(x{=}1)\,p(y{=}1)\,p(x{=}0)\,p(y{=}0)}$$

Comparing Col-0 and Ler genome assemblies with k-mers

The lists of 31 bp k-mers that are part of the Col-0 TAIR10 and Ler genomes 22 were created using KMC v3 45. The k-mer lists from the two genomes were filtered for k-mers appearing in a single genome and appearing only once in that genome. The positions of the filtered k-mers were identified by checking each position in the genome against the filtered lists. In Extended Data Fig. 1, k-mers from these lists are plotted around four variants defined previously 22. The statistics presented in Extended Data Fig. 2 are for all variants reported in the Supplementary Tables of Zapata and colleagues 22, under the titles "Lindel_Allelic", "Lindel_NonAllelic", "IntraChromTransloc", "InterChromTransloc", and "InversionSites".

Calculating LD of the closest SNP/k-mer (Supplementary Figure 1)

To calculate LD between all k-mers and all SNPs in the A. thaliana 1001 Genomes Project (1001G) collection, the 1001G imputed SNP matrix was used 49.

LD cumulative graph (Figure 2E,H)

For a set of phenotypes and for every l = 0, 0.05, ..., 1, we calculated the percentage of phenotypes for which there exists a k-mer or a SNP in the pre-defined group that is in LD ≥ l with the top SNP or top k-mer, respectively. The pre-defined groups are (1) all the k-mers that passed the SNP-defined threshold in Figure 2E, or (2) all the SNPs or k-mers that passed their own defined thresholds in Figure 2H. The percentage is then plotted as a function of l.

Retrieving source reads of a specific k-mer and assembling them

For a k-mer identified as being associated with a phenotype, we first looked in the k-mers table and identified all accessions taking part in the association analysis in which this k-mer was present. For each of these accessions, we went over all sequencing reads and retained all paired-end reads that contained the k-mer. Paired reads were assembled with SPAdes v3.11.1, using the "--careful" parameter 50.

Analysis of flowering time at 10°C

To find the genomic locations of the 105 identified k-mers, the k-mers were first mapped to the A. thaliana genome: 84 of the k-mers had a unique mapping, one mapped to multiple locations and 20 could not be mapped. For the 21 k-mers with no unique mapping, we located the sequencing reads they originated from and mapped the reads to the A. thaliana genome, considering for each k-mer only the reads with the top mapping scores. For the one k-mer with multiple possible alignments, the originating reads also did not have a consensus mapping location in the genome. For every k-mer from the 20 unmapped k-mers, all top reads per k-mer, in all but one case, mapped to a specific region spanning a few hundred base pairs. The middle of this region was defined as the k-mer position for the Manhattan plot in Figure 1D. To find the locations of all k-mers presented in Extended Data Fig. 5D, we used only uniquely mapping k-mers. To find the locations of the 93 associated k-mers of length 25 bp presented in Extended Data Fig. 5E, we followed the same procedure: 87 k-mers had a unique mapping, one mapped multiple times and 5 could not be mapped. For the 5 non-mappable k-mers and the k-mer with non-unique mapping, we located the originating short reads and aligned them to the genome.
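The r² formula given at the top of this section translates directly to code. A sketch for two 0/1-coded variants, with missing calls encoded as NaN:

```python
import numpy as np

def ld_r2(x: np.ndarray, y: np.ndarray) -> float:
    """r^2 between two variants coded 0/1 (k-mer absence/presence, or the
    two alleles of a biallelic SNP). Positions with a missing value in
    either variant are dropped, as described in the text."""
    ok = ~(np.isnan(x) | np.isnan(y))
    x, y = x[ok], y[ok]
    p_x1, p_y1 = x.mean(), y.mean()
    p_both = np.mean((x == 1) & (y == 1))
    num = (p_both - p_x1 * p_y1) ** 2
    return float(num / (p_x1 * p_y1 * (1 - p_x1) * (1 - p_y1)))
```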
Analysis of flowering time in 10C

To find the genomic locations of the 105 identified k-mers, k-mers were first mapped to the A. thaliana genome. 84 of the k-mers had a unique mapping, one mapped to multiple locations, and 20 could not be mapped. For the 21 k-mers without a unique mapping, we located the sequencing reads they originated from and mapped these reads to the A. thaliana genome. For each of the k-mers we considered only the reads with the top mapping scores. For the one k-mer with multiple possible alignments, the originating reads likewise did not have a consensus mapping location in the genome. For every k-mer from the 20 non-mapped k-mers, all top reads per k-mer (in some cases all but one) mapped to a specific region spanning a few hundred base pairs. The middle of this region was defined as the k-mer position for the Manhattan plot in Figure 1D. To find the locations of all k-mers presented in Extended Data Fig. 5D, we used only uniquely mapping k-mers. To find the locations of the 93 associated k-mers of length 25 bp, presented in Extended Data Fig. 5E, we followed the same procedure: 87 k-mers had a unique mapping, one mapped multiple times, and 5 could not be mapped. For the 5 non-mappable k-mers and the k-mer with non-unique mapping, we located the originating short reads and aligned them to the genome.

The phenotype in the original study was labeled "flgPsHRp" 19. For each of the 7 k-mers which could not be mapped uniquely to the genome, the originating reads from all accessions were retrieved and assembled. All seven cases resulted in the same assembled fragment (SEQ1, Supplementary Table 2). Using NCBI BLAST we mapped this fragment to chromosome 1: positions 40-265 mapped to 8169229-8169455 and positions 262-604 mapped to 8170348-8170687. For every accession from the 106 that were used in the GWAS analysis we tried to locally assemble this region, to see if the junction between chromosome 1 positions 8169455-8170348 could be identified. We used all the 31-bp k-mers from the above assembled fragment as bait, and located all the reads for each accession separately. For 11 out of the 13 accessions that had all 10 identified k-mers, the assembly process produced a fragment; in all 11 cases the exact same junction was identified. For 1 of the 4 accessions that had only part of the 10 identified k-mers, the assembler produced a fragment with the same junction. For 43 of the 89 accessions that had none of the identified k-mers the assembly process resulted in a fragment; in none of these cases could the above junction be identified.

Analysis of germination in darkness and low nutrients (Figure 3E, F)

The phenotype in the original study was labeled "k_light_0_nutrient_0" 21. The 11 identified k-mers had two possible presence/absence patterns, separating them into two groups of 4 and 7 k-mers. The short-read sequences containing the 4 or 7 k-mers were collected separately and assembled, resulting in the exact same 458-bp fragment (SEQ2, Supplementary Table 2).

Analysis of root branching zone (Supplementary Figure 2)

The phenotype in the original study was labeled "Mean(R)_C", that is, branching zone with no treatment 53. No SNPs and 1 k-mer (AGCTACTTTGCCACCCACTGCTACTAACTCG) passed their corresponding 5% thresholds. The k-mer mapped to the chloroplast genome at position 40297, with 1 mismatch. No SNPs and another k-mer (CCGGCGATTACTAGAGATTCCGGCTTCATGC) passed the 10% family-wise error-rate threshold. This k-mer mapped non-uniquely to two places in the chloroplast genome: 102285 and 136332.

Analysis of Lesion by Botrytis cinerea UKRazz (Extended Data Fig. 7A)

The Lesion by Botrytis cinerea UKRazz phenotype was labeled "Lesion_redgrn_m_theta_UKRazz". In the GWA analysis, 19 k-mers and no SNPs were identified. All k-mers had the same presence/absence pattern. The short-read sequences from which the k-mers originated mapped to chromosome 3 around position 72,000 bp, and contained a 1-bp deletion of a T nucleotide at position 72,017. Whole-genome sequencing reads were mapped to the genome for the 61 accessions with phenotypes used in these analyses. We manually inspected the alignments around position 72,017 of chromosome 3, without prior knowledge of whether the accession carried the identified k-mers. For 20 accessions we observed the 1-bp deletion at position 72,017; all 19 accessions containing the k-mers were among these 20.

Analysis of days-to-tassel and ear weight in maize (Figure 4)

The ear weight phenotype was labeled "EarWeight_env_07A" in the original dataset 27. Days to tassel was measured in growing degree days (GDD) and was labeled "GDDDaystoTassel_env_06FL1" in the original dataset. In the comparison of LD between k-mers and SNPs for days to tassel (Fig. 4E, upper panel), two SNPs were filtered out as having more than 10% heterozygosity and one as having exactly 50% missing values.
For days to tassel, the k-mer which was similar to identified SNPs was AGAAGATATCTTATGAACTCCTCACCAGTAA. The 171 paired-end reads from which this k-mer originated mapped to the genome as follows: 2 (1.17%) aligned concordantly 0 times, 2 (1.17%) aligned concordantly exactly 1 time, and 167 (97.66%) aligned concordantly more than 1 time. The assembly of these reads produced two fragments, the first of length 273 bp with coverage of 1.23 and the second of length 924 bp with coverage of 27.41 (SEQ3, Supplementary Table 2). We aligned this second fragment to the genome using Minimap2 with the default parameters 54. K-mers not mapping to the genome were checked manually by locating the reads containing them and aligning those reads to the genome; in all cases essentially no reads (>99.5% of reads) could be aligned to the genome. For the 35 k-mers not mapping to the genome and in high LD, visualized in Figure 5E, all reads containing at least one of the k-mers were retrieved and assembled (SEQ4, Supplementary Table 2).

While in the genome with the apparent deletion only the junction between the two fragments will be tagged by unique k-mers, in the genome with the apparent insertion the entire insert will be tagged (bottom panel). Only 0.4% of the previously characterized long insertions/deletions are not tagged by unique k-mers.

Extended Data Fig. 3. Pipeline for k-mer-based GWAS. (A) Creating the k-mer presence/absence table: each accession's genomic DNA sequencing reads are cut into k-mers 45, filtering out k-mers appearing fewer than two/three times in a sequencing library. k-mers are further filtered to retain only those present in at least 5 accessions, and ones that are found in both forward and reverse-complement form in at least 20% of the accessions they appear in. All k-mer lists are combined into a k-mer presence/absence table. (B) Genome-wide associations on the full k-mers table using SNP-based software: the k-mers table is converted into PLINK binary format, which is used as input for SNP-based association mapping software 14,42. (C) GWA optimized for k-mers: k-mer presence/absence patterns are first associated with the phenotype and its permutations using an LMM to account for population structure 16,17. This first step is done by calculating an approximated score of the exact model. The best k-mers from this first step (e.g., 100,000 k-mers) are passed to the second step, in which an exact p-value is calculated 14 for both the phenotype and its permutations. A permutation-based threshold is calculated, and all k-mers passing this threshold are checked for their rank in the scoring from the first step. If not all k-mer hits are in the top 50% of the initial scoring, the entire process is rerun from the beginning, passing more k-mers from the first to the second step. This last test confirms that the approximation of the first step does not remove truly associated k-mers.
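As a rough, simplified illustration of step (A) of the pipeline legend above, the sketch below builds a presence/absence table from per-accession k-mer sets in pure Python; the per-library abundance filter, the forward/reverse-complement canonicalization, and the KMC-based counting of the real pipeline are omitted, and the function names are ours.

```python
def kmer_set(seq, k=31):
    """All k-mers of one sequence (no canonicalization)."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def presence_absence(kmer_sets, min_accessions=5):
    """Map each retained k-mer to a 0/1 presence vector over accessions,
    keeping only k-mers present in at least `min_accessions` accessions."""
    counts = {}
    for s in kmer_sets:
        for km in s:
            counts[km] = counts.get(km, 0) + 1
    kept = [km for km, c in counts.items() if c >= min_accessions]
    return {km: [int(km in s) for s in kmer_sets] for km in kept}
```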
2020-04-14T14:27:44.859Z
2020-04-13T00:00:00.000
{ "year": 2020, "sha1": "0747c42f3b270642e4af31c2469e7bf35d4f2769", "oa_license": null, "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7610390", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "e35582abd4c9f8eac94fc81943e1f71e9323e2fa", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
10525276
pes2o/s2orc
v3-fos-license
Tocotrienol-Rich Fraction Ameliorates Antioxidant Defense Mechanisms and Improves Replicative Senescence-Associated Oxidative Stress in Human Myoblasts During aging, oxidative stress affects the normal function of satellite cells, with consequent regeneration defects that lead to sarcopenia. This study aimed to evaluate tocotrienol-rich fraction (TRF) modulation in reestablishing the oxidative status of myoblasts during replicative senescence and to compare the effects of TRF with other antioxidants (α-tocopherol (ATF) and N-acetyl-cysteine (NAC)). Primary human myoblasts were cultured to young, presenescent, and senescent phases. The cells were treated with antioxidants for 24 h, followed by the assessment of free radical generation, lipid peroxidation, antioxidant enzyme mRNA expression and activities, and the ratio of reduced to oxidized glutathione. Our data showed that replicative senescence increased reactive oxygen species (ROS) generation and lipid peroxidation in myoblasts. Treatment with TRF significantly diminished ROS production and decreased lipid peroxidation in senescent myoblasts. Moreover, the gene expression of superoxide dismutase (SOD2), catalase (CAT), and glutathione peroxidase (GPX1) was modulated by TRF treatment, with increased activity of superoxide dismutase and catalase and reduced glutathione peroxidase activity in senescent myoblasts. In comparison to ATF and NAC, TRF was more efficient in heightening the antioxidant capacity and reducing free radical insults. These results suggested that TRF is able to ameliorate antioxidant defense mechanisms and improve replicative senescence-associated oxidative stress in myoblasts.

Introduction

Adult skeletal muscle contains a subpopulation of cells that readily proliferate and differentiate when required to maintain the structure and function of skeletal muscle [1]. These cells were first identified by Mauro in 1961 as quiescent cells located between the basal lamina and sarcolemma of myofibers, known as satellite cells [2]. Human satellite cells can be isolated and cultured in vitro with a limited proliferative capacity depending on the donor age. Proliferating satellite cells are known as myoblasts [3]. The proliferative lifespan of myoblasts remains stable during adulthood but decreases from infancy to adolescence, and the cells ultimately reach replicative senescence [4]. During aging, a progressive loss of muscle mass and strength is observed; this phenomenon is known as sarcopenia. Although the underlying mechanism is still uncertain, sarcopenia is believed to be the result of certain intrinsic or extrinsic factors, such as immobilization, chronic diseases, hormonal changes, and proinflammatory factors, as well as nutritional status in older adults [5]. Additionally, the accumulation of reactive oxygen species (ROS) has been suggested to play a vital role in this age-related muscle atrophy [6]. Redox imbalance observed in senescent satellite cells can be attributed to elevated ROS production or an impaired endogenous antioxidant defense system, leading to oxidative damage [7,8]. The vulnerability of proliferating myoblasts to oxidative damage affects muscle regeneration and contributes to the development of sarcopenia, suggesting that oxidative stress, satellite cells, and sarcopenia are interrelated [6,7]. Oxidative stress in aged skeletal muscle can cause oxidative damage in cells, manifested as damaged DNA, lipid peroxidation, and protein carbonylation [9,10].
In muscle fibers, free radicals can be produced intrinsically by mitochondria and regulate fundamental signaling pathways in skeletal muscle. The presence of reactive oxygen species (ROS) or reactive nitrogen species (RNS) can be counteracted by the antioxidant defense system, which includes antioxidant enzymes, vitamins, and glutathione, resulting in sustained redox balance [9]. If the antioxidant defense is overwhelmed by excess ROS or RNS, oxidative stress occurs, which leads to muscle injury [8,10]. In addition to the existing oxidative stress during aging, insufficient antioxidant intake among the elderly can contribute to the occurrence of sarcopenia [11]. Low antioxidant levels in older individuals were associated with poor muscle strength and low physical performance and can cause frailty in the elderly [12,13]. An in vivo study demonstrated that vitamin E deficiency caused poor muscle performance and accelerated aging [14]. Hence, introducing antioxidants such as vitamin E could be a relevant strategy to delay sarcopenia progression; however, more studies are needed [15]. Vitamin E is a lipid-soluble vitamin with two subclasses, tocopherols and tocotrienols [16]. A previous study reported that α-tocopherol was able to repair laser-induced disrupted myoblast membranes, indicating a therapeutic effect for vitamin E in muscle [17]. However, the less-explored subtype of vitamin E is the class of tocotrienols. Similar to α-tocopherol, a potential therapeutic effect of tocotrienol-rich fraction (TRF) was proposed owing to its reversal effect on a stress-induced premature senescence (SIPS) model of myoblasts [18]. In our laboratory, we also found that TRF was superior to α-tocopherol in ameliorating replicative senescence-related aberrations and promoting myogenic differentiation [19]. Thus, it would be of interest to elucidate the effects of tocotrienols on the dynamics of oxidative status in senescent myoblasts. Therefore, the aims of this study were to investigate the effects of tocotrienol-rich fraction (TRF) in reestablishing oxidative status during replicative senescence of myoblasts and to compare these effects with other antioxidants, such as α-tocopherol (ATF) and N-acetyl-cysteine (NAC), in young, presenescent, and senescent myoblasts, followed by the measurement of cell viability and apoptosis as the final outcomes of antioxidant treatment.

Cell Culture. Human Skeletal Muscle Myoblasts (HSMM) were purchased from Lonza (Walkersville, MD, USA). Briefly, myoblasts were cultured in Skeletal Muscle Basal Medium (SkBM) supplemented with human epidermal growth factor, fetal bovine serum, dexamethasone, L-glutamine, and gentamicin sulfate/amphotericin B (Lonza, Walkersville, MD, USA). Cells were cultivated at 37 °C in a humid atmosphere containing 5% CO2. The myoblasts then underwent serial passaging. The number of divisions was calculated for each passage using the formula $\ln(N/n)/\ln 2$, where N is the number of cells at the harvest stage and n is the number of cells at the seeding stage [20]. When cells reached replicative senescence, they were unable to proliferate within 10 days in culture. Myoblasts were divided into 3 different stages, young (<15 cell divisions), presenescent (18-19 cell divisions), and senescent (>20 cell divisions), based on their decreasing proliferative capacity, which was represented by a hyperbolic proliferative lifespan curve and a diminishing percentage of BrdU incorporation. The presence of senescent cells was confirmed by SA-β-gal staining [19].
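The passage-counting formula above is simple enough to state as code; a one-function sketch (the function name is illustrative):

```python
import math

def divisions_per_passage(n_seeded, n_harvested):
    """Number of cell divisions in one passage: ln(N/n)/ln 2,
    where N is the harvested and n the seeded cell count."""
    return math.log(n_harvested / n_seeded) / math.log(2)

# seeding 1e5 cells and harvesting 8e5 corresponds to 3 divisions
assert abs(divisions_per_passage(1e5, 8e5) - 3.0) < 1e-9
```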
Antioxidants. Tocotrienol-rich fraction (TRF) was purchased from Sime Darby Sdn. Bhd., Selangor, Malaysia (TRF Gold TRI E 70), while alpha-tocopherol (ATF) was a gift from the Malaysian Palm Oil Board (MPOB) (Selangor, Malaysia). Both vitamin E subclasses are palm oil-derived. TRF consists of α-tocotrienol (ATT; 26.89%), β-tocotrienol (BTT; 3.64%), γ-tocotrienol (GTT; 31.66%), δ-tocotrienol (DTT; 13.66%), and α-tocopherol (ATF; 24.15%). Briefly, stock solutions of TRF were freshly prepared in 100% ethanol (1:1) and kept at −20 °C for no longer than one month. A similar process was performed for ATF. TRF and ATF were then incubated overnight with fetal bovine serum at 37 °C before use. Then, both TRF and ATF were diluted in culture medium and used at a final concentration of 50 μg/mL [19]. N-Acetyl-cysteine (NAC) was purchased from Sigma-Aldrich (St Louis, USA). NAC was freshly prepared in culture medium to the desired final concentration. A dosage of 1.0 mg/mL NAC was used for subsequent experiments, as determined using a cell viability assay (Supplemental 1A in Supplementary Material available online at https://doi.org/10.1155/2017/3868305). A dosage of 25 μg/mL of TRF or ATF with 1.0 mg/mL NAC was used for the combined treatment of TRF and NAC (TRF + NAC) and the combination of ATF and NAC (ATF + NAC) (Supplemental 1B and C).

Cell Viability. The optimal concentrations of NAC, the combination of TRF and NAC (TRF + NAC), and the combination of ATF and NAC (ATF + NAC) were determined using a cell viability assay (Supplemental 1). The effects of H2O2 and antioxidants were also determined using the cell viability assay.

Assessment of Lipid Peroxidation. Lipid peroxidation in myoblasts was measured using the dye C11-BODIPY (581/591) (Molecular Probes, Eugene, OR, USA), a lipid peroxidation sensor that shifts from red to green fluorescence emission upon oxidation of the polyunsaturated butadienyl segment of the fluorophore. Briefly, myoblasts were incubated in 10 μM C11-BODIPY for 30 min. After that, cells were washed with PBS, trypsinized, and reconstituted in PBS, and the oxidized BODIPY was quantitated using a flow cytometer (CytoFLEX, Beckman Coulter, Pasadena, CA, USA) with the 525/40 bandpass channel, while reduced BODIPY was quantitated with the 585/42 bandpass channel. The percentage of cells positively labeled with either oxidized or reduced BODIPY was obtained, and the ratio of these percentages (oxidized/reduced BODIPY) was reported. For observation, the same staining protocol was applied, followed by visualization of the nuclei using Hoechst 33342 (Molecular Probes, Eugene, OR, USA). Cells were then observed under a fluorescence microscope (EVOS FL digital inverted microscope, Thermo Fisher Scientific, USA).

Cellular Uptake of Vitamin E. Vitamin E extraction was performed according to the protocol of Mazlan et al. [22]. After trypsinizing and counting the cells, 50 mg/mL butylated hydroxytoluene (BHT) was added to stop autooxidation. The hexane layer of the supernatant was then collected, vacuum-dried, and stored at −80 °C before analysis with HPLC. A total of 100 μL hexane was added before further dilution with hexane for HPLC analysis. The uptake of vitamin E was analyzed using an HPLC fluorescence detector (Ex/Em: 294 nm/330 nm) (RF-10A, Shimadzu, Japan). A TRF standard was used, and the concentrations of α-tocopherol (ATF), α-tocotrienol (ATT), β-tocotrienol (BTT), γ-tocotrienol (GTT), and δ-tocotrienol (DTT) uptake in cells were calculated in μg/mL per million cells.
Determination of Antioxidant Enzymes at the Transcriptional Level. Total RNA was extracted using TRI reagent and polyacryl carrier (Molecular Research Center Inc., Ohio, USA). For gene expression determination, quantitative real-time RT-PCR (qRT-PCR) was used. The expression of SOD1, SOD2, CAT, and GPX1 mRNA was quantitatively analyzed using the KAPA SYBR FAST One-Step qPCR kit (Kapa Biosystems, Boston, Massachusetts, USA). For RT-PCR, 400 nM of each primer was used, and the primer sequences are shown in Table 1 [21]. The master mix was prepared, and PCR reactions were carried out in a Bio-Rad iQ5 Cycler (Hercules, CA, USA). The program included cDNA synthesis for 5 min at 42 °C; predenaturation for 4 min at 95 °C; and PCR amplification for 40 cycles of 3 sec at 95 °C and 20 sec at 60 °C. These reactions were followed by a melt curve analysis of each targeted gene. The melt curve analysis of each pair of primers and agarose gel electrophoresis performed on the PCR products were used to determine the primer specificity (Supplemental 2). The expression level of each targeted gene was normalized to that of glyceraldehyde 3-phosphate dehydrogenase (GAPDH). The relative expression value (REV) was calculated using the $2^{-\Delta Ct}$ method of relative quantification and the following equation (a brief illustrative sketch of this calculation is given after the enzyme assays below):

$$ \mathrm{REV} = 2^{\,Ct_{\mathrm{GAPDH}} - Ct_{\mathrm{gene\ of\ interest}}} \quad (1) $$

Activities of Antioxidant Enzymes. The activities of three antioxidant enzymes, superoxide dismutase (Sod), catalase (Cat), and glutathione peroxidase (Gpx), were determined. These enzymes were extracted in PBS by sonication following the 24-hour treatments. The Sod assay was performed according to Beyer Jr. and Fridovich [23]. In brief, substrate solution was freshly prepared by mixing L-methionine, nitro blue tetrazolium (NBT), 1% Triton-X, and PBS, pH 7.8 (Sigma, St Louis, USA). Then, 20 μL of enzyme extract and 10 μL of 4 mg/100 mL riboflavin (Sigma, St Louis, USA) were added before incubation under 20-W Sylvania GroLux lamps in a cupboard for 7 min. Absorbance was measured using a UV/VIS spectrophotometer (Shimadzu, Kyoto, Japan) at 560 nm. Sod-specific activity was expressed in mU/mg of protein. The Cat assay was carried out using the method described by Aebi [24]. Enzyme extract was added to a quartz cuvette. The reaction was started by adding 30 mM H2O2 (Merck, Darmstadt, Germany), and absorbance was measured kinetically for 30 seconds using a UV/VIS spectrophotometer (Shimadzu, Kyoto, Japan) at 240 nm. Cat-specific activity was expressed in mU/mg of protein. The Gpx assay was carried out according to Paglia and Valentine [25]. Substrate solution was freshly prepared by mixing reduced glutathione, PBS at pH 7.0, sodium azide, the reduced form of nicotinamide adenine dinucleotide phosphate (NADPH), and glutathione reductase (1 U/mL) (Sigma, St Louis, USA). Enzyme extract was added to the substrate solution, and the reaction was started by adding H2O2 (Merck, Darmstadt, Germany). The conversion of NADPH to NADP+ was measured kinetically using a UV/VIS spectrophotometer (Shimadzu, Kyoto, Japan) at 340 nm for 5 min. Gpx-specific activity was expressed in mU/mg of protein. The total protein was extracted using lysis buffer, and its concentration was determined using a Bio-Rad Protein Assay (Hercules, CA, USA) at 595 nm with a microtiter plate reader (VersaMax, Molecular Devices, USA). The protein concentration was used to normalize the enzyme activity.
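Returning to the relative expression value defined earlier in this subsection, here is a minimal sketch of the calculation (the function name is illustrative):

```python
def relative_expression(ct_gapdh, ct_target):
    """REV = 2^(Ct_GAPDH - Ct_target): target gene expression
    normalized to the GAPDH reference."""
    return 2.0 ** (ct_gapdh - ct_target)

# a target crossing threshold 2 cycles after GAPDH has REV = 0.25
assert relative_expression(20.0, 22.0) == 0.25
```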
Measurement of Reduced to Oxidized Glutathione (GSH/GSSG) Ratio. The GSH/GSSG ratio was determined using a GSH/GSSG-Glo Assay Kit (Promega, Madison, WI), which is based on the firefly luciferase reaction. Following the manufacturer's instructions, both the GSH and GSSG luminescent reaction schemes were performed and measured using a microplate reader with an integration time of 1 s/well (Infinite 200, Tecan, USA). The ratio was calculated directly from Net RLU as stated in the technical manual.

Assessment of Apoptotic Events. Apoptosis profiles were determined using an Annexin V-FITC Apoptosis Detection Kit II (BD Pharmingen, CA, USA) according to the manufacturer's instructions. Two dyes were used, Annexin V-FITC, detected by FL1, and propidium iodide (PI), detected by FL3. The cells were analyzed with a FACS Calibur flow cytometer (Becton Dickinson, CA, USA). The percentage of cells negatively stained with both dyes (FITC−/PI−) was reported as viable cells, while the percentages of cells positively stained with Annexin V-FITC only (FITC+/PI−) or with both Annexin V-FITC and PI (FITC+/PI+) were reported as early and late apoptotic cells, respectively.

Statistical Analysis. Statistical analyses were performed using SPSS 22.0 software (IBM, NY, USA). All of the data are reported as the means ± standard deviation (SD) from at least three replicates. For all of the tests, p < 0.05 was considered statistically significant. To determine the significance between two treatment groups, comparisons were made using an independent t-test, while ANOVA was used to analyze multiple groups, followed by post hoc Tukey's HSD or LSD (if equal variance was assumed) and Dunnett's T3 (if equal variance was not assumed) tests.

Effects of Antioxidants on Senescent Myoblasts. To determine whether antioxidant properties alone are sufficient to ameliorate senescent myoblasts, in our study we tested a synthetic antioxidant, N-acetyl-cysteine (NAC), and its combination with TRF and ATF, in addition to treatment with TRF or ATF alone. Previously, the beneficial effects of TRF or ATF on senescent myoblasts were reported, where a concentration of 50 μg/mL TRF or ATF alone was able to increase cell viability, improve cellular morphology (more spindle-shaped cells were observed), and decrease the total number of SA-β-gal-positive cells during replicative senescence [19]. In the present study, we found that NAC was not toxic to cells up to the highest concentration used with a 24-hour incubation (Supplemental 1A). Thus, 1.0 mg/mL of NAC was used in the subsequent experiments. Different concentrations of TRF or ATF in combination with NAC were tested using a cell viability assay (Supplemental 1B and 1C), from which a concentration of 25 μg/mL TRF or ATF combined with 1.0 mg/mL NAC was used in the subsequent experiments. NAC alone and in combination with TRF (TRF + NAC) or ATF (ATF + NAC) significantly improved the cellular morphology of senescent myoblasts (cells became spindle-shaped) and reduced the percentage of cells positive for a senescence biomarker (SA-β-gal staining) (Figures 1(a) and 1(b)).

Effects of Antioxidants on ROS Generation during Replicative Senescence. To elucidate the effects of aging on the oxidative status of myoblasts, we expanded cells in culture until replicative senescence was achieved. The generation of ROS was observed at all stages of cell culture using carboxy-H2DCFDA (in green: H2O2, peroxynitrite, hydroxyl radicals, etc.)
and DHE (in orange: superoxide anion). ROS generation increased in presenescent and senescent myoblasts compared to young myoblasts (p < 0.05), and senescent myoblasts had the highest levels of ROS, suggesting that senescent cells experience more severe oxidant insults compared to young and presenescent cells (Figure 2(d)). Both TRF and ATF treatments reduced the amount of intracellular ROS in senescent myoblasts, as indicated by the decline in fluorescence intensity as well as the total number of positively stained cells (Figure 2(c)). The treatments appeared to diminish ROS in young and presenescent cells as well (Figures 2(a) and 2(b)). Quantitative analysis showed that both TRF and ATF significantly diminished intracellular H2O2 generation in presenescent and senescent myoblasts (p < 0.05) (Figure 2(d)). However, only ATF-treated senescent myoblasts had significantly lower intracellular superoxide anion levels compared to untreated senescent control cells (p < 0.05) (Figure 2(d)). Presenescent cells treated with TRF + NAC and ATF + NAC exhibited significantly lower ROS generation (p < 0.05) (Figure 2(d)). In senescent myoblasts, only TRF + NAC treatment caused a significant decrease in ROS generation (p < 0.05) (Figure 2(d)). A similar reduction in free radical generation was not observed in cells treated with NAC alone, indicating that the antioxidant effects observed were the result of TRF and ATF actions. The presence of ROS in all treatment groups in young, presenescent, and senescent myoblasts can be visualized in Figures 2(a), 2(b), and 2(c), respectively. To confirm the effects of ROS on normal myoblasts, we measured cell viability immediately after a short-term H2O2 insult. Our results showed decreased cell viability at all stages of aging after 45 min of incubation with 1 mM, 1.5 mM, 2 mM, and 2.5 mM H2O2 (p < 0.05) (Figure 2(e)).

Effects of Antioxidants on Lipid Peroxidation during Replicative Senescence. To determine the oxidative damage in myoblasts, lipid peroxidation levels were measured with C11-BODIPY (581/591), a sensitive fluorescent reporter for lipid peroxidation. The total number of cells that underwent a shift from red to green gradually increased from young to senescent cells (Figures 3(a)-3(c)). In senescent myoblasts, the proportion of cells with lipid peroxidation, indicated by the percentage of cells with oxidized BODIPY (in green), was increased, as represented by the right-shifted fluorescence intensity histogram (Figure 3(d)). Quantitative analysis showed that the ratio of cells with oxidized BODIPY to cells with reduced BODIPY (in red) (oxidized/reduced BODIPY ratio) was significantly increased in presenescent and senescent myoblasts compared to young myoblasts (p < 0.05) (Figure 3(e)).

Cellular Uptake of Vitamin E. To validate the cellular uptake of vitamin E, we determined the concentration of vitamin E isomers (ATF, ATT, BTT, GTT, and DTT) in all groups of cells (Figures 4(a) and 4(b)). Young, presenescent, and senescent TRF-treated myoblasts showed the presence of all 5 vitamin E isomers, while significantly fewer isomers were found in control cells (Figure 4(a)). A significantly higher concentration of ATF was observed in young, presenescent, and senescent ATF-treated myoblasts compared to untreated cells (p < 0.05) (Figure 4(a)). The NAC-treated cells contained the lowest concentrations of vitamin E isomers compared to the combination groups with TRF or ATF (Figure 4(b)).
TRF + NAC-treated cells contained high levels of all 5 isomers of vitamin E, while ATF + NAC cells only contained a high ATF concentration (Figure 4(b)).

TRF Treatment Modulates Antioxidant Capacity. The modulation of antioxidant capacity by TRF, ATF, NAC, TRF + NAC, and ATF + NAC during the replicative senescence of myoblasts was investigated by determining the mRNA expression of antioxidant enzymes (SOD1, SOD2, CAT, and GPX1) and the activities of antioxidant enzymes (Sod, Cat, and Gpx), as well as the GSH/GSSG ratio, in young, presenescent, and senescent myoblasts (Figures 5 and 6). Overall, TRF treatment was more effective in modulating antioxidant enzyme expression, especially at the transcriptional level, compared to the other antioxidants in senescent myoblasts. There was no significant change in SOD1 mRNA expression levels with aging or antioxidant treatment (Figure 5(a)). However, SOD2 mRNA expression in the presenescent control was significantly increased compared to the young control (p < 0.05) (Figure 5(b)). TRF treatment upregulated SOD2 mRNA expression in young, presenescent, and senescent myoblasts compared to their corresponding untreated controls (p < 0.05) (Figure 5(b)). Furthermore, our results showed that only TRF + NAC modulated the expression of SOD2 mRNA, while NAC and ATF + NAC did not modulate any antioxidant enzymes at the transcriptional level (Figure 5(b)), suggesting that TRF exerted a modulatory effect at the transcriptional level. Moreover, Sod activity was increased in TRF-treated presenescent and senescent myoblasts compared to their untreated controls (p < 0.05) (Figure 6(a)). In senescent myoblasts, treatment with TRF + NAC significantly increased the activity of Sod compared to the senescent control (p < 0.05). Similar increases in Sod activity were observed in young myoblasts treated with ATF + NAC, whereas, in presenescent myoblasts, treatment with NAC, TRF + NAC, and ATF + NAC decreased Sod activity. In the presenescent control, CAT mRNA expression was significantly higher than in the young control (p < 0.05) (Figure 5(c)). In contrast, Cat activity in the presenescent control was lower than in the young control (p < 0.05) (Figure 6(b)). Treatment with either TRF or ATF upregulated CAT mRNA expression in senescent myoblasts compared to untreated controls (p < 0.05) (Figure 5(c)), while only TRF increased Cat activity in senescent myoblasts and ATF increased Cat activity in presenescent myoblasts (Figure 6(b)). TRF + NAC significantly increased CAT mRNA expression in young myoblasts in comparison to their untreated controls (Figure 5(c)). Treatment with NAC, TRF + NAC, and ATF + NAC increased Cat activity in presenescent myoblasts (p < 0.05), while, in young myoblasts, Cat activity was significantly increased with TRF + NAC and ATF + NAC treatment (Figure 6(b)). However, only ATF + NAC-treated senescent myoblasts demonstrated increased Cat activity. No significant changes were observed in GPX1 mRNA expression with aging in myoblasts, even though the expression of this gene was modulated by TRF, as evidenced by the upregulation of its mRNA expression in young, presenescent, and senescent cells (p < 0.05) (Figure 5(d)). However, a similar pattern of expression was not observed for the enzyme activity of Gpx.
Gpx activity was significantly higher in the senescent control compared to both the young and presenescent controls (p < 0.05) (Figure 6(c)), while both TRF- and ATF-treated senescent myoblasts exhibited lower enzyme activities compared to the senescent control (p < 0.05) (Figure 6(c)). Gpx activity levels were increased in TRF + NAC-treated presenescent myoblasts but decreased in ATF + NAC-treated presenescent myoblasts (Figure 6(c)). NAC-, TRF + NAC-, and ATF + NAC-treated senescent myoblasts exhibited significantly lower Gpx activity compared to the untreated senescent control. A similar decrease in Gpx activity was observed in young myoblasts treated with NAC alone. The GSH/GSSG ratio is commonly used as an indicator of antioxidant capacity. However, in our study, there was no significant change observed in the GSH/GSSG ratio between the young, presenescent, and senescent controls (Figure 6(d)). In young and presenescent myoblasts, TRF treatment reduced the ratio significantly compared to untreated controls (p < 0.05) (Figure 6(d)). Determination of the GSH/GSSG ratio in myoblasts showed that treatment with NAC alone in young, presenescent, and senescent myoblasts resulted in a significantly increased GSH/GSSG ratio (p < 0.05) (Figure 6(d)). A similar increase in the GSH/GSSG ratio was observed in presenescent cells treated with TRF + NAC.

Effects of Antioxidants on Cell Viability and Apoptosis Profile. To evaluate the beneficial effects of TRF, ATF, NAC, and their combinations on oxidative status and its final outcome, cell viability (Figure 7(a)) and apoptosis profiles were determined in treated myoblasts compared to their corresponding untreated controls. Apoptotic changes were determined by Annexin V-FITC staining (Figures 7(b) and 7(c)). The percentage of viable cells in the presenescent and senescent controls was significantly lower than in the young control (p < 0.05). Treatment with TRF, ATF, TRF + NAC, or ATF + NAC was able to increase the number of viable cells during replicative senescence, signifying the protective role of antioxidant treatment against cell death. A similar increase in cell viability was observed in presenescent myoblasts treated with NAC, TRF + NAC, or ATF + NAC (p < 0.05). Increases in early and late apoptotic events were observed in senescent myoblasts compared to young and presenescent cells (p < 0.05) (Figure 7(d)). Treatment with TRF, ATF, NAC, TRF + NAC, or ATF + NAC significantly reduced the number of early apoptotic cells in senescent myoblasts compared to untreated controls (p < 0.05). Only TRF, TRF + NAC, and ATF + NAC treatment significantly decreased the number of cells undergoing late apoptotic events (p < 0.05). Early and late apoptotic events were also increased in presenescent myoblasts and were reduced by TRF + NAC or ATF + NAC treatment (p < 0.05).

Discussion

Vitamin E is well known for its free radical-scavenging capacity, which plays an important role in antioxidant defense mechanisms. The lesser-known form of vitamin E, the tocotrienols, has been reported to possess greater antioxidant effects and better membrane penetration ability compared to tocopherols [26,27]. These features contribute to their efficient uptake by targeted cells, in addition to exhibiting higher scavenging power due to active recycling [26,28].
The present study has demonstrated the effectiveness of TRF, a broad mixture of vitamin E, in combating oxidative stress and enhancing antioxidant defense mechanisms in senescent myoblasts, resulting in the improvement of senescence-associated oxidative stress, as evidenced by decreased programmed cell death and increased cell viability. In brief, increased oxidative stress as a result of redox imbalance during aging leads to increased susceptibility of satellite cells to apoptosis and affects muscle regeneration [6,8,29,30]. The findings of this study showed that ROS generation was significantly increased in senescent myoblasts, resulting in higher levels of oxidative damage as indicated by elevated lipid peroxidation.

Figure 6: Effects of antioxidant treatments on antioxidant enzyme activities and the GSH/GSSG ratio in young, presenescent, and senescent myoblasts. (a) Sod enzyme activity, (b) Cat enzyme activity, (c) Gpx enzyme activity, and (d) GSH/GSSG ratio. a denotes p < 0.05, significantly different compared to the young control; b, p < 0.05, significantly different compared to the presenescent control; c, p < 0.05, significantly different compared to the senescent control. Data are presented as the mean ± SD, n = 3.

Increased oxidative stress during replicative senescence has been reported to be similar to the conditions in satellite cells derived from aged individuals [8,10]. Moreover, elevated ROS in myoblasts endangers cellular endurance, as shown in our study. We found that the number of viable cells decreased with increasing exogenous H2O2 concentration [31]. The presence of free radicals in cells can be counteracted by antioxidant defense systems. However, antioxidant capacity decreases with advancing age, resulting in the accumulation of free radicals, which threaten cell viability [8]. As reported by Fulle and coworkers [8], the levels of the antioxidant enzymes Cat and glutathione transferase in satellite cells isolated from the elderly were drastically decreased compared to cells from young donors. SOD1 and GPX1 mRNA expression were not significantly different across the cell stages. However, SOD2 and CAT mRNA expression were upregulated in presenescent myoblasts, similar to Sod activity in a previous study [32]. Additionally, a decline in Cat activity was observed in presenescent myoblasts compared to young myoblasts. Because antioxidant enzymes are regulated in response to oxidative stress [33], we postulated that presenescent myoblasts were attempting to compensate for the decreased Cat levels by upregulating CAT mRNA expression to overcome the progressive increase in oxidative stress. A previous study reported that a compensatory mechanism in antioxidant defense systems maintains the integrity of muscle [34]. However, we found that senescent cells were less responsive to increased oxidative stress. The levels of SOD2 mRNA in senescent cells were lower compared to presenescent cells. The expression of SOD2 mRNA is critical, because lack of the SOD2 gene can cause mitochondrial damage and lifespan shortening in Drosophila [35]. On the other hand, Gpx activity was increased in senescent myoblasts compared to young and presenescent myoblasts. A previous study also reported enhanced Cat and Gpx activity during aging, which could be an adaptive mechanism to the elevated H2O2 levels [36], but the response may be inadequate to counteract the existing ROS. Our study also examined a nonenzymatic antioxidant in senescent myoblasts, glutathione [9].
The GSH/GSSG ratio in our study remained unchanged at all ages, which is similar to data reported previously, suggesting that there is no alteration in GSH membrane transport during aging [37]. Therefore, based on the intracellular ROS levels, lipid peroxidation, and enzymatic antioxidants, our results indicated that senescent myoblasts experience oxidant insults as a result of a less effective antioxidant defense system, which barely counteracted the elevated levels of cellular senescence-associated oxidative stress. The vitamin E concentration in cells increased with vitamin E treatment (TRF or ATF alone), particularly in TRF-treated myoblasts, which contained all five isomers that were tested. NAC treatment did not affect vitamin E uptake by the cells, as shown by higher levels of vitamin E in myoblasts treated with TRF + NAC and ATF + NAC compared to cells treated with NAC alone. In brief, vitamin E can act as a nonenzymatic antioxidant to combat oxidative stress [9]. Thus, given its free radical-scavenging power, both TRF-treated and ATF-treated senescent myoblasts demonstrated reductions in their intracellular ROS generation and lipid peroxidation levels. Findings from a previous animal study reported a similar reduction in lipid peroxidation levels after TRF and ATF supplementation [38], revealing the protective effects of vitamin E against oxidative damage. TRF has been reported to improve senescence-associated phenotypes in H2O2-induced myoblasts [18]. In another study, ATF showed protective effects in H2O2-induced myoblasts [39]. Therefore, both TRF and ATF could potentially be used to protect cells against reactive oxidants. Our data showed that TRF regulates the expression of antioxidant enzymes in young, presenescent, and senescent myoblasts. SOD2 mRNA expression was upregulated in all TRF-treated cells, while Sod enzyme activity was upregulated in TRF-treated presenescent and senescent myoblasts. SOD2 mRNA encodes one of the Sod isoforms (MnSod), which is located in the mitochondrial matrix [40]. Gianni et al. [41] suggested that superoxide-related mitochondrial stress is more apparent than cytosolic stress during aging. Previous studies also reported that increased mitochondrial DNA or RNA mutations were correlated with increasing age and abnormalities of aged muscle [42,43]. Hence, age-related oxidative stress is thought to lead to mitochondrial dysfunction, which eventually may lead to progressive loss of muscle mass and strength [44]. Because TRF regulated SOD2 mRNA rather than SOD1 mRNA, we hypothesized that TRF may act on the mitochondria and potentially protect this organelle during aging. The upregulation of SOD2 mRNA by TRF could be a compensatory mechanism to counteract elevated oxidative challenges in mitochondria during replicative senescence and prevent the accumulation of oxidative damage that can trigger cellular aberration. Evidence has demonstrated that γ-tocotrienol potentially protects renal proximal tubular cells from oxidant-induced mitochondrial dysfunction and cellular injury [45]. Increased Sod activity with TRF treatment further validated the ability of TRF to modulate the decomposition of superoxide anions to H2O2 [38]. However, neither Sod mRNA nor enzyme activity was modulated by ATF. In addition to the significant regulation of Sod activity, TRF treatment also upregulated CAT and GPX1 mRNA in young and senescent myoblasts. However, at the enzyme activity level, TRF increased Cat activity, whereas Gpx activity was reduced in senescent myoblasts.
Cat and Gpx work in parallel to remove H2O2 in cells [9]. Gpx also catalyzes the elimination of hydroperoxides originating from unsaturated fatty acids at the expense of reduced glutathione [46]. However, the protective role of the glutathione redox cycle is limited to low levels of oxidative stress; during severe oxidant insults, Cat becomes more substantial [46]. Thus, the increase in Cat at both the mRNA and enzyme activity levels revealed the effectiveness of TRF in enhancing the antioxidant defense system in senescent myoblasts. Because Gpx is involved in hydroperoxide degradation, decreased Gpx in TRF-treated senescent myoblasts may be attributable to the decreased lipid peroxidation in cells reported in this study. This explanation can also be applied to ATF-treated myoblasts, which exhibited decreased Gpx enzyme activity at both the presenescent and senescent stages. In addition, ATF treatment upregulated CAT mRNA in senescent cells, but at levels that were significantly lower than in TRF-treated cells. These findings indicate that TRF potentially improves the antioxidant defense system in senescent myoblasts, and this effect is better than that of ATF. NAC is a precursor of cysteine that can sustain the production of glutathione, an important antioxidant in cells [47]. In this study, NAC treatment successfully increased the GSH/GSSG ratio in all myoblasts. NAC can also scavenge ROS directly [47]. In a previous study, NAC showed protective effects against dystrophic muscle damage in the mdx mouse [48] and attenuated fatigue during prolonged exercise [49]. Our results showed that NAC ameliorated myoblast morphology and SA-β-gal staining during senescence, similar to the effects of TRF and ATF [19]. Similar effects were also observed in cells treated with the combinations TRF + NAC and ATF + NAC. However, intracellular ROS and lipid peroxidation levels were not modulated by NAC treatment alone during senescence. The rate constants of NAC in reactions with superoxide anion, H2O2, and peroxynitrite are comparatively low [47]; consequently, the influence of NAC on the dyes used in this study would be limited. In our study, lipid peroxidation levels were not improved by NAC, as supported by a previous study that measured malondialdehyde (MDA) as a product of lipid peroxidation in the mdx mouse [48]. As expected, combined treatment with either TRF + NAC or ATF + NAC lowered lipid peroxidation levels. On the other hand, intracellular ROS levels were only ameliorated by TRF + NAC treatment in senescent myoblasts, suggesting that TRF is better than ATF at scavenging ROS, even at a lower concentration compared to that used for TRF treatment alone. The effects of NAC on antioxidant enzymes were only observed for Gpx activity in senescent myoblasts, although a previous study showed that NAC was able to increase antioxidant enzymes in cocaine-induced hepatocytes [50]. We found that NAC modulated antioxidant enzyme activity in presenescent myoblasts, as shown by decreased Sod activity and increased Cat activity with NAC treatment. Treatment of senescent myoblasts with the combination TRF + NAC resulted in a stronger antioxidant defense response, involving the upregulation of SOD2 mRNA expression, an increase in Sod activity, and a decrease in Gpx activity, compared to treatment with NAC alone. These results were similar to those observed for TRF-treated senescent myoblasts.
Regarding ATF, the combination ATF + NAC exerted greater effects on antioxidant enzymes compared to treatment with ATF alone. Both Sod activity and Gpx activity decreased, whereas Cat activity increased, in presenescent myoblasts with ATF + NAC treatment. Increased Cat activity and decreased Gpx activity were also observed in ATF + NAC-treated senescent myoblasts. From these results, both TRF + NAC and ATF + NAC treatments were more effective than NAC alone in ameliorating the antioxidant defense system, which is similar to a previous study showing that a combination of NAC and vitamin E had a better effect on gentamicin-induced nephrotoxicity than NAC alone [51]. Therefore, the improved antioxidant defense mechanisms observed in this study may be attributable to the modulatory effects of vitamin E treatment. Satellite cells play very important roles in regenerating injured muscle fibers [1]. The availability of satellite cells declines with increasing age [52,53], but this may not be the sole reason underlying the impaired regenerative response to injury during aging. Decreased satellite cell proliferative capacity has been reported together with increased susceptibility to apoptosis, signifying that programmed cell death may also play a part in age-related impairment of muscle regeneration [29]. Under stressful stimuli, more satellite cells in old animals underwent apoptosis, thereby compromising skeletal muscle regeneration [29]. However, in a previous study, vitamin E was reported to protect against cell death induced by low doses of oxidants [39]. In another study, it was reported that antioxidant levels were interrelated with the regenerative capacity of muscle stem cells [30]. In short, viable senescent myoblasts were preserved, and the amount of cell death was diminished, with TRF treatment, which may be attributed to the improvement of oxidative status in senescent cells. This may indicate that TRF ameliorates regenerative capacity during aging in human myoblasts [19]. An adequate supply of vitamin E is essential for muscle health. Conversely, vitamin E deficiency can lead to poor muscle performance [12,14]. A study showed that antioxidant properties alone are insufficient to repair injured myoblasts [17]. Lipid-soluble vitamin E can easily diffuse into the hydrophobic membrane and act as a "stabilizer" for lipid membranes to facilitate ROS-scavenging activity [17]. Accordingly, the findings of our study showed that both TRF and ATF produced greater effects than NAC in preventing oxidative damage. However, ATF, the best-known representative of vitamin E, was not as effective as TRF in protecting against replicative senescence in myoblasts. Although ATF improved the oxidative damage to a similar degree as TRF, TRF was superior in enhancing antioxidant defense mechanisms. Previous findings showed that TRF-treated rats exhibited better physical performance and oxidative status than ATF-treated rats [38]. Unlike ATF, TRF is a broad mixture of vitamin E that contains all four isomers of tocotrienol as well as ATF; thus, it should be more potent in scavenging free radicals. The distinctive antioxidant properties of each isomer have been attributed mainly to their chemical structures [54]. For instance, the unsaturated isoprenoid side chain of tocotrienols accounts for a higher peroxyl radical-scavenging potential compared to tocopherols [54].
As a result, even at a lower concentration, TRF is able to improve antioxidant defense mechanisms and ameliorate the oxidative status of senescent myoblasts. A previous report suggested that, at low concentrations, γ-tocotrienol can protect cells against H2O2-induced apoptosis, while a higher concentration is required for ATF to produce a similar outcome [22]. Our results showed that the modulatory effect of TRF is distinguishable from that of the other treatments. Previous findings reported that tocotrienols can target a broad range of molecules that might play a role in aging or degenerative diseases [55]. Hence, we suggest that TRF is better than ATF in modulating antioxidant enzymes, particularly at the transcriptional level, to reestablish redox balance in senescent myoblasts. In summary, our study highlights the effects of TRF on oxidative status in myoblasts during replicative senescence. The results of our study showed increased oxidative stress in senescent myoblasts, with reduced antioxidant capacity and increased susceptibility towards programmed cell death, or apoptosis, distinguishing them from young myoblasts. Treatment of senescent myoblasts with TRF resulted in diminished ROS and lipid peroxidation, in addition to reinforcing the antioxidant defense system by augmenting antioxidant enzyme levels, which ultimately maintained the number of myoblasts. In conclusion, TRF is a useful antioxidant that can counteract oxidative stress and improve cellular survival during replicative senescence of myoblasts; thereby, it can potentially be used to ameliorate impaired muscle regeneration in conditions such as sarcopenia, although further experiments should be carried out.
2018-04-03T00:22:54.812Z
2017-01-24T00:00:00.000
{ "year": 2017, "sha1": "73dd6b119ac1d7b43aac1d7ed362978698865f17", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/omcl/2017/3868305.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b87a1853959c21c6c01cd55b8a3baae2f8c9dbac", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
50775887
pes2o/s2orc
v3-fos-license
Rational quantum secret sharing The traditional quantum secret sharing does not succeed in the presence of rational participants. A rational participant's motivation is to maximize his utility, and he will try to get the secret alone. Therefore, in the reconstruction, no rational participant will send his share to others. To tackle this problem, we propose a rational quantum secret sharing scheme in this paper. We adopt game theory to analyze the behavior of rational participants and design a protocol to prevent them from deviating from the protocol. As proved, the rational participants gain their maximal utilities when they perform the protocol faithfully, and the Nash equilibrium of the protocol is achieved. Compared to traditional quantum secret sharing schemes, our scheme is fairer and more robust in practice. "Secret sharing" (SS) was first proposed by Shamir 1, suggesting a secure way to distribute information (a secret) to a set of participants. SS splits the secret into several parts and distributes them to different participants, so that only qualified participants can recover the original secret. In Shamir's scheme, a participant is classified as "good" or "bad". A good participant always performs the protocol faithfully, while a bad one tries his best to break it. However, this kind of classification may not reflect practical situations. Indeed, a participant can be neither good nor bad, but rational, always trying to maximize his utility. Hence, each rational participant aims to get the secret but, at the same time, to prevent others from getting it. The involvement of rational participants leads to a major problem in SS. In SS, a participant can recover the secret alone even without sending his share to others, if others have sent out theirs. On the other hand, if participants do not send their shares, no one can recover the secret. Therefore, from the viewpoint of a rational participant, not sending his share weakly dominates sending his share. This implies that the Nash equilibrium corresponds to the case in which nobody sends his share to others, resulting in a failure of Shamir's scheme in the presence of rational participants. To mitigate this problem, Halpern et al. 2 introduced the concept of "rational secret sharing" (RSS), and it has become an active area of research in recent years [3][4][5]. In classical RSS, signed shares are used to prevent cheating by participants, while another approach is to use verifiable secret sharing 6. On the other hand, Hillery et al. 7 have proposed "quantum secret sharing" (QSS), which can be considered an extension of Shamir's SS into the quantum domain. In QSS, the secret is split, distributed, and reconstructed by quantum operations.
QSS provides stronger security based on quantum principles such as the uncertainty principle and the no-cloning theorem. Similar to SS schemes, the existing QSS schemes [8][9][10][11][12][13][14][15][16][17][18][19][20][21] do not consider the rational behavior of participants. However, it is natural for the last participant, if he is rational, to generate the secret and quit with it alone. Thus, rational participants in QSS would always prefer not to provide their shares, making the conventional QSS schemes fail. It should be emphasized that the approaches suggested in RSS, such as signed shares or verifiable SS, are based on unproven assumptions such as the intractability of integer factorization. In the quantum domain, participants and adversaries are always assumed to have unbounded computational power. As a result, these methods are inadequate for the design of rational quantum secret sharing (RQSS). In addition, there are other technical hurdles to be overcome in the design of RQSS; for example, the existing quantum signature schemes 22,23 fail to deal with the entanglement among distributed shares, and a participant cannot generate copies of his share due to the no-cloning theorem. Designing a workable RQSS is challenging but valuable, and it is the main objective of this paper. In our proposal, the shared secret is assumed to be a d-dimensional quantum state. Some basic quantum operations, such as the quantum Fourier transform and the quantum controlled-not, are employed. Unlike our previous work 24 and other QSS schemes, the issue of "rationality" is the focus here. Game theory is introduced to analyze the rational behavior of participants, based on the concepts of rationality, fairness, and Nash equilibrium. As with most QSS schemes 7-12, the threshold structure of our scheme is an (n, n) structure, meaning that all n participants compose the only qualified set and any subset with fewer than n participants is a forbidden set.

Fairness. The fairness of RQSS is specified by conditions on the participants' utilities: letting $a_i^*$ be the suggested strategy of the protocol and $a_i$ be any other possible strategy for participant $P_i$, the conditions compare the utilities $P_i$ obtains under $a_i^*$ and under $a_i$.

Nash equilibrium. An RQSS protocol should achieve a Nash equilibrium, such that no participant has any incentive to deviate from the protocol. A suggested strategy is said to be in Nash equilibrium when there is no incentive for any participant to deviate from it, given that everyone else is following this strategy. Formally, for an arbitrary participant $P_i$ with utility function $u_i$, the suggested strategy profile $a^*$ is a Nash equilibrium if $u_i(a_i, a_{-i}^*) \le u_i(a_i^*, a_{-i}^*)$ for every alternative strategy $a_i$.

The d-dimensional quantum Fourier transform acts on a basis state $|j\rangle$ as $F|j\rangle = \frac{1}{\sqrt{d}}\sum_{k=0}^{d-1} e^{2\pi i jk/d}\,|k\rangle$. The corresponding quantum inverse Fourier transform is then given by $F^{-1}|j\rangle = \frac{1}{\sqrt{d}}\sum_{k=0}^{d-1} e^{-2\pi i jk/d}\,|k\rangle$. Consider two d-dimensional quantum states $|j_1\rangle$ and $|j_2\rangle$; the d-dimensional quantum-controlled-not operation is expressed by $\mathrm{CNOT}(|j_1\rangle|j_2\rangle) = |j_1\rangle|j_1 + j_2\rangle$, where $|j_1\rangle$ and $|j_2\rangle$ are referred to as the control particle and target particle, respectively, and "+" is defined as the adder modulo d hereinafter.
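To make the two primitives above concrete, here is a minimal numpy sketch that constructs the d-dimensional Fourier transform and the modulo-d controlled-not as matrices, with d = 3 matching the worked example later in the paper; this is an illustration only, not part of the protocol specification.

```python
import numpy as np

def qft(d):
    """d-dimensional quantum Fourier transform:
    F[k, j] = exp(2*pi*i*j*k/d) / sqrt(d)."""
    j, k = np.meshgrid(np.arange(d), np.arange(d))
    return np.exp(2j * np.pi * j * k / d) / np.sqrt(d)

def cnot(d):
    """Modulo-d controlled-not: |j1>|j2> -> |j1>|j1 + j2 mod d>,
    acting on the d*d-dimensional two-particle space."""
    u = np.zeros((d * d, d * d))
    for j1 in range(d):
        for j2 in range(d):
            u[j1 * d + (j1 + j2) % d, j1 * d + j2] = 1.0
    return u

d = 3
F = qft(d)
assert np.allclose(F @ F.conj().T, np.eye(d))          # unitary: F^-1 = F†
assert np.allclose(cnot(d) @ cnot(d).T, np.eye(d * d)) # permutation, unitary
```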
Design of RQSS
In RQSS, as in other SS or RSS schemes, there is a dealer who would like to distribute a secret to a set of participants. However, RQSS has some distinct features.
Random structure. The dealer needs multiple rounds to distribute the shared secret to the participants. In each round, the dealer distributes the real secret (the shared secret) with probability γ; otherwise, a test secret is sent. Participants can only know whether the reconstructed secret is a real one after the dealer reveals the truth.
Post verification. Dishonest participants should be punished, so the behavior of participants must be verified. However, the methods employed in classical RSS are inadequate for RQSS because of the unbounded computational power assumed in the quantum domain; quantum operations are applied instead.
Generation of multiple identical quantum states. In QSS, when the share is an unknown state, a participant cannot generate copies of his share because of the no-cloning theorem. If each participant keeps only one share, only one copy of the secret can be reconstructed, and the participant who holds the reconstructed secret gains a privilege, breaking the fairness of RQSS. To resolve this problem, the dealer has to generate multiple identical states and distribute them to the participants, allowing all the qualified participants to obtain the reconstructed secret.
Parameters setting based on Nash equilibrium. The parameters of RQSS should be set so that each honest participant gains his maximal utility under the suggested strategy. The Nash equilibrium is to be achieved to ensure that the protocol can be performed robustly in the presence of rational participants.
The details of the RQSS protocol are given as follows. For the sake of clarity, the dealer and the n rational participants are referred to as Alice and {Bob_1, Bob_2, …, Bob_n}, respectively. The shared secret is assumed to be a d-dimensional quantum state
$$|\phi\rangle = \sum_{j=0}^{d-1} \alpha_j\, |j\rangle , \qquad \sum_{j=0}^{d-1} |\alpha_j|^2 = 1 .$$
To share ϕ among the n rational participants, Alice performs the following procedures in each round.
(1) A specific coin with probability γ of landing on "1" (heads) is tossed. If it is "1", Alice generates n identical real quantum states; otherwise, she generates n identical test quantum states. For convenience, each of these n quantum states is denoted ϕ.
(2) For each ϕ, the quantum inverse Fourier transform is applied to obtain ϕ′. For each ϕ′, Alice generates (n − 1) single particles p_i = |d − 1⟩, i = 1, 2, …, (n − 1), and then performs the d-dimensional quantum-controlled-not operation on ϕ′ and each p_i in turn, with ϕ′ as the control particle and p_i as the target particle. This results in an n-particle entangled state Φ. Finally, Alice performs the quantum Fourier transform on each particle of Φ to obtain Φ′.
(3) For every Φ′, Alice sends one particle of Φ′ to each participant sequentially. The transmission of particles is protected by decoy particles, randomly selected from two bases, namely the Z-basis and the X-basis (the latter obtained from the former through the Fourier transform):
$$Z = \{\,|0\rangle, |1\rangle, \ldots, |d-1\rangle\,\}, \qquad X = \{\,F|0\rangle, F|1\rangle, \ldots, F|d-1\rangle\,\}.$$
(4) For reconstruction, all the particles of one Φ′ are sent to one of the participants, so that eventually everyone gets one Φ′. The participant performs the quantum inverse Fourier transform on every particle of his own Φ′ and gets back Φ. By performing d-dimensional quantum-controlled-not operations, the quantum state ϕ′ and the (n − 1) single particles {p_1, p_2, …, p_{n−1}} can be separated from Φ. The original state ϕ is then obtained by applying the quantum Fourier transform to ϕ′. For an arbitrary participant Bob_i, the particles {p_1, p_2, …, p_{n−1}} come from the other (n − 1) participants. If those participants performed the protocol faithfully, the obtained particles {p_1, p_2, …, p_{n−1}} should all be in the state |d − 1⟩. Therefore, by measuring these particles, Bob_i can deduce whether the corresponding participant has sent the correct particle or not.
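The following numerical sketch (ours; the parameters d = 3, n = 3 and every function name are illustrative assumptions, not taken from the paper) runs one distribute-and-reconstruct cycle of steps (2) and (4) on a random qutrit secret and checks that the secret is recovered and that honest ancillas verify as |d − 1⟩. The disentangling step applies the adder d − 1 times, which realizes the modular subtraction the reconstruction needs.

```python
import numpy as np
from functools import reduce

d, n = 3, 3  # illustrative: a qutrit secret shared among three participants

def qft(d):
    w = np.exp(2j * np.pi / d)
    return np.array([[w ** (j * k) for j in range(d)] for k in range(d)]) / np.sqrt(d)

def on_particle(U, k):
    """Apply single-qudit operator U to particle k of an n-qudit register."""
    ops = [U if i == k else np.eye(d) for i in range(n)]
    return reduce(np.kron, ops)

def controlled_add(ctrl, tgt):
    """Generalized CNOT: adds the ctrl digit onto the tgt digit, modulo d."""
    dim = d ** n
    U = np.zeros((dim, dim))
    for col in range(dim):
        digits = list(np.unravel_index(col, (d,) * n))
        digits[tgt] = (digits[tgt] + digits[ctrl]) % d
        U[np.ravel_multi_index(digits, (d,) * n), col] = 1.0
    return U

F = qft(d)
rng = np.random.default_rng(0)
alpha = rng.normal(size=d) + 1j * rng.normal(size=d)
alpha /= np.linalg.norm(alpha)          # the secret state phi

# --- Alice: inverse QFT, attach |d-1> ancillas, entangle, QFT each particle
anc = np.eye(d)[d - 1]
Phi = reduce(np.kron, [F.conj().T @ alpha] + [anc] * (n - 1))
for t in range(1, n):
    Phi = controlled_add(0, t) @ Phi
for k in range(n):
    Phi = on_particle(F, k) @ Phi       # this Phi' is what gets distributed

# --- a participant holding all n particles of one Phi': undo the operations
for k in range(n):
    Phi = on_particle(F.conj().T, k) @ Phi
for t in range(1, n):
    # applying the adder d-1 times realizes the modular subtraction
    Phi = np.linalg.matrix_power(controlled_add(0, t), d - 1) @ Phi

# ancillas should now verify as |d-1>, and particle 0 carries phi' = F^dagger phi
block = Phi.reshape(d, d ** (n - 1))[:, d ** (n - 1) - 1]
print(np.allclose(F @ block, alpha))    # True: the secret is recovered intact
```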
However, it should also be remarked that there is still a probability of 1/d that Bob_i obtains the correct measurement result even if the received particle is incorrect.
(5) If a participant Bob_i finds that he did not receive any particle from Bob_j, or that the received particle is not a correct one, Bob_i publicizes the cheating behavior of Bob_j and the other participants terminate the protocol. When the protocol is terminated by any participant, the dealer does not continue with the next round; as a result, participants cannot obtain the secret if the current round is not the real one.
(6) If no cheating behavior is found, the dealer reveals whether the secret in this round is the real secret or a test one. If it is the real secret, the protocol is over; otherwise, the dealer starts the next round.
Example
To better illustrate the RQSS protocol, we consider a simple case in which a dealer shares a 3-dimensional quantum state among three participants. In each round, Alice first decides whether to distribute the real secret or a test secret ϕ, according to the result of the coin toss. Three identical quantum states are then generated, each specified by ϕ = α_0|0⟩ + α_1|1⟩ + α_2|2⟩. By performing the quantum inverse Fourier transform, entangling the result with two single particles |2⟩ through quantum-controlled-not operations, and then applying the quantum Fourier transform to each particle, Alice obtains three identical entangled states; for simplicity, each of them is denoted Φ′. The three particles of each Φ′ are sent to Bob_1, Bob_2 and Bob_3, respectively, so each participant holds three particles, one from each of the three entangled states. In the reconstruction, the three particles of one Φ′ are sent to one participant, and every participant gets one Φ′. When Φ′ is available, Bob_i recovers the original state ϕ by following Step (4) of the procedures described in the last section. First, he gets back Φ by applying the quantum inverse Fourier transform to every particle of his Φ′. Then, two quantum-controlled-not operations are performed to separate the state ϕ′ and the two single particles {p_1, p_2} from Φ. Finally, the original state ϕ = α_0|0⟩ + α_1|1⟩ + α_2|2⟩ is obtained by applying the quantum Fourier transform to ϕ′. Bob_i can verify the honesty of the other two participants by measuring {p_1, p_2}: if they sent him the correct particles, {p_1, p_2} should both be in the state |2⟩. If all the participants are honest, Alice reveals whether the secret is a real one. If it is the real secret, the protocol is over and all the participants obtain the secret; otherwise, Alice starts the next round.
Security analysis
In this section, the security of the proposed RQSS protocol is analyzed. Given that the initial state Alice generated is $|\phi\rangle = \sum_j \alpha_j |j\rangle$, the inverse Fourier transform yields $|\phi'\rangle = \frac{1}{\sqrt{d}}\sum_k \sum_j \alpha_j\, \omega^{-jk} |k\rangle$, with $\omega = e^{2\pi i/d}$. Then, with the (n − 1) quantum-controlled-not operations, Φ is obtained as
$$|\Phi\rangle = \frac{1}{\sqrt{d}} \sum_{k=0}^{d-1} \sum_{j=0}^{d-1} \alpha_j\, \omega^{-jk}\, |k\rangle\, |k+d-1\rangle^{\otimes (n-1)} .$$
Finally, after the n quantum Fourier transforms, one obtains
$$|\Phi'\rangle = \frac{1}{\sqrt{d^{\,n+1}}} \sum_{k,\,j} \ \sum_{k_1,\ldots,k_n} \alpha_j\, \omega^{-jk + k k_1 + (k+d-1)(k_2+\cdots+k_n)}\, |k_1 k_2 \ldots k_n\rangle .$$
Confidentiality. In the sum over k above, only the terms whose coefficient α_j satisfies $j = \sum_{i=1}^{n} k_i \pmod d$ are retained, while all the other terms vanish. Therefore, the quantum state Φ′ can be simplified (up to the phase convention adopted for the Fourier transform) as
$$|\Phi'\rangle = \frac{1}{\sqrt{d^{\,n-1}}} \sum_{k_1,\ldots,k_n} \omega^{-(k_2+\cdots+k_n)}\, \alpha_{(k_1+\cdots+k_n) \bmod d}\, |k_1 k_2 \ldots k_n\rangle , \qquad (11)$$
implying that the measurement statistics of any strict subset of the shares are independent of the quantum secret. Participants therefore cannot obtain any information about the quantum secret from their own shares, and our scheme meets the confidentiality requirement 25.
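Starting from the reconstructed form (11) of Φ′ above, a short calculation (ours, not from the paper) makes the uniformity claim explicit. The probability of the joint computational-basis outcome (k_1, …, k_n) is
$$\Pr[k_1,\ldots,k_n] = \frac{\big|\alpha_{(k_1+\cdots+k_n) \bmod d}\big|^2}{d^{\,n-1}} ,$$
and marginalizing over any single particle, say k_n, sweeps the index of α through all residues exactly once:
$$\Pr[k_1,\ldots,k_{n-1}] = \sum_{k_n=0}^{d-1} \frac{\big|\alpha_{(k_1+\cdots+k_n) \bmod d}\big|^2}{d^{\,n-1}} = \frac{1}{d^{\,n-1}} \sum_{j=0}^{d-1} |\alpha_j|^2 = \frac{1}{d^{\,n-1}} .$$
Hence any strict subset of the shares yields uniform measurement outcomes, carrying no information about the secret amplitudes.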
Security for outside eavesdropping. In our scheme, the transmission of particles is protected by decoy particles. The decoy particles are randomly selected from the Z-basis or the X-basis, and the secret particle is randomly inserted among them. Since an attacker knows neither the positions nor the bases of the decoy particles, an attempt to steal information by measuring the secret particle will most likely involve measuring decoy particles in a random basis, thereby introducing errors into them. The probability of selecting a wrong basis for a decoy particle is 1/2, and a wrong-basis measurement yields a wrong value with probability (d − 1)/d. Therefore, the error rate introduced on one decoy particle by eavesdropping is (d − 1)/2d, and with l decoy particles the detection probability is
$$P_{\mathrm{detect}} = 1 - \left(1 - \frac{d-1}{2d}\right)^{l} ,$$
which is close to 1 when l is sufficiently large. Besides direct eavesdropping, another well-known outsider attack is "entangle-and-measure": the attacker entangles an ancillary particle with the secret particle and then measures the ancilla to steal information. According to the results in 26, this attack can also be detected through the errors it introduces on the decoy particles.
Security for dishonest participants. In our scheme, the secret state ϕ is hidden in the entangled state Φ′ as given in (11). As described in Section 5.1, Φ′ is a symmetrical superposition state, and a measurement of each of its particles yields any outcome in {|0⟩, |1⟩, …, |d − 1⟩} with the same probability. Even if (n − 1) participants work together, it is still impossible for them to obtain the initial secret state. Without loss of generality, assume {Bob_2, Bob_3, …, Bob_n} measure their particles and obtain the results {r_2, r_3, …, r_n}. Bob_1's particle then collapses, up to a global phase, to
$$|\psi_1\rangle = \sum_{k_1=0}^{d-1} \alpha_{(k_1 + r_2 + \cdots + r_n) \bmod d}\, |k_1\rangle . \qquad (12)$$
From (12), we can see that {Bob_2, Bob_3, …, Bob_n} still cannot obtain the secret state ϕ without Bob_1. This confirms that the secret state can be recovered only if all participants cooperate, so a collusion attack by dishonest participants cannot succeed.
Nash equilibrium
In our scheme, a rational participant has four possible strategies when performing the protocol.
• a_1: send the correct particles to the other participants;
• a_2: remain silent, i.e., send no particles to the other participants;
• a_3: send forged particles to the other participants;
• a_4: measure the particles and then send them on, i.e., destroy the shared state.
A participant may obtain one of the following four utilities.
• U_1: he gets the secret but the other participants do not;
• U_2: he gets the secret and so do the other participants;
• U_3: he does not get the secret and neither do the other participants;
• U_4: he does not get the secret but the other participants do.
For a rational participant, it is obvious that U_1 > U_2 > U_3 > U_4. We now analyze the utility of an arbitrary participant, Bob_i, under the different strategies in a round j, assuming the other participants follow the suggested strategy.
(1) Perform strategy a_1: every round completes, and when the real round arrives all the participants obtain the secret, so his utility is U_2.
(2) Perform strategy a_2: if the secret in this round is the real secret (probability γ), his utility is U_1; otherwise the protocol is terminated and his utility is U_3. The utility under a_2 is therefore γU_1 + (1 − γ)U_3.
(3) Perform strategy a_3: if the secret in this round is the real secret (probability γ), his utility is U_1; otherwise, his utility is αU_3 + (1 − α)U_2, where α is the probability that his cheating behavior is detected by the others. As explained in Section 3, a forged particle passes the verification measurement with probability 1/d, so in our scheme α = (d − 1)/d. Therefore, the utility under a_3 is γU_1 + (1 − γ)[αU_3 + (1 − α)U_2].
(4) Perform strategy a_4: this case is similar to case (3), and the utility again equals γU_1 + (1 − γ)[αU_3 + (1 − α)U_2].
The utility of Bob_i under the different strategies is summarized as
$$u_i(a_1) = U_2, \qquad u_i(a_2) = \gamma U_1 + (1-\gamma) U_3, \qquad u_i(a_3) = u_i(a_4) = \gamma U_1 + (1-\gamma)\big[\alpha U_3 + (1-\alpha) U_2\big].$$
Since U_2 > U_3, u_i(a_2) is always less than u_i(a_3) and u_i(a_4). Moreover, u_i(a_1) exceeds u_i(a_3) whenever
$$\gamma \ <\ \frac{\alpha\,(U_2 - U_3)}{(U_1 - U_2) + \alpha\,(U_2 - U_3)} ,$$
in which case the rational participant Bob_i always chooses a_1 as his strategy, since u_i(a_1) > u_i(a_3) = u_i(a_4) > u_i(a_2). Therefore, if the parameter γ is set to satisfy this inequality, every rational participant chooses a_1 as his optimal strategy, the Nash equilibrium is achieved, and the protocol is performed faithfully.
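The threshold on γ quoted above follows from a short rearrangement; we sketch the derivation here (it is ours, reconstructing steps lost in extraction, under the utility ordering U_1 > U_2 > U_3 > U_4). Faithful behavior is optimal when
$$u_i(a_1) > u_i(a_3) \iff U_2 > \gamma U_1 + (1-\gamma)\big[\alpha U_3 + (1-\alpha) U_2\big].$$
Collecting the U_2 terms on the left gives
$$\alpha (1-\gamma)(U_2 - U_3) > \gamma (U_1 - U_2) ,$$
and solving for γ yields
$$\gamma < \frac{\alpha (U_2 - U_3)}{(U_1 - U_2) + \alpha (U_2 - U_3)} .$$
With α = (d − 1)/d, a larger dimension d tightens the cheat detection and therefore permits a larger γ, i.e., fewer test rounds.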
Comparison
In our scheme, the shared secret is assumed to be a d-dimensional quantum state, and quantum operations such as the quantum Fourier transform and the quantum-controlled-not are employed. Although similar assumptions and operations are used in our previous work 24, the design and focus of this paper are entirely different. The main feature of our scheme is the management of "rationality"; the scheme in 24 is a traditional QSS scheme that does not consider it. In particular, we introduce game theory into QSS to analyze the rational behavior of participants, based on explicit definitions of rationality, fairness and Nash equilibrium. The proposed RQSS possesses the distinct features discussed in Section 3, including the random round structure, post verification based on quantum operations, and parameter setting based on the Nash equilibrium. Furthermore, we analyze the strategies and utilities of a rational participant and derive the conditions under which rational participants follow the protocol faithfully, thereby achieving the Nash equilibrium. None of these contributions appears in 24. The protocol of our RQSS also differs from that suggested in 24. In our scheme, the dealer applies the quantum inverse Fourier transform to the shared state, and then performs quantum-controlled-not operations and the quantum Fourier transform to hide the shared state in an entangled state. For reconstruction, participants perform the reverse operations, including the quantum inverse Fourier transform, the quantum-controlled-not and the quantum Fourier transform, to obtain both the shared state and the verification states. In contrast, participants under the scheme in 24 only perform single-particle measurements and unitary operations to recover the shared state. Such a reconstruction process is less suitable here, as participants cannot obtain the verification states needed to verify the faithfulness of the other participants.
Conclusion
In this paper, we have proposed a RQSS scheme to manage rational participants who try to maximize their utilities. Using quantum operations, the dealer encodes the secret state into an entangled state and distributes it to the participants, while the participants use the reverse operations to recover the secret state. The behavior of the rational participant is analyzed with game theory, and suitable mechanisms are proposed to motivate rational participants to perform the protocol faithfully. As proved, our scheme is fair and secure, and the suggested strategy achieves the Nash equilibrium. Compared to the existing QSS schemes, our scheme is more practical in the presence of rational participants. The entangled state is indispensable in our scheme, and, compared with single-qubit states, multi-particle entangled states are harder to prepare with current technologies.
However, as discussed in [27][28][29][30][31][32], practical ways to generate the entangled state are available. With the rapid development of quantum technology, generating entangled states should become easier in the future, making our scheme more practical.
2018-07-25T13:31:37.287Z
2018-07-24T00:00:00.000
{ "year": 2018, "sha1": "efea94498c3f72a2df0eee847b297168d17cd47d", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-29051-z.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e533b7a45f134b1b54a1366f2004a96e756d972a", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
56087250
pes2o/s2orc
v3-fos-license
CORPORATE GOVERNANCE AND PERFORMANCE: EMPIRICAL EVIDENCE FROM THE ITALIAN AIRPORT INDUSTRY
This paper empirically examines the degree of maturity of the corporate governance of Italian airport companies, about twenty years after the beginning of the reform aimed at privatizing the industry. Two corporate governance issues are investigated: i) the development of different corporate governance models by different categories of airports; ii) the relationship between corporate governance models and the technical and financial performance of Italian airport companies. To this end, two indexes have been developed to capture two corporate governance features, decision-making power concentration and alignment to best practices. The correlation of the corporate governance indexes with efficiency, measured using data envelopment analysis (DEA), is then tested on a significant sample of Italian airports.
Introduction
In the last thirty years airports have shifted from simple providers of transport facilities to complex economic activities fully exposed to competition, of primary importance for national and local development (Fleury, 1999). The constant evolution of airports into multi-business firms capable of attracting massive volumes of investment and stimulating a strong demand for jobs, goods and services went hand in hand with the gradual liberalization of the air transport industry. The first step towards free market competition dates back to the Airline Deregulation Act (ADA) promulgated in the United States of America in 1978, and the process continued almost ten years later in Europe with the set of laws enacted by the Council of the European Union in 1987, 1989 and 1992 (Valdani and Jarach, 1997). Today airports, like other firms involved in deregulation processes, need to adopt a managerial logic and to develop the right managerial tools to cope with the challenges imposed by the global market. At the same time, in several European countries some difficulties remain which make it hard for these multi-product firms to adopt the right business model to succeed in the market. One of these is the concentration of ownership which typically follows deregulation processes. Another important issue is the hard-to-remove historical public presence, which can affect both the governance structure and the strategic management of the companies. This fact, often indicated among the main obstacles to recovering the efficiency of the industry, requires actions aimed at fostering a careful and balanced relationship between public and private powers.
This paper explores the degree of maturity of the corporate governance systems reached by Italian airports, considering their delay in carrying out the reform aimed at the gradual liberalization of the industry, which started in the early Nineties (Sebastiani, 2004). In Italy, the long history of state ownership in the industry makes the air transport system a privileged field of study. Notwithstanding the progressive pressures towards privatization, in fact, the State-entrepreneur in Italy seems to remain firmly present in the airport industry too (Cafferata, 2010).
After about twenty years it is first useful to understand whether airports belonging to different categories, such as those which are part of groups, those with private majority shareholders, those listed on a stock exchange, and those characterized by different traffic volumes, have developed different corporate governance models. Secondly, the study permits us to verify the crucial relationship between corporate governance and the financial and operational performance of Italian airports. In this paper two corporate governance issues are examined: i) the development of different corporate governance models by different categories of airports; ii) the relationship between corporate governance models and the technical and financial performance of Italian airport companies.
In particular, the analysis aims to evaluate the existence and the intensity of the link between two corporate governance features, namely decision-making power concentration and adherence to the best practices established by codes of conduct and the literature, and the level of efficiency of airports. For this purpose, two indexes considering both internal and external corporate governance mechanisms are developed. Internal mechanisms refer to the balance among the main groups of players inside the corporation, while external ones refer to the formal legal and regulatory obligations designed to address the entry, operations and exit of the firm (Babatunde and Olaniran, 2009). This also makes it possible to bridge the gap between theory and practice and to evaluate the diffusion of corporate governance best practices.
The above-mentioned link represents one of the most debated and vexed questions in the field of management: theory assumes that better corporate governance models should lead to more balanced and effective decision-making processes and thus to better performance (Cadbury, 1999; Melis, 2000), but empirical proof is still weak and contradictory (Hermes, 2005; Lai and Stachezzini, 2006; Gupta, 2009). Delimiting the research field to the Italian airport industry, while it restrains us from generalizing the results, permits us to overcome the one-size-fits-all approach to measuring corporate governance, that is, the pretension of identifying a unique framework to interpret very different contexts and strategic purposes (Arcot and Bruno, 2006).
Measuring corporate governance
In the last two decades, most academic research on corporate governance has been dominated by the agency theory approach (Ross, 1973; Fama, 1980; Dühnfort et al., 2008). In this view, the necessity of balancing power inside firms is primarily associated with the objective of reducing agency costs, caused by information asymmetry and by the differing interests of a principal and the principal's agent. The agent commits himself to supplying a service for the principal in exchange for compensation, and both players try to maximize their own utility (Macharzina, 1995). In this sense, firms, as suggested by contractual theory, can be seen as nexuses of contracts, formal and informal, through which the use of resources and certain activities are put in charge of an agent to reach the goals set by the principal (Fama and Jensen, 1993). Control mechanisms are needed to reduce the agency problems arising from the separation between ownership (the investors) and control (the management), because managers should act in the interest of the owners (Jensen and Meckling, 1976); in such a complex environment, however, this goal cannot be reached through contracts, which are incomplete (Coase, 1937; Alchian and Demsetz, 1972). The effectiveness of shareholders' control over management, in this sense, seems strictly related to the capability of the corporate structure to align managerial action with the owners' objectives. This attempt is extremely expensive for both parties, so the overall goal is to minimize the agency costs, which can be summarized as monitoring costs, bonding costs and residual loss (Meinhövel, 1999).
In recent years, nevertheless, contingency theory has strongly influenced the corporate governance literature. This approach moves from the basic idea that every firm operates in a unique context, so it should develop the corporate governance model best suited to its specific internal features and external influences (Huse, 2007; Daily et al., 2003; Viganò et al., 2011; Krivogorsky and Grudnitski, 2010). Many studies have also shown that external factors such as geographical position, tax system, industrial development and cultural background strongly affect ownership structure and, in turn, firm performance (Pedersen and Thompson, 1997).
Nonetheless, many authors have investigated the potential link between corporate governance and corporate performance (Thomsen and Pedersen, 2000; Frick and Lehmann, 2004). As noticed by Babatunde and Olaniran, the measure of performance matters for the analysis of corporate governance studies (Babatunde and Olaniran, 2009). Many studies have tried to quantify governance effectiveness using scores and seeking a correlation with firm value, profits, sales growth or capital expenditure as financial performance indicators (Bhagat and Black, 1999, 2002; Gompers et al., 2003; Dulewicz and Herbert, 2003; G.M.I., 2004; Brown and Caylor, 2006). Criticisms of this approach deal with the difficulty of identifying a plurality of explanatory standards for governance, with very few of them having real significance (Sonnenfeld, 2004).
A large part of the studies investigating corporate governance effectiveness focuses on its structural features, such as ownership concentration, board composition, the separation between the chief executive officer (CEO) and the chairman, and the independence of directors (Alonso-Bonis and de Andrés-Alonso, 2007; Zeitun, 2009). La Porta et al.
(1999) found that the concentration of ownership and control in the hands of large shareholders can serve as a mechanism for resolving collective action problems among shareholders. The literature on the effects of ownership concentration on performance is divided: some studies support the hypothesis that ownership concentration may improve performance (Stiglitz, 1985; Jensen, 1986; Shleifer and Vishny, 1986), while others state that ownership concentration may be an obstacle to exploiting growth opportunities, as well as discouraging innovation and management autonomy (Hill and Snell, 1988; Burkart et al., 1997).
However, Krivogorsky and Grudnitski (2010), in their study carried out on eight European countries, highlighted the effect of country-specific institutional constructs on the relationship between ownership concentration and performance. In this sense, the positive association between state ownership and listed firm performance in the Chinese context, shown by Le and Buck (2011), can be interpreted. Considering the field of study of the Italian airport industry, it is worth mentioning the existence of many levels of ownership in a company, shown by Barca and Becht for Continental Europe: cross-ownership, rings and high levels of voting concentration in the shareholding structure make it more difficult to identify controlling investors, the perimeters of company control and the voting leverage in majority voting (Barca and Becht, 2001; Chapelle, 2005).
Di Pietra et al. (2008) presented evidence that corporate governance quality, measured by the fraction of directors who serve on multiple corporate boards ("busy" directors), positively influences the market value of Italian companies, while they did not find any significant relationship between board size and market value. Results about this relationship, however, are contradictory. Mak and Kusandi (2004) reported a negative relationship between board size and firm valuation, in line with the results of previous studies showing that directors on larger boards may be more reluctant to initiate changes due to expected delays and disagreements (Shaw, 1981), or that the effectiveness of larger boards' activity may be hindered by poor coordination (Gladstein, 1984) and lack of motivation (Jewell and Reitz, 1981). Nevertheless, focusing on a sample of smaller firms with a history of poor operating performance, Larmou and Vafeas (2010) identified a setting in which larger board size appeared to be positively related to shareholder value. Furthermore, Davidson III and Rowe developed a theory of the intertemporal endogeneity of board composition and financial performance: besides exerting influence on financial performance, board composition is itself affected by past financial performance (Davidson III and Rowe, 2004).
Other studies, on the contrary, have tried to fill the gap caused by the underestimation of the working and quality standards of firms' employees and bodies in measuring corporate governance. Structural indicators, in fact, cannot easily explain managerial behaviour and organizational performance (Larcker et al., 2004). In this sense, Lorsch and MacIver (1989) found that managers' activity, especially in decision-making, benefited from the board's daily operation, as everyday activity is supposed to provide more firm-specific information. In line with process-oriented research aimed at understanding the sources of the "value-creating board" (Huse, 2007), Pugliese and Wenstøp (2007) showed that board working style and board quality attributes were more important sources of board effectiveness than board composition.
Many studies have investigated the roles of the main figures on firms' boards, and in particular the effect of the separation between the chairman and the CEO. Fama and Jensen (1983) suggest that CEO duality violates the principle of separation of decision management and decision control and hinders the board's ability to perform its monitoring functions. However, also in this case, results are not homogeneous. Even though Rechner and Dalton (1991) found that firms in which the two positions are separated perform better on a number of accounting measures, and Core et al. (1999) found that boards are less effective when the CEO is board chair and when the board is relatively big, other research presents opposite results. Baliga et al. (1996), for instance, showed that there are no discernible differences in performance that can be attributed to a firm's leadership structure, and similarly Brickley et al. (1997), as well as other authors (Chen et al., 2008), showed that CEO duality is not associated with inferior performance. Coles et al. (2001) even found that firms that do not separate the positions of CEO and chair of the board have better accounting performance.
In their study on the role of the board chair as distinct from that of the CEO, McNulty et al. (2011) mixed structural and working aspects: linking board composition, board process and the exercise of influence, they revealed differences among chairs in how they run the board and in the influence they exert on board-related tasks.
An important issue in measuring corporate governance concerns the diversity among firms. The influence of context often renders vain any attempt to use a single framework following the "one-size-fits-all" approach (Arcot and Bruno, 2006). For this reason, Faleye (2007) argues that requiring all firms to separate CEO and chairman duties may be counterproductive, because whether CEO duality benefits or hurts the firm is contingent on firm and CEO characteristics. As regards CEO compensation, it is interesting to consider the analysis carried out by Àlverez Pérez and Neira Fontela (2005) on Spanish firms concerning the diffusion of stock option plans, following the agency theory approach.
Similar uncertainty arises with reference to the relationship between the independence of directors and firm performance. While Rosenstein and Wyatt (1990) found evidence of such a relationship, Bhagat and Black (2002) provided evidence suggesting that there is no strong relationship in the long term, and Coles et al.
(2001) found that firms that select higher proportions of independent directors exhibit worse market performance.
In measuring corporate governance features, we also considered the study of De Jong et al. (2006), which presented evidence that general meetings often do not exert any significant influence on management, and the study of Cortesi et al. (2009), which investigated the main limits of, and areas of improvement in, the working of company internal control systems.
In the air transport management literature, however, little has been done on corporate governance, and most studies focus mainly on the airline industry. Kole and Lehn (1999) studied the adaptation of governance structures to the deregulation process in the U.S.A., and found a more gradual adaptation for the airlines with a more concentrated ownership structure, smaller boards and more equity-based pay. Carney and Dostaler (2006) investigated corporate governance models focusing on the ownership-control relationship, and found that low-cost carriers best fit the pattern of entrepreneurial governance, characterized by more direct control of management decisions. Alves and Barbot (2007), on the other hand, quantified governance to verify its link with airline business models; they found that low-cost carriers solve their potential agency cost problems differently from full-service carriers, organising their boards to achieve lower costs and a faster decision-making process.
Many more analyses have been carried out on the measurement of multi-faceted airport performance (Rotondo, 2006). Humphreys and Francis (2002), first of all, reviewed the nature of the performance measurement techniques used by airports. A number of empirical investigations of airport financial and technical performance were then carried out in the Italian context (Barros and Dieke, 2007; Curi et al., 2010) and elsewhere (Barros, 2008; Oum, 2009), mainly through the use of data envelopment analysis (DEA) or variable factor productivity (VFP). Relying on a well-established methodology, this paper aims to take a step forward by shedding light on the unexplored issue of the link between corporate governance systems and airport performance.
Italian airport institutional setting
Though nearly 20 years have passed since the regulatory reform of the airport industry started, the Italian institutional setting can be described as perennially "stuck in transition" from a partial management agreement between the State and the firms, characterized by public presence, to a total management agreement. Some of the gaps which motivated the change thus still persist, such as the lack of competitive pressure, private funds and efficiency. The slowness of the reform has caused the stratification of many heterogeneous situations with reference to both levels of regulation: the right of entry into the airport management market, and the right to use airport facilities and provide services.
Law n. 537/1993 first drove towards privatization by providing for the formation of companies to manage airports, in order to attract new funds and modernize infrastructures. The following Law n. 351/1995 made the process more gradual, repealing the obligation of a public majority share in the company. Nevertheless, the passage to the total management agreement regulated by D.M.
521/97 has still not been completed, and some provisional management agreements remain in the industry. Apart from the eight airports which benefited from special laws before 1993, not all airports have obtained the total management concession and signed the relevant contract with the State. Many companies continue to manage airports under a partial management concession model, sometimes in a precarious way. The distinction between "regular" and "precarious" partial management concessions is based on the presence of an official agreement between the airport company and the State. While the total management agreement allows the company to manage the whole airport for a maximum of 40 years, thus incentivizing direct investment, under the partial management agreement, which lasts 20 years, the State continues to manage the air-side infrastructures; in the precarious cases, the State also collects the aeronautical revenues.
The regulatory confusion over market entry has had a direct effect on the right to use facilities, and especially on the setting of aeronautical fares (Sebastiani, 2009). The C.I.P.E. Deliberation n. 86/2000 had introduced the "dual till" principle in setting the fares of airport services, which obliged airports to correlate the remuneration of aviation activities to costs and left the remuneration of non-aviation activities free for the regulation period of 5 years (¹). However, the following Law n. 248/2005 replaced this rule with the "single till" principle, that is, the duty to allocate at least 50% of commercial earnings to reducing aeronautical charges; the new principle also had retroactive effect. Finally, with art. 17, comma 34-bis of Decree 78/2009, Italian airports with more than 10 million annual passengers have been permitted to introduce long-term fare systems in line with European standards, as an exemption from the previous rule.
In the meantime, the European Community Directive 2009/12/CE requires, from March 2011, airports with more than 5 million annual passengers to set their fares by consulting users and applying to an independent authority in case of disagreement. Up to now, the above fare rules have scarcely been enforced, and fares did not change from 2001 to 2008, causing discontent among airports because of the substantially lower level of Italian fares compared to the European average (Assaeroporti, 2006).
The Italian airport industry is therefore very non-homogeneous, characterized by a variable configuration of management agreements and consequently of ownership, in which the presence of public administration is still strong. Furthermore, only four companies are listed on a stock exchange, and five companies manage a group of airports directly or indirectly through shareholding control. There are also remarkable differences in traffic volume, considering that in the last five years just two airports greatly exceeded the threshold of 10 million annual passengers and five moved between 5 and 10 million passengers per year, while fourteen airports moved between 1 and 5 million passengers. Finally, the Italian system can be defined as widespread, with about 100 airports on the national territory, 47 of which are open to scheduled flights and 45 of which adhere to the national trade union. It is also very concentrated, as shown by the fact that the traffic volume of the 21 airports with more than 1 million average passengers represented nearly 96% of the total from 2005 to 2009.
Methodology
The sample consists of 20 companies managing a total of 27 Italian airports, including all 21 airports with more than 1 million passengers and work-load units (WLU) in the five-year period 2005-2009, the four airports they control as holding companies, and two of the other four airports with a traffic volume between 500,000 and 1,000,000 units. The sample airports, whose features are reported in table 1, account, respectively, for 97.74% and 96.81% of the whole industry's passengers and WLU. The work-load unit, elaborated by the Transport Study Group of the Polytechnic of Central London, is a measure adopted at the international level that helps to overcome some of the limits affecting the measures of passengers and cargo: a single WLU expresses a passenger with baggage or, alternatively, 100 kilograms of cargo, making it possible to compare the traffic volumes of airports characterized by different aeronautical activities.
In order to capture the characteristics of corporate governance systems, two indexes have been developed, the first as a proxy for the concentration of decision-making power (DPC Index) and the second as a proxy for adherence to the best practices (BP Index) prescribed in international reports and codes of conduct (Cadbury Report, 1992; Principles of Corporate Governance, 1994; Greenbury Report, 1995; Hampel Report, 1998; Preda Code, 1999; Smith Guidance, 2003; Higgs Report, 2003; Combined Code, 2010). These documents, together with the corporate governance literature, guided the selection of the variables which compose the indexes. Data was collected from September 2010 to June 2011 by analysing institutional documents of the companies, such as statutes and corporate governance reports, taken from websites or provided directly by the airports' legal, administrative and control offices. Each company's top management was also asked to fill in a structured questionnaire in order to identify the main features of the corporate governance system.
The financial and operational performance of the sample airports was instead measured using the well-established data envelopment analysis (DEA), a linear programming method based on the usual hypotheses of the neoclassical analysis of the production function, which permitted us to calculate the relative efficiency of the companies considered as a homogeneous set of decision-making units (DMU). Finally, we performed a simple correlation analysis between each of the corporate governance indexes and the level of efficiency of airports.
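As a minimal sketch of the input-oriented, constant-returns (CCR) envelopment model used in the efficiency estimation, the code below solves the linear program for each DMU. The airport figures are invented placeholders (the paper's data is summarized in tables 1 and 2), and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows = DMUs (airport companies), columns = variables.
# Two inputs (cost of labour, invested capital) and three outputs
# (aeronautical, handling and non-aeronautical revenues), as in the paper.
X = np.array([[30.0, 120.0], [45.0, 150.0], [25.0, 90.0], [60.0, 200.0]])
Y = np.array([[50.0, 20.0, 15.0], [55.0, 25.0, 18.0],
              [40.0, 22.0, 12.0], [70.0, 30.0, 25.0]])

def ccr_input_oriented(X, Y, o):
    """Efficiency of DMU o: min theta s.t. sum_j lam_j x_j <= theta * x_o,
    sum_j lam_j y_j >= y_o, lam >= 0 (constant returns to scale)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]        # decision variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[o], X.T]           # sum_j lam_j x_ij - theta * x_io <= 0
    A_out = np.c_[np.zeros(s), -Y.T]   # -sum_j lam_j y_rj <= -y_ro
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun                     # theta = 1 means the unit is on the frontier

scores = [ccr_input_oriented(X, Y, o) for o in range(len(X))]
print(np.round(scores, 3))
```

A score of 1 marks a unit on the efficient frontier; scores below 1 give the proportional input contraction needed to reach it.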
This is not the first time DEA has been used, though in a different way, to verify the link between corporate governance and firm profitability (Lehmann et al., 2007). In this study we chose to estimate an input-oriented DEA-CCR index (Charnes et al., 1978), probably the most widely used model. It assumes constant returns to scale between inputs and outputs and considers the former endogenous and the latter exogenous: companies aim to minimize the costs of their activity in order to reach the efficiency frontier, keeping output constant. The standard of comparison is not an a priori calculation but is determined automatically inside the sample, because the model selects the benchmark among the units involved. This seemed in line with the scope of our research, because the same benchmark logic was used to calculate a number of the provisions which constitute the two governance indexes. Other strengths of DEA are that it is a very simple and powerful managerial tool which can handle multiple inputs and outputs, each with very different units. On the other side, its main limitations lie in its low ability to indicate "absolute" efficiency and in the impossibility of testing hypotheses on a statistical basis. Another well-known limit of this method is that the only way to move away from the frontier is to be "inefficient". The analysis of airports' efficiency followed two successive steps. In the first phase, following Barros and Dieke (2007), three inputs and six outputs were selected to analyze airports' efficiency. The inputs were all financial measures: the cost of labour, the capital invested and the other operational costs. The outputs embraced both physical and financial variables: the physical ones include the number of planes, the number of passengers and the tons of cargo moved by airports, while the financial ones include the airports' aeronautical revenues, their handling revenues and the other non-aeronautical revenues.
Because of the high number of airport companies on the efficiency frontier, we chose to deepen the analysis in a second phase in which, for estimation purposes, two inputs and three outputs were extracted (Simar and Wilson, 2008; Curi et al., 2010). For the inputs, a theoretical approach was followed: considering their primary importance in the industry, the cost of labour and the capital invested were chosen. For the outputs, on the contrary, the correlation between each pair was calculated in order to avoid their mutual influence on final performance. We found that a strong correlation, shown in italics in table 2, exists between the aeronautical revenues and, respectively, the number of planes and the number of passengers, together with a strong correlation between the number of planes and the number of passengers. This led us to select the aeronautical revenues and to discard the other two outputs. We then found a significant correlation between handling revenues and tons of cargo; the lower correlation between the handling revenues and the aeronautical revenues, compared with that between the tons of cargo and the aeronautical revenues, led us to select the handling revenues as the second output. Finally, we selected the non-aeronautical revenues, which showed only average correlation with the other outputs.
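The pairwise screening just described can be reproduced in a few lines; the figures below are placeholders, not the values of table 2.

```python
import numpy as np

# Hypothetical output data: columns are planes, passengers, cargo (t),
# aeronautical, handling and non-aeronautical revenues for a few airports.
outputs = np.array([
    [80,  9000, 110, 50, 20, 15],
    [60,  7000,  90, 42, 18, 12],
    [30,  3500, 200, 22, 25,  8],
    [95, 11000,  60, 58, 15, 22],
], dtype=float)
names = ["planes", "passengers", "cargo",
         "aero_rev", "handling_rev", "non_aero_rev"]

corr = np.corrcoef(outputs, rowvar=False)   # pairwise linear correlations
# flag strongly correlated pairs, which would distort the DEA frontier
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(corr[i, j]) > 0.9:
            print(f"{names[i]} ~ {names[j]}: r = {corr[i, j]:.2f}")
```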
In short, the three financial measures were isolated. This favoured a stronger homogeneity between inputs and outputs and was consistent with the scope of verifying the link between the financial performance and the corporate governance of the airport companies. Some other devices were adopted to reinforce the analysis. In order to mitigate short-term economic effects, average data for the recent three-year period 2006/2008 was used. As the latest official financial data on the Italian airport industry dates back to 2006 (ENAC, 2008), when not available on company websites, data was collected from Assaeroporti's archives and Cerved databases or obtained directly from the airport companies.
In measuring performance with DEA, the data referring to airports belonging to groups were necessarily aggregated. A simple concept of group was adopted, namely the set of airports managed or controlled by the same company; each group is therefore considered a single decision-making unit. The combination of indicators meets both DEA conventions, namely a number of observations at least three times the number of inputs plus outputs [60 ≥ 3(2+3)] and a number of units equal to or larger than the product of inputs and outputs [20 ≥ (2*3)] (Raab and Lichty, 2002; Boussofiane and Dyson, 1991).
The decision-making power concentration (DPC) Index
The DPC Index accounts for the global concentration of decision-making power inside the company by considering the structural aspects and responsibilities of the main bodies at the different levels of the organization. It is composed of 17 provisions divided into 5 areas with different percentage weights: ownership concentration, capital protection, shareholders' decision-making power, board of directors' decision-making power, and company's bodies composition (see table 3 for details). In general, higher scores correspond to higher power concentration.
Area n. 1, "ownership concentration", accounts for 25% of the total and is measured by the composition of the company's capital. It is a 6-item scale which takes into account the majorities required for the validity of the deliberations of ordinary and extraordinary shareholders' meetings, as set out in the Italian Civil Code (Art. 2368). The highest score corresponds to a single shareholder holding more than 66.6% of the total shares of the company, while the lowest score is assigned to companies where the first three shareholders together do not hold more than 50%.
Area n. 2, "capital protection", accounts for 25% of the total and is measured by five provisions which, if contemplated in the company statute or in a contract between shareholders, strengthen the position of shareholders, and especially that of majority shareholders. Provision n. 1 refers to the obligation to allocate a certain amount of shares to certain shareholders. Provisions n. 2, n. 3 and n. 4 concentrate on the presence in the statute of the typical forms of protection represented by the option right in case of capital increase (provided by art. 2441 of the Civil Code), the pre-emption right in case of share sales, and approval clauses in case of new entries. Similarly, provision n. 5 verifies the presence of contractual agreements among shareholders blocking share transfers.
Area n. 3, "shareholders' decision-making power", accounts for 25% of the total and is measured by six provisions, the first three of which concentrate on the shareholders' decision-making function inside the meetings.
Provision n. 1 analyses the extent of the power of the shareholders' meeting, since the statute may entrust shareholders with tasks other than those provided by Art. 2364 of the Civil Code. Provisions n. 2 and n. 3 focus on the requirement of strengthened majorities, which implies broader comparison among shareholders and thus lower power concentration. Provisions n. 4 and n. 5 assess the shareholders' influence on the composition of the other bodies, while provision n. 6 assesses the presence of contractual agreements about voting, which are supposed to increase power concentration.
Area n. 4, "board of directors' decision-making power", accounts for 15% of the total and is measured by three provisions. Provision n. 1 assumes that the lower the number of executive directors, the higher the power concentration; following a comparative approach, the sample mean is chosen as a benchmark. Provision n. 2 focuses on the requirement of strengthened majorities for the validity of board deliberations, while provision n. 3 investigates the actual possibility for directors to delegate decisions.
Area n. 5, "company's bodies composition", accounts for 10% of the total and is measured by two provisions, the first examining the number of directors and the second the number of internal auditors. The principle here is that a number of members higher than the sample mean, assumed as a benchmark, encourages comparison and reduces power concentration inside the company.
The DPC Index, to allow comparison with the DEA indexes, was normalized onto a scale of values from 0 to 1.
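As an illustration of how a weighted, normalized index of this kind is assembled, the toy computation below uses the paper's area weights but invented provision scores.

```python
import numpy as np

# Area weights from the DPC Index: ownership concentration, capital
# protection, shareholders' power, board power, bodies composition.
weights = np.array([0.25, 0.25, 0.25, 0.15, 0.10])

# Hypothetical per-area scores for three companies, each area score already
# rescaled to [0, 1] (e.g. the mean of its binary/ordinal provision scores).
area_scores = np.array([
    [0.80, 0.60, 0.50, 0.33, 0.50],
    [0.40, 0.40, 0.33, 0.67, 1.00],
    [1.00, 0.80, 0.67, 0.00, 0.50],
])

dpc = area_scores @ weights   # weighted sum, already on a 0-1 scale
print(np.round(dpc, 3))       # higher values = more concentrated power
```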
The best practice (BP) Index
The BP Index, made up of 10 provisions, measures the degree of adaptation of the airport companies' governance systems to the best practices prescribed by international codes of conduct and reports (see table 4 for details). In general, higher scores correspond to closer adherence to best practices.
The first provision asks whether the company, whether listed on the stock exchange or not, chose to adopt a code of conduct or a similar code. Provision n. 2 evaluates the weight of non-executive directors, whose vigilance function is fundamental, especially when the interests of the executive directors diverge from those of shareholders (Cadbury Report, 1992; Principles of Corporate Governance, 1994; Preda Code, 1999; Higgs Report, 2003). Provision n. 3 similarly measures the presence of independent directors among the non-executive directors; independent directors neither have economic dealings with the company nor sign shareholder agreements which could affect their independent judgement (Preda Code, 1999, art. 3, lett. a and b). In both provisions the sample average is chosen as a benchmark.
Provision n. 4 verifies the separation between the role of the chairman and that of the chief executive officer, because "CEO duality" concentrates power in a single person and is therefore supposed to be prejudicial to balanced decision-making (Cadbury Report, 1992; Hampel Report, 1998). Provision n. 5 focuses on the use of stock options to remunerate executive directors (Àlverez Pérez and Neira Fontela, 2005). This method can orient directors' activity because it provides incentives to maximize the firm's market value; however, it should be used cautiously (Cadbury Report, 1992; Greenbury Report, 1995; Hampel Report, 1998), and for this reason a limit of 1% of the company's capital was fixed.
Provision n. 6 asks whether the company set a limit on the number of tasks undertaken by directors, following the principle that directors should be able to dedicate sufficient time to board work. The same principle underlies provision n. 9 on the effectiveness of internal auditors' activity (Bianchi Martini et al., 2006): stakeholders must be able to rely on professionals not burdened with excessive tasks in other companies (Assonime, 2010). Provision n. 7 deals with the number of committees appointed inside the board, mainly composed of non-executive and independent directors, in order to improve the board's decision-making effectiveness and to protect minority interests (Cadbury Report, 1992; Hampel Report, 1998; Preda Code, 1999; Smith Guidance, 2005); here too the sample average is chosen as a benchmark. The balancing of majority and minority rights is also related to the possibility for the minority to appoint internal auditors, the issue addressed by provision n. 8: the Board of auditors was introduced as a way for shareholders not involved in decision-making to control the power of majority shareholders and executives, and the presence of internal auditors appointed by different shareholders promotes the integration of competencies and favours the common interest (Ambrosini, 1999; Fortuna, 2001; CNDC, 2003). Finally, provision n. 10 verifies whether the external auditing body, or bodies related to its activity, has been entrusted with other tasks, since multiple tasks assigned by the same company reflect lower independence (CNDC, 2005; Bianchi Martini et al., 2006); industry features nevertheless suggest not treating the cost accounting certification required by Law n. 248/2005 as a separate task.
The BP Index, too, was normalized onto a scale of values from 0 to 1 for comparison with the DEA indexes.
Results and discussion
A number of points emerge from the calculation of the corporate governance indexes and from their relationship with the financial and technical performance of the airport companies measured by the DEA indexes. With reference to the first objective of the research, verifying the degree of maturity of the corporate governance models developed by different categories of airport companies, some interesting results emerge (see table 5). In general, the industry shows a medium level of concentration of decision-making power and a lower level of adoption of best practices. The main descriptive statistics also reveal, with reference to the DPC Index, a more homogeneous distribution of the units. In particular, with reference to differences in traffic volume, expressed in work-load units, a similar level of decision-making power concentration was found across the airport classes. By contrast, the adoption of best practices tends to decrease from the airports moving the largest amounts of WLU to those moving the smallest.
With reference to the second category, that of airports belonging to a group, a decision-making power concentration slightly higher than the average and a best-practice adoption significantly higher than the average were found. This reveals that the complex management issues faced by companies which control systems of airports result, on one hand, in corporate governance systems more adherent to the provisions of codes of conduct, but, on the other hand, in a more intense protection of majority shareholders' role and privileges. The decision-making process of such airport companies, for this reason, seems less participatory and balanced.
The following two categories, that of airports with private majority shareholders and that of airports listed on a stock exchange, present similar corporate governance features. Both show a decision-making power concentration a little lower, and a best-practice adoption remarkably higher, than the sample average. The BP Index value for the listed companies, in particular, is the highest by far; interestingly, all the companies listed on a stock exchange have values equal to or greater than the median. This result was expected because, although code of conduct adoption is voluntary rather than mandatory, the principles of fairness and transparency exert a stronger influence on listed companies. Moreover, 80% of the companies which manage groups and 75% of the companies with private majority shareholders also have BP Index values equal to or greater than the median, revealing stronger attention to best practices than in the rest of the companies. Following a benchmarking approach among the different categories, airports with private majority shareholders and airports listed on a stock exchange show the highest degree of maturity of corporate governance systems.
In order to answer the second question of the research, namely to verify the link between corporate governance and the performance of airports, the correlations between each of the two corporate governance indexes and the DEA indexes were calculated. The results, shown in table 6, clarify the nature and direction of the links between these variables. Before investigating these relationships, however, it is useful to comment on the technical and financial performance of the different categories of airports. Taking into account the more significant DEA Index 2, made up of two inputs and three outputs, we found that all the companies with private majority shareholders lie on the efficient frontier; in other words, they show the best performance. Also 80% of the companies which manage groups show the best performance, while the percentage falls to 50% for listed companies. The analysis of the relationship between corporate governance features and firm performance, on the one hand, confirms some of the tendencies predicted by theory and highlights probable cause-effect links; on the other hand, it shows that the linear relationships between the terms are weak.
First of all, a slight negative correlation emerges between the concentration of decision-making power and the development of governance systems in line with international best practices. Consistently, while the DPC Index is negatively correlated with performance, the BP Index shows a positive relationship with the DEA indexes. Considering DEA Index 2, however, the inverse relationship between power concentration and performance is stronger than the positive one between best-practice adoption and performance. The direction of the relationships hypothesized in the literature is thus at least confirmed: a stronger concentration of power should hamper internal debate and thus lead to worse decisions and lower performance, while closer alignment to best practices should lead to a more balanced corporate governance system and thus to better performance. Power concentration, furthermore, appears to be a stronger driver of performance than best-practice adoption.
The weak correlations can be partially explained by the limits of the DEA method in expressing firm performance: in DEA, only inefficient DMUs are ranked. Some other interesting points emerge from the analysis. Since our indexes, like DEA, are only preliminary diagnostic tools, it is necessary to understand the reasons and implications behind the results (Talluri, 2000). The difficulty in establishing a direct link supports those contributions in the literature which stress the importance of dynamic and organizational aspects, rather than structural or normative ones, as determinants of performance. Managerial culture, skills and tools, although sometimes difficult to measure, seem more effective in driving companies towards better results; at the same time, their presence is not automatically guaranteed by more intense negotiation inside or among a company's bodies, nor by tighter adherence to the provisions of codes of conduct.
Moreover, the weak link between the BP and DEA indexes reflects some characteristics of the Italian airport context. Strong public presence, few stock exchange listings and the limited average size of the companies denote low management complexity, which can lead to immature governance systems, revealed by a sort of "accomplishment approach" to the best practices. In this sense, the merely formal adoption of best practices may explain their weak relationship with performance improvement.
Conclusions and perspectives
The empirical investigation found that, after about twenty years, the reform of the Italian airport industry has resulted in a poor degree of maturity of the airport companies' corporate governance models. Because of the slowness and incompleteness of the liberalization process, the corporate governance of Italian airports is characterized by a medium level of concentration of decision-making power and a low degree of coherence with the best practices stated in international codes of conduct or highlighted in the literature.
Conclusions and perspectives

The empirical investigation found that, after about twenty years, the reform of the Italian airport industry has resulted in a poor degree of maturity of the airport companies' corporate governance models. Because of the slowness and incompleteness of the liberalization process, the corporate governance of Italian airports is characterized by a medium level of concentration of decision-making power and a low degree of coherence with the best practices stated in the international codes of conduct or highlighted by the literature.

In line with the approach of contingency theory, specific internal features as well as external influences seem to be important drivers of corporate governance models across the different categories of airports. In particular, the analysis found that the adoption of best practices tends to decrease from the larger airports to the smaller ones. Furthermore, companies which control a number of airports present corporate governance models that are more concentrated but also more adherent to the provisions of codes of conduct.

Not surprisingly, the analysis showed the best results in the clusters quickest to take advantage of the reform: airports with private majority shareholders and airports listed on a stock exchange. Liberalization seems to have had a positive impact on them, as public presence is less intense in both the ownership structure and strategic management. These categories accordingly present a lower decision-making power concentration and a higher best-practice adoption than the sample average.

The study also confirms the existence of a negative relationship between the concentration of power and firm performance, as well as a positive, though less intense, relationship between alignment with best practices and firm performance. The weakness of these links, nevertheless, indicates the need to focus future analyses on more effective, sometimes intangible, drivers of performance, such as the diffusion of managerial culture, logic and tools inside the organization. These elements, in fact, do not seem to be necessarily connected to power concentration or best-practice alignment. The weak relationship between best-practice adoption and firm performance, in particular, may indicate a somewhat formal approach to good governance models, likely connected to the development and features of the Italian airport industry. Such an approach, clearly, does not easily translate into an improvement in efficiency.

Notes

(1) The C.I.P.E. is a government body which intervenes in economic and financial affairs.

Table 4.2 Description of variables of the Best Practice index (BP Index)

N° | Provision | Score
1 | If listed/non-listed, did it agree to codes of conduct or similar codes? | Y = 1 / N = 0
2 | Number of non-executive directors over number of executive directors | 0/1 if ≤/> the mean
3 | Number of independent directors | 0/1 if ≤/> the mean
4 | Does a separation exist between the Chairman and the Chief Executive Officer? | Y = 1 / N = 0
5 | Do the executive directors hold a percentage of shares within 1% of the capital? | Y = 1 / N = 0
6 | Is there a limit to the number of tasks undertaken by directors? | Y = 1 / N = 0
7 | Number of committees inside the board | 0/1 if ≤/> the mean
8 | Are there internal auditors appointed by the minority? | Y = 1 / N = 0
9 | Is there a limit to the number of tasks undertaken by internal auditors? | Y = 1 / N = 0
10 | Has the external auditing body (or linked bodies) been entrusted with other tasks? | N* = 1 / Y = 0
* Unless the external auditing body has been entrusted with cost accounting certification, ex Law n. 248/2005.

The provisions were drawn from the main codes of conduct (Principles of Corporate Governance, 1994; Greenbury Report, 1995; Hampel Report, 1998; Preda Code, 1999; Smith Guidance, 2003; Higgs Report, 2003; Combined Code, 2010). These documents, together with the corporate governance literature, guided the selection of the variables which compose the indexes.

Table 1. Characteristics of the sample
Table 2. Mutual linear correlation among outputs
Table 5. Corporate governance maturity degree for different categories of airport companies
Table 6. Correlations between corporate governance indexes and performance indexes
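As a concrete reading of Table 4.2, the sketch below scores the ten provisions for a single company and sums them into a 0-10 BP Index value; the field names and the simple additive aggregation are our interpretation of the table, not code published with the study.

```python
# Illustrative scoring of the ten BP Index provisions of Table 4.2.
def bp_index(firm, means):
    score = 0
    score += firm["code_of_conduct"]                                   # 1: Y = 1 / N = 0
    score += firm["non_exec_over_exec"] > means["non_exec_over_exec"]  # 2: above sample mean
    score += firm["independent_dirs"] > means["independent_dirs"]      # 3: above sample mean
    score += firm["chair_ceo_separated"]                               # 4
    score += firm["exec_shares_within_1pct"]                           # 5
    score += firm["director_task_limit"]                               # 6
    score += firm["board_committees"] > means["board_committees"]      # 7: above sample mean
    score += firm["minority_auditors"]                                 # 8
    score += firm["auditor_task_limit"]                                # 9
    score += not firm["external_auditor_other_tasks"]                  # 10: reverse-scored
    return int(score)

example = {"code_of_conduct": True, "non_exec_over_exec": 2.0, "independent_dirs": 3,
           "chair_ceo_separated": True, "exec_shares_within_1pct": False,
           "director_task_limit": False, "board_committees": 2,
           "minority_auditors": False, "auditor_task_limit": False,
           "external_auditor_other_tasks": True}
means = {"non_exec_over_exec": 1.5, "independent_dirs": 2.1, "board_committees": 1.4}
print(bp_index(example, means))  # -> 5
```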
Environment-dependent prey capture in the Atlantic mudskipper (Periophthalmus barbarus)

ABSTRACT

Few vertebrates capture prey in both the aquatic and the terrestrial environment due to the conflicting biophysical demands of feeding in water versus air. The Atlantic mudskipper (Periophthalmus barbarus) is known to be proficient at feeding in the terrestrial environment and feeds predominantly in this environment. Given the considerable forward flow of water observed during the mouth-opening phase to assist with feeding on land, the mudskipper must alter the function of its feeding system to feed successfully in water. Here, we quantify the aquatic prey-capture kinematics of the mudskipper and compare this with the previously described pattern of terrestrial feeding. Prior to feeding in the aquatic environment, the gill slits open, allowing water to be expelled through the gill slits. The opposite happens in terrestrial feeding, during which the gill slits remain closed at this point. In water, the expansive movements of the head are larger, amounting to a larger volume increase, and are initiated slightly later than in the terrestrial environment. This implies the generation of strong suction flows when feeding in water. Consequently, the kinematic patterns of the hydrodynamic tongue during terrestrial feeding and aquatic suction feeding are similar, except for the amplitude of the volume increase and the active closing of the gill slits early during the terrestrial feeding strike. The mudskipper thus exhibits the capacity to change the kinematics of its feeding apparatus to enable successful prey capture in two disparate environments.

INTRODUCTION

Capturing prey in both the aquatic and terrestrial environment presents numerous challenges because of the different physical demands on the feeding system to enable it to function in water as well as in air (e.g. Deban and Wake, 2000; Deban, 2003). Aquatic vertebrates generally use suction of water into the mouth, which is achieved by expanding the buccal and pharyngeal cavities during feeding (e.g. Muller and Osse, 1984; van Leeuwen and Muller, 1984; Lauder and Shaffer, 1985). Terrestrial vertebrates have evolved capture modes in which direct contact with the prey by the jaws and/or tongue is used (e.g. Findeis and Bemis, 1990; Schwenk, 2000; Herrel et al., 2012). Animals that capture prey in both environments will generally have a feeding system that favours performance in one of the environments, or else they will have evolved a feeding system that fully compensates for the different biomechanical demands (Bramble, 1973; Stayton, 2011). Among tetrapods, only a few species of salamanders and turtles have a truly amphibious feeding system. Recent studies have shown how terrestrial and aquatic turtles, and also salamanders, alter their feeding behaviour and kinematics to function in the medium they inhabit (Summers et al., 1998; Stayton, 2011; Heiss et al., 2013). Amphibious salamanders and turtles are capable of performing suction feeding in the aquatic environment. However, in the terrestrial environment, these turtles and salamanders increase the extent and duration of the movements from their aquatic feeding patterns and generally perform sub-optimally.
More terrestrially acclimatized turtles and salamanders (turtles that are habituated to the terrestrial environment and salamanders in their terrestrial mode) use an alternative feeding pattern on land, either by modifying their aquatic feeding pattern or by using morphological adaptations, such as the tongue (Stayton, 2011; Heiss et al., 2013). These studies show how the amphibious lower tetrapods manage to feed both in water and on land by using a repertoire of movements of the head, the hyoid and the oral jaws, and/or by using specific morphological adaptations. But how do the feeding patterns of amphibious vertebrates that have retained many of the characteristics of the cranial system of their aquatic ancestors, such as the amphibious fish, change between environments?

Mudskippers are known for their amphibious lifestyle on the intertidal mudflats of tropical river estuaries (Stebbins and Kalk, 1961). Some species of mudskipper have been reported to spend over 90% of their time on land (Gordon et al., 1969). They primarily seek their food out of the water, and prey detection occurs chiefly by sight (Stebbins and Kalk, 1961). The feeding system of the mudskippers has been reported to function effectively in the terrestrial environment (Stebbins and Kalk, 1961; Sponder and Lauder, 1981). Based on recent studies, we have learned how the Atlantic mudskipper (Periophthalmus barbarus) pivots the head down, supported by its strong pectoral fins, to allow the oral jaws to be placed over terrestrial prey (Michel et al., 2014). Terrestrial feeding is further aided by the use of water retained in the buccal cavity. The mudskipper first compresses the buccal cavity, forcing water forward towards its mouth. This water is subsequently sucked back by expansion of the buccal cavity (Michel et al., 2015). In doing this, the mudskipper often manages to transport its prey to the pharyngeal jaw region in a single gape cycle (Michel et al., 2015).

The considerable compression of the buccal cavity and the consequent anterior flow of water observed during the entire phase of mouth opening when feeding in the terrestrial environment (Michel et al., 2015) raise questions about the aquatic feeding ability of the mudskipper. This anterior pumping of water is in the opposite direction to the posterior flow of water generated during suction feeding. In the aquatic environment, moving towards prey generates a bow wave in front of the head, which will deviate the prey away from the mouth unless sufficient suction is generated (Van Wassenbergh et al., 2010). A combination of blowing out water and the bow-wave effect could push the prey away from the mudskipper when approaching the prey while the mouth is opening. Consequently, the mudskipper may have adapted its aquatic feeding system to function in the terrestrial environment to the detriment of its being able to feed under water. In other words, if the mudskipper is capable of feeding in both environments, there must be a considerable alteration in the functioning of its feeding system depending on its current environment. In this study, we examine the function of the feeding apparatus of the mudskipper in the aquatic environment, and compare it to the results described previously on its feeding in the terrestrial environment (Michel et al., 2015).
This is not a direct comparative study of the functional morphology of the mudskipper's feeding system relative to that of other species of fish; rather, it is a comparative study of how the mudskipper manages to retain the functionality of its feeding system across two different environments. We will focus on the elements of the feeding system that are used in both environments, in particular the oral jaws, the hyoid, and the spatio-temporal volume changes of the head. Analysing the kinematics will allow us to see how these elements are employed in each environment. This will answer the question of whether aquatic feeding is compromised in the mudskipper, and whether and how environment-dependent adjustment of the feeding kinematics occurs. Ultimately, this will increase our understanding of how the conflicting biophysical demands of the aquatic and terrestrial environments can be dealt with by a vertebrate feeding system.

RESULTS

Here we present our data on aquatic feeding and, where relevant, we add the terrestrial data from Michel et al. (2015) for comparison. We will first describe the prey-capture behaviour and associated movements of the Atlantic mudskipper in the aquatic environment. When under water, the mudskipper approaches its prey to a distance of approximately 2 cm before initiating its prey-capture movement (Fig. 1A,B). The mouth is opened as the mudskipper slowly approaches its prey. The gape of the mouth is increased and the head is expanded laterally at the height of the suspensorium and the opercula, during which time the prey item is moved towards and into the mouth. The mouth is then closed, which results in the prey item being either bitten into or captured in the buccal cavity (Fig. 1A,B; t=0.05). The gill slits are then opened and small particles in the water can be seen exiting from the gill openings as the opercula are adducted. Often a series of mouth openings and expansive movements is used to capture prey in the aquatic environment: when the water from the previous expansion movement has exited from the gills, the mouth is opened again and a new expansive movement is started by the mouth. The opercula are adducted while the mouth is opening for the following expansive movement, with the gill openings closed once the opercula are fully adducted (see Movie 1).

Terrestrial feeding is described in extenso by Michel et al. (2015); a brief summary follows. The mudskipper keeps its gill slits closed while on land, but when approaching its prey the opercula are adducted (Fig. 1C,D). Water often becomes visible in or around the mouth as the prey is being captured. Shortly after the maximum gape, the head is expanded, widening at the height of the suspensorium. This is followed by an abduction of the opercula (Fig. 1C,D; t=0.05). In some cases, the gill slits open after the prey is captured to release water (see Movie 2).

Kinematics

Kinematic profiles of mouth opening (gape distance), gill opening and hyoid depression were measured over the course of the feeding events (Fig. 2). Although there is no difference in the maximum gape distance between the aquatic and the terrestrial environments (F=0.479, P=0.25), there is a slight difference in the duration of the mouth opening. In the aquatic environment, the gape is open from t=−0.08±0.01 until 0.07±0.01, whereas in the terrestrial environment, the gape is open from t=−0.07±0.01 until 0.05±0.01 (F=4.832, P<0.04), where t=0 is the instant of maximum gape.
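A simple way to illustrate the environment comparisons reported above is a one-way ANOVA on a kinematic variable such as mouth-opening duration. The values below are invented, and the authors' actual design (four individuals, two sequences each per environment) may require a repeated-measures or mixed model rather than this pooled test.

```python
# Illustrative one-way ANOVA comparing a kinematic variable across
# environments; durations (s) are invented placeholder values.
import numpy as np
from scipy.stats import f_oneway

aquatic     = np.array([0.15, 0.16, 0.14, 0.15, 0.17, 0.15, 0.14, 0.16])
terrestrial = np.array([0.12, 0.13, 0.11, 0.12, 0.13, 0.12, 0.11, 0.12])

F, p = f_oneway(aquatic, terrestrial)
print(f"F = {F:.3f}, p = {p:.4f}")
```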
Gill opening

The mudskipper opercula are connected to the body by an opercular membrane which seals the ventral and almost the entire posterior end of the opercular cavity. The opening in the flexible membrane is a 1-cm-high slit. The timing of the opening and closing of the gill slits was measured in the ventral view of the high-speed video recordings. Unfortunately, because the gill slits were not visible from the lateral view, we were only able to obtain accurate data on the medial-lateral width of the opening, and we therefore have no data on the complete area of the gill opening. We therefore considered any opening of the gills to allow full and unrestricted flow in or out of the gill slits. The time during which the gill slits were opened was measured over the feeding sequence for each individual in each environment (averages in purple bars in Fig. 3). In the aquatic environment, the gill slits were opened prior to the prey capture, and closed at t=−0.04±0.01. They opened again when the mouth was closed, at t=0.08±0.01. In the terrestrial environment, we found that the gill slits opened only when the mouth was closed, at t=0.06±0.01.

Hyoid

In the aquatic environment, the ventral contour of the head shows a depression at the level of the hyoid as the mouth opens (Fig. 2B). The maximum depression was around −0.85 cm just before the maximum mouth opening. In the terrestrial environment, the contour at the height of the hyoid started to elevate just after the maximum gape was reached. The moment of maximum elevation was around 0.03 s after the time of maximum gape.

Volume changes and flow velocities

In the aquatic environment, the intra-oral volume is reduced prior to the maximum mouth opening (t=0) (Fig. 2C). As the maximum mouth opening is reached, the internal volume increases (t=−0.02±0.01) and continues to increase, reaching a maximum intra-oral volume of around 4 cm³ (at t=0.09±0.01). However, in the terrestrial environment, the volume increase was initiated earlier (F=4.741, P=0.05) than in the aquatic environment (t=−0.03±0.01) and continues until a maximum buccal volume of 2.6 cm³ is reached (t=0.06±0.02).

In the aquatic environment, the rate of cross-sectional area change showed a reduction of volume in the posterior region of the head prior to the maximum gape (Fig. 3A, zone W1). The first spatio-temporal zone of expansion was at the snout and mouth opening (W2), which subsequently decreased after the maximum gape was reached (W3). Just before the maximum gape was reached, the zone posterior to 60% of the head length started to expand (W4). At t=0.03 the entire head posterior to 90% HL was expanded (W4). At t=0.08 the compression began, with the exception of the very posterior zone, around 5% HL (W5).

The intra-oral flow velocity in the aquatic environment could not be calculated between t=−0.09 and t=−0.04, since during this time both the mouth and gill slits were open (Fig. 3B). After t=−0.04, only the mouth remained open and the intra-oral flow could again be calculated. From around t=−0.03 a flow in the posterior direction started to be generated, reaching peak flow velocity after the maximum mouth opening. The flow velocity diminished towards the posterior of the head when the mouth was closing. From the instant the mouth was closed, around t=0.07, the gill slits were again opened, allowing a low-velocity, posterior-directed water flow to exit the head.

[Fig. 2. Comparison of gape, hyoid and intra-oral volume during prey capture in Periophthalmus barbarus. Mean kinematic profiles for aquatic (blue) and terrestrial (red) prey capture (n=4 individuals, n=8 feeding sequences per environment); t=0 is the moment of maximum gape. The kinematic profiles of gape distance (A), depression of the ventral contour of the head at the height of the hyoid (B), and total intra-oral volume (C) are shown. In B, the dotted line represents the hyoid depression and elevation measured by X-ray during terrestrial feeding from Michel et al. (2015). In C, the total average intra-oral volume change measured over the prey-capture event is shown. Data are presented as means±s.e.m.]

[Fig. 3 caption, opening lost: ... Michel et al., 2015). The vertical black lines on the graphs denote t=0 (maximum mouth opening). The grey bars under graphs A and C illustrate the time during which the mouth is open, while the purple bars illustrate the time the gill slits are open. Values in A,C are spatio-temporally interpolated and averaged (two captures × four individuals) rates of change in the cross-sectional area, given as a function of the position along the head. This shows successive compression and expansion events: W1-W5 in A delineate zones of high compression or expansion during aquatic feeding, L1-L5 in C delineate zones of high compression or expansion during terrestrial feeding. In B,D the corresponding intra-oral flow velocities along the anterior-to-posterior axis are given, showing initially forwards (blue colouring) and then backwards (yellow to red colouring) motion of fluid in relation to the head. The dark area in B illustrates the time during which both the gape and the gill slits are open, and therefore flow velocity could not be calculated.]

DISCUSSION AND CONCLUSIONS

In order to draw a comparison with the data from the aquatic environment, a qualitative description of the pattern observed in the terrestrial environment is provided, based on data from Michel et al. (2015). To make such a comparison possible, in Fig. 3C,D the terrestrial data are presented on the same scale as the aquatic data. On land, prior to the maximum mouth opening, there was a slight reduction of the total volume of the head (Fig. 2C). This reduction in volume was due to the compression of the posterior zone of the mudskipper's head (Fig. 3C, zone L1). With the mouth open and the gill slits closed (Fig. 1A), a flow toward the mouth was generated (Fig. 3D). At around t=−0.02, nearing the time of maximum gape, the total volume of the head started to increase (Fig. 2C). At the very anterior end of the head, the opening of the mouth rapidly increased the cross-sectional area of the mouth (Fig. 3C, zone L2). The cross-sectional areas posterior to 80% HL then started to expand (Fig. 3C, zone L4), reversing the anterior flow to a flow that was posterior in direction (Fig. 3D). In the terrestrial environment, the total volume of the head continued to increase as the maximum gape was reached (Fig. 2C). The closing of the mouth then rapidly decreased the cross-sectional area of the anterior end of the mouth (Fig. 3C, zone L3). Starting from the anterior end of the buccal cavity, we identified three zones of compression (Fig. 3C, zones L3, L5 and L7), but in each case the posterior zones continued to expand (Fig. 3C, zones L4 and L6). This allowed for a further posterior intra-oral flow (Fig. 3D). At around t=0.05, the mouth was closed and the gill slits started to open.
The total intra-oral volume now decreased with a wave of compression that started at the anterior end and finally reached the posterior end of the head (Fig. 3C, zone L7). This created a low-velocity flow directed towards the gill slits (Fig. 3D).

When we compare the mudskipper's kinematics in the aquatic and the terrestrial environments, we find that the magnitude and timing of the compression and expansion movements of the head differ according to the environment. The compression and expansion movements of the head are reduced in the terrestrial environment in comparison to those in the aquatic environment. In addition, we found that the onset of the overall volume increase and the posterior intra-oral flow started slightly earlier in the terrestrial environment relative to the aquatic environment (Fig. 2C; Fig. 3B,D). This is similar to the pattern found in the more amphibious salamanders and turtles, where there is a reduction in the size of movements in response to the terrestrial environment. In a previous study on turtles, naïve aquatic individuals would employ longer and more extensive movements of the gape and hyoid in order to feed on land (Stayton, 2011). This is the most likely response when essentially the same motor patterns are used in water as on land (Stayton, 2011). The movements measured for the eel-catfish (Channallabes apus) when feeding in the terrestrial environment were similar in that the hyoid and jaw depression were larger, but the duration of the movements was shorter than in the aquatic environment (Van Wassenbergh, 2013). The more extensive movements on land might be the result of similar levels of activation of the muscles combined with a lack of the fluid dynamic resistance which would be present in the aquatic environment (Stayton, 2011). In the more terrestrially inclined species of the amphibious turtles and salamanders, the movements of the hyoid are reduced in response to feeding in terrestrial environments (Summers et al., 1998; Stayton, 2011). In our study, we found that the mudskipper, similarly, was capable of larger compressive and expansive movements in the aquatic environment, but that most of these movements were reduced in response to the terrestrial environment.

The depression of the ventral contour at the level of the hyoid is different in the aquatic and the terrestrial environments. It should be noted that the ventral contour of the head at the height of the hyoid is influenced by the opening of the lower jaw. During prey capture in the terrestrial environment, the lower jaw is depressed and rotated over 90°; this positions the lower jaw close to the height of the hyoid (Michel et al., 2015). Despite these obstructions, we were able to measure a difference in depression at the hyoid height in the aquatic and terrestrial environments (Fig. 2B). In the aquatic environment, the depression of the hyoid followed the opening of the mouth as part of the expansion of the buccal cavity. This pattern of hyoid depression during the opening of the jaws is similar to that found in other vertebrates feeding in the aquatic environment (Summers et al., 1998; Heiss et al., 2013). On land, the elevation of the ventral contour at the level of the hyoid is observed just after the mouth opens (Fig. 2B). If we compare the terrestrial ventral contour at the level of the hyoid with the movement shown by the X-ray data of the hyoid from Michel et al. (2015), we find that they are not the same (Fig. 2B, terrestrial X-ray curve).
This is probably due to our inability to follow the tip of the hyoid accurately from the lateral view. The depression of the ventral contour of the hyoid in the terrestrial environment is most likely due to the depression of the lower jaw, the proximal end of which obscures the distal tip of the hyoid in lateral view. The elevation which follows shortly after the depression is probably the contour returning to the ventral outline of the hyoid, which is depressed after the maximum mouth opening (see X-ray curve in Fig. 2B). In amphibious tetrapods feeding terrestrially, the hyoid is either elevated during maximum mouth opening and thus not used, or it is slightly depressed after maximum gape (Summers et al., 1998; Heiss et al., 2013). Here we see evidence of a slight depression in the mudskipper after maximum gape in terrestrial prey capture. Although the depression of the lower jaw may also have affected the aquatic hyoid contour, the lack of elevation after maximum mouth opening indicates that the hyoid is still depressed in the aquatic environment. This suggests that the mudskipper changes the depression of the hyoid in response to the feeding environment.

The final difference between prey capture in the aquatic and terrestrial environments is the intra-oral flow which results from the expansive and compressive movements of the mudskipper's head. In the aquatic environment, we found a relatively typical suction-feeding sequence. In preparation for capturing prey, water is evacuated from the opercular cavity via the gill slits by compression, followed by expansive movements of the head, which draw a water flow in through the mouth (Fig. 1A,B; Fig. 3; Movie 1). During terrestrial feeding, we find a similar pattern of expansive and compressive movements in the mudskipper's head, but they are less extensive. However, these now result in an anterior flow of water toward the mouth opening (Fig. 3C,D) (Michel et al., 2015). This is followed by a posterior flow, in which water is expelled from the gill slits after prey capture (Fig. 1C,D; Fig. 3D; Movie 1). In both environments, the pattern of compressive and expansive movements of the head is similar prior to maximum mouth opening (Fig. 3A,C): in both, the posterior of the head is compressed while the anterior is expanding. The difference in flow comes from the timing of the gill slit closure: when closed, an anterior flow is generated; when opened, water can exit the head posteriorly. Based on the timing of the gill slit opening, we can conclude that the mudskipper is capable of controlling this opening. Without the membranes that enclose the gill chambers, the mudskipper would not be able to generate suction through the mouth in the aquatic environment, and the abduction of the opercula would cause water to flow into the gill chambers through the gill slits. Similarly, the intra-oral water in the terrestrial environment could not be directed fully in the anterior direction and would probably at least partly flow out through the gill slits. Relative to the size of its opercula, the mudskipper has large adductor operculi and levator operculi muscles at some distance from the joint between the opercula and the hyomandibular (see Fig. 2 in Michel et al., 2014). These muscles are possibly used to help seal the gill chambers. Alternatively, the adductor hyohyoideus muscles possibly assist in closing the gill slits independently of the opercular musculature (Spierts et al., 2003).
However, we cannot be sure whether these muscles are responsible for controlling the opening of the gill slits. The gill slits possibly act as a valve system, using the partial pressure differences to 'passively' seal the gill chambers. However, based on the kinematics of the opercula and observations of the gill slit opening, we can see modification of the pattern in response to the prevailing environment.

By studying the prey-capture method of the mudskipper in the aquatic environment and then comparing it with its terrestrial prey-capture method, we found a very clear modification in the way the feeding apparatus is used in these two environments. The mudskipper is capable of benthic suction feeding in the aquatic environment. It uses rapid and extensive expansion of its buccal cavity to generate a flow of water from the external environment into the mouth in a manner very similar to that of fully aquatic fish. We found evidence that the hyoid may be used to aid in the expansion of the buccal cavity in a manner similar to that of a fully aquatic fish. This contrasts with the use of its feeding system in the terrestrial environment, where the compressive and expansive movements of the elements along the head are initiated slightly earlier but are reduced in extent; the sealed gill slits and a considerable elevation of the hyoid aid in capturing prey when on land. We see a similar trend towards a reduction in the expansive movements of the head and hyoid in the more terrestrially inclined amphibious tetrapods when feeding underwater compared to on land (Summers et al., 1998; Stayton, 2011; Heiss et al., 2013). Although this does not apply to gape, we find that the mudskipper modulates the use of the elements of its feeding apparatus in response to its medium in a manner similar to that of the amphibious tetrapods. In this way, the feeding system of the mudskipper can function in both environments as it uses a single set of anatomical structures across the two disparate environments.

As previously stated in Michel et al. (2015), the tetrapodomorphs and the modern sarcopterygians clearly differ in morphology from the mudskippers, but the main functional elements of the mudskipper's feeding system are also present in those groups. A similar usage of the feeding system to capture prey at some stage during the early evolution of the tetrapod lineage can therefore not be excluded on morphological grounds. The similarities in the motion pattern of the hyoid between mudskippers and the tongue-protruding salamanders have led us to question the current hypothesis about the evolution of terrestrial feeding behaviour in the early tetrapods (Michel et al., 2015). We propose that a hydrodynamic tongue, such as we find in the mudskippers when feeding on land, may have evolved to move the prey grabbed between the jaws. In this study, we found that the elevation followed by depression of the floor of the mouth by means of the hyoid skeleton, as exhibited on land, is retained from a behaviour in which intra-oral water is used in the aquatic environment. An already established kinematic pattern of aquatic suction feeding could have allowed for water-mediated terrestrial feeding by means of gradual anatomical specialization towards better terrestrial capture of prey. This would have been achieved by better control of the opening of the gill slits and of the intra-oral volume changes.
This further supports our hypothesis that water-mediated terrestrial feeding may have been an important intermediate step in the colonization of land, as the transition from aquatic feeding to the terrestrial usage of a hydrodynamic tongue only required a small modulation in cranial kinematics while aquatic feeding still remained possible.

MATERIALS AND METHODS

Study species

Five adult individuals of the Atlantic mudskipper Periophthalmus barbarus (Linnaeus, 1766) were obtained through the commercial pet trade. The standard length (measured from the snout tip to the posterior end of the last vertebra) of the five individuals was similar (9.9±1.8 cm). One individual was euthanised using an overdose of MS-222 (Sigma Chemical) and used for computed tomography (CT) scanning (the scanning protocol was described previously by Michel et al., 2014). The mudskippers were kept in a large aquarium (200 l) at a constant temperature of 27°C with a 12L:12D photoperiod cycle. For filming sessions, the animals were transferred to a small plexiglass aquarium (30 l). The mudskippers were trained to capture prey in a narrow corridor extending along one side of the aquarium to increase the chances of filming the animals in a position perpendicular to the camera. Food was always presented on the bottom of the tanks, and the water level was always above the gill slits. All of the specimens used in this study were handled according to University of Antwerp Animal Care protocols.

High-speed video recordings

Two high-speed cameras were used to record the movements of the feeding apparatus in the lateral and ventral planes simultaneously: a Redlake Motionscope M3 and a Redlake MotionPro HS1000 (Redlake Inc., Tallahassee, FL, USA), each recording at 500 fps (1280×1024 pixels). Several bright LEDs provided the necessary illumination. Of all the lateral-view recordings, only those with the lateral side of the head orientated sufficiently perpendicular to the camera lens axis were retained for further analysis. For each individual, two successful feeding events were analysed. The instant of maximum gape was set as t=0 s.

Volume and flow velocities

The ellipse method of measuring the volume of biological objects was first proposed and applied by Drost and van den Boogaart (1986), who established its use for a wide range of biologically relevant applications and tested and validated their model for flow velocities in a suction-feeding fish. Further validation of the model was performed by Aerts et al. (2001) on suction feeding in turtles. Using the ellipse method, the mudskipper head is modelled as 21 elliptical cylinders, the axes of whose elliptical surfaces correspond to the width and height of the head. The height of each cylinder is defined as the length of the head divided by 21 (Fig. 4). The elliptical cylinders were set based on a fish-bound frame of reference for each frame of each video. In the lateral view, the X-axis was defined as the line connecting the tip of the upper jaw and the middle of the body at the operculum (Fig. 4A). The middle of the opercula was set as the origin of the fish-bound frame of reference. In the dorsal view, the X-axis was determined as the line connecting the snout tip to the middle point between the opercula (Fig. 4B). The contours of the head were digitised frame by frame for each video using 30 landmarks per frame, 15 on each side of the head for the ventral and lateral views, using ImageJ (NIH, TX, USA). The contours of the head were then divided at 21 evenly spaced distances using linear interpolation, perpendicular to the respective X-axes (Fig. 4). This gave us 21 heights and widths for each video frame.
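As a minimal sketch of this step, the head volume in a given frame is simply the sum of the 21 elliptical cylinders built from the interpolated heights and widths; the function and variable names below are ours, not from the study's code.

```python
# Illustrative ellipse-method volume: 21 elliptical cylinders per frame.
import numpy as np

def head_volume(heights, widths, head_length):
    """Head volume (cm^3) from 21 section heights and widths (cm)."""
    heights = np.asarray(heights)          # ellipse axis from the lateral view
    widths = np.asarray(widths)            # ellipse axis from the ventral view
    slab = head_length / 21.0              # height of each elliptical cylinder
    # area of an ellipse with full axes h and w is pi * (h/2) * (w/2)
    return float(np.sum(np.pi * (heights / 2) * (widths / 2) * slab))
```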
In order to reduce the data noise caused by manual digitisation, a fourth-order lowpass, zero phase-shift Butterworth filter was used, with a cut-off frequency of 15 Hz. The initial internal volume of the bucco-pharyngeal cavity was obtained from the aforementioned CT scan of a mudskipper. To visualize the dimensions, the bucco-pharyngeal cavity was sectioned and colour coded using Amira (Mercury Systems). This allowed us to accurately determine the boundaries of the bucco-pharyngeal cavity relative to the outer contours (Fig. 4). As with the frames of the video recordings, the outer contours of the animal and the boundaries of the bucco-pharyngeal cavity in the CT scan were divided at 21 evenly spaced distances using linear interpolation. In this way, the volume of each cylinder was calculated, providing the volume of the buccal cavity before the start of the prey-capture event. We assumed that the volume of the tissues of the head remained constant; therefore, any volume change based on the outer contours of the head must equal a change in the volume of the bucco-pharyngeal cavity. The law of continuity dictates that any volume increase of the bucco-pharyngeal cavity must immediately be filled with water; any volume change therefore creates a flow to or from the cavity. Because we knew the cross-sectional area at different positions along the head, we could calculate the instantaneous flow velocity through these areas based on the change in volume behind them. This calculation of the flow velocity is only possible if either the mouth or the opercular slits are open; if both are open, it is unclear how the volumetric change affects the direction or magnitude of the flow velocity along the bucco-pharyngeal cavity. In each high-speed video, the instant of mouth opening was determined. The mouth was considered open until the jaws were closed after prey capture. The same was assumed for the opercular opening: once opened, the gill covers were assumed open until they were fully adducted. Fluid flow was calculated along the X-axis of the fish-bound frame of reference, along the line from the upper jaw to the opercula. It was assumed that prey items behave as water particles in these calculations.
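The smoothing and continuity steps just described can be sketched in the same spirit; the filter settings and frame rate follow the text, but the sign convention, names and data layout are our own assumptions.

```python
# Illustrative smoothing (4th-order, zero phase-shift Butterworth, 15 Hz
# cut-off at 500 fps) and continuity-based flow speed estimation.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500.0                                   # frames per second
b, a = butter(4, 15.0 / (FS / 2))            # cut-off normalised to Nyquist

def smooth(series):
    return filtfilt(b, a, series)            # zero phase shift, as in the paper

def flow_speed(volume_behind, area, fs=FS):
    """Flow speed (cm/s) through a cross-section of given area (cm^2),
    from the volume (cm^3) enclosed posterior to it, frame by frame.
    Positive values denote water drawn posteriorly (our convention)."""
    dV_dt = np.gradient(volume_behind) * fs  # cm^3/s
    return dV_dt / area
```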
Kinematic variables

In the video recordings of the lateral view, the following landmark coordinates were determined for each frame in addition to the contours: (1) the tip of the upper jaw, (2) the tip of the lower jaw, and (3) the point along the ventral outline of the head at the height of the hyoid (see Fig. 4). Using these landmarks, two kinematic variables were measured for the duration of each prey-capture sequence: gape distance and hyoid depression relative to the fish-bound axis. Any hyoid elevation (i.e. when the hyoid tip was lying between the suspensoria) was obscured by the suspensorium and could therefore not be measured.

Comparison with terrestrial feeding kinematics

In this study, data from terrestrial feeding in Periophthalmus barbarus, as described by Michel et al. (2015), were used for comparison with the aquatic data. However, because no X-ray recordings could be used in the aquatic environment, only data from external video recordings in the terrestrial environment were used. To do this, the ventral outline of the head at the height of the hyoid was used in both environments. This comparison required a recalculation of the terrestrial kinematics (i.e. it no longer used the correction factor for hyoid elevation as described in Michel et al., 2015). Videos used in Michel et al. (2015) are available via Dryad at http://dx.doi.org/10.5061/dryad.0fg55.
Antimicrobial potential of Indian Cinnamomum species

Abstract

Cinnamomum is the largest genus of the Lauraceae family, and its species have been used by people as spices, foods, and food additives. A total of 15 Cinnamomum species are distributed in different parts of the Indian subcontinent. Different parts (leaves, stem bark, stem wood, roots, flowers, and fruits) of these species were shade-dried and used for the determination of essential oils. A total of 19 essential oils were identified and quantified from the different parts (leaf, stem bark, stem wood, root, flower, and fruit) of the 15 Cinnamomum species. The stem bark of C. altissimum was rich in essential oils (52.2%), whereas minimum levels of essential oils were recorded in roots (17.9%). γ-Terpinene (11.1%) was reported as the major essential-oil component in C. subavenium flowers. The methanol extract of C. camphora stem wood showed the lowest minimum inhibitory concentrations against S. aureus (25 ± 0.01 μg/ml), H. pylori (29 ± 0.05 μg/ml), B. subtilis (31 ± 0.03 μg/ml), E. faecalis (33 ± 0.01 μg/ml) and C. albicans (38 ± 0.03 μg/ml) when compared to amoxicillin (S. aureus 56 ± 0.05 μg/ml; B. subtilis 27 ± 0.04 μg/ml; E. faecalis 22 ± 0.01 μg/ml), streptomycin (H. pylori 38 ± 0.02 μg/ml) and fluconazole (C. albicans 56 ± 0.01 μg/ml). The methanolic extract of C. camphora stem wood thus demonstrated maximum antimicrobial activity against S. aureus, H. pylori, B. subtilis, E. faecalis and C. albicans. The essential oil of C. altissimum stem bark displayed the lowest MIC values against S. aureus (21 ± 0.03 μg/ml), E. coli (22 ± 0.03 μg/ml), E. cloacae (37 ± 0.06 μg/ml), L. monocytogenes (47 ± 0.08 μg/ml), and P. chrysogenum (101 ± 0.07 μg/ml) when compared to amoxicillin (E. coli 18 ± 0.01 μg/ml; E. cloacae 21 ± 0.05 μg/ml; L. monocytogenes 31 ± 0.03 μg/ml) and fluconazole (P. chrysogenum 101 ± 0.07 μg/ml). The essential oil of C. altissimum stem bark thus displayed maximum antimicrobial activity against S. aureus, E. coli, E. cloacae, L. monocytogenes, and P. chrysogenum. Cinnamomum essential oils may be used as an alternative source of antibacterial and antifungal compounds in the treatment of various types of infections.

Introduction

India is known for its rich biodiversity of medicinal plants as well as for its ancient traditional system of medicine. According to the ancient literature, the Ayurvedic system of medicine was established between 2500 and 500 BCE in India (Subhose et al. 2005). Medicinal plants are widely used in the treatment of fever, asthma, gastrointestinal problems, skin, respiratory and urinary complaints, inflammations, rheumatism, and hepatic and cardiovascular disorders (Tian et al. 2014). The biosynthesis of phytoconstituents in medicinal plants depends widely on the type of plant species, the soil, and their interaction with microorganisms (Zhao et al. 2011; Morsy 2014). More than 200 species of the genus Cinnamomum are naturalized in Asia, South and Central America, China, and Australia (Cardoso-Ugarte et al. 2016). C. altissimum Kosterm is a large tree that grows up to 30 m high, with a stem girth of about 1.5 m. It is native to Peninsular Malaysia and India, as well as to the lowland and hill forests of Sumatra (Kochummen 1989; Abdelwahab et al. 2017). The different parts (leaves, stem bark and stem wood) of this species are used in the healing of wounds (Salleh et al. 2015). C. bejolghota (Buch.-Ham.) Sweet [syn. C. obtusifolium (Roxb.) Nees] is an evergreen tree with smooth stem bark and is naturalized in India (Gogoi et al. 2014, 2021),
Bangladesh, Myanmar, Nepal, Thailand, and Vietnam (Wu and Raven 1996), China, Sri Lanka, Madagascar, and the east of Thailand (Li et al. 2013). C. bejolghota is useful in the treatment of cough, cold, toothache, and liver complaints (Choudhury et al. 1998; Liu et al. 2020). Its leaves are used in the treatment of diarrhea (Chopra et al. 1956) and its bark is useful in fever (Sajem et al. 2008). C. burmannii is a shrub and/or a small tree and is distributed in Southeast Asia, India, Indonesia, and the Philippines (Tan 2005). The stem bark powder is used in the treatment of nausea, flatulent dyspepsia, coughs, chest complaints, diarrhea, and malaria (Chopra et al. 1956; Rovira 2008; Nunes et al. 2022). C. camphora (L.) Siebold is an evergreen tree with wide branching. The leaves are green and broad, and the flowers are white. The species is distributed in China, India, Korea, Japan, and Vietnam. It is useful in cough, cold, toothache, diarrhea, dysentery, skin infections and vomiting (Nishida et al. 2006; Hsieh et al. 2006). C. cassia (L.) J.Presl is a small perennial, evergreen tree with thick, aromatic bark (Sharifi-Rad et al. 2021). It is naturalized in China, India, Vietnam, and Indonesia (Wang et al. 2013). In Chinese and Indian traditional medicine, it is used in the treatment of arthritis, bellyaches, dysmenorrhea, nephropathy, menoxenia and diabetes (Chinese Pharmacopoeia Commission 2015; Zhang et al. 2019). C. cassia possesses antitumor, anti-inflammatory, analgesic, anti-diabetic, anti-obesity, antibacterial, antiviral, cardiovascular-protective, cytoprotective, neuroprotective, and anti-tyrosinase activities (Kwon et al. 2006; Hong et al. 2012). C. glaucescens (Nees) Hand.-Mazz. is a perennial tree with rough bark and is distributed in the Himalayan range of Nepal and India (Baruah and Nath 2006; Chinh et al. 2017). It has been used as an analgesic, antiseptic, astringent, and carminative agent. Its seeds are used in the treatment of cold, cough, and toothache (Chopra et al. 1956; Sthapit and Tuladhar 1993). Seed paste is externally applied to relieve muscular pain and swellings (Prakash et al. 2013). In Manipuri medicine (India), its bark powder is used to treat kidney complaints (Mikawlrawng and Kuma 2014). Essential oils of C. glaucescens demonstrated antibacterial effects against Escherichia coli, Staphylococcus aureus and Klebsiella pneumoniae (Gyawali et al. 2013). C. insularimontanum Hayata is an evergreen tree with broad leaves (Chung et al. 2003). It is distributed in different regions of Taiwan, Bangladesh, Myanmar, and India, and is useful in the treatment of inflammations, gastric ulcers, and rheumatic diseases (Lin et al. 2003). C. javanicum Blume is a medium-sized tree that grows up to 30-35 m high. It is naturalized from southern China to Peninsular Malaysia, Sumatra, Java, India, and Borneo (Wuu-Kuang 2011). A crude stem extract of C. javanicum showed significant antimicrobial activity against Listeria monocytogenes (Yuan et al. 2017; Ardhany et al. 2021). C. kanehirae (Hayata) Hayata [syn. C. micranthum (Hayata) Hayata; syn. C. xanthophyllum H.W. Li] is a perennial tree that grows up to 8-10 m high. It is native to Taiwan (Wu et al. 2017)
but naturalized in tropical and subtropical regions of eastern Asia (India and Myanmar), Australia, and the Pacific Islands (Liao et al. 2010). In Chinese traditional medicine, it is used in the treatment of lung infections and nervous depression. Its essential oils possess antimicrobial (Yeh et al. 2009) and hepato-protective activities (Zisman et al. 2002; Lin et al. 2018). C. kotoense is an evergreen tree that grows up to 45 feet high. It is used in the treatment of headache and for boosting blood circulation (Li et al. 2006; Yuan et al. 2020). C. osmophloeum Kaneh. is a perennial tree and is distributed in Taiwan, India, and South and Southeast Asia. Its essential oils are used in the treatment of bacterial and fungal infections (Kurniawati et al. 2017; Chen et al. 2021). C. subavenium Miq. is a perennial tree that grows up to 18-27 m high (Wuu-Kuang 2011). It is distributed in China, Malaysia, Cambodia, Indonesia, India, and Burma (Lin et al. 2008). Its fruit peel, fruits, and leaves are used in the treatment of abdominal and chest pain, hernia, vomiting, nausea, and diarrhea (Liu et al. 2011; Hao et al. 2015). C. tamala (Buch.-Ham.) T.Nees & Eberm. is an evergreen tree, 20 m high, with soft wrinkled bark (Singh and Singh 1992). It is naturalized in tropical and subtropical regions of Australia, South America, and the Himalayan region of Asia (Ahmed et al. 2000), including India, Nepal, Bhutan, and China (Rema et al. 2005). The species possesses antidiarrheal, antibacterial, and immunomodulatory properties (Kumar et al. 2012). In Nepalese traditional medicine, C. tamala stem bark is used to treat intestinal diseases, nausea, and diarrhea (Kunwar and Adhikari 2005). The leaves are useful in bladder diseases, ulcers, mouth dryness, coryza, diarrhea, and nausea (Kapoor 2000; Tiwari and Talreja, 2020). C. verum J.Presl (syn. C. iners Reinw. ex Blume; C. zeylanicum Blume) is an erect tree and is found in Sri Lanka, China, Sumatra, the Eastern Islands, Brazil, Mauritius, India, and Jamaica (Mehrpouri et al. 2020; Pathak and Sharma, 2021). It possesses stomachic, antirheumatic and carminative properties. Its stem bark paste is mixed with lemon juice and applied externally for the treatment of pimples (Premakumara and Abeysekera 2021; Abeysinghe et al. 2021). C. walaiwarense is an evergreen tree with a large and dense crown, and grows up to 40-70 feet high. Its straight, cylindrical bole (30-80 cm in diameter) has been used in the traditional system of medicine in India (Kostermans 1983; Sriramavaratharajan and Murugan, 2018). The essential oil-rich extract (cinnamaldehyde, eugenol, caryophyllene, cinnamyl acetate and cinnamic acid) possesses antimicrobial (Vigila et al. 2018), antidiabetic, wound healing, and antidepressant activities (Wariyapperuma et al. 2020; Singh et al. 2021). Although the volatile constituents of several Cinnamomum species grown in different countries have been characterized, here we report the distribution of essential oils in the different parts (leaf, stem bark, stem wood, root, flower, fruit) of 15 Indian Cinnamomum species. We have also assessed the antimicrobial activities of the essential oils of the 15 species against selected microorganisms.

Materials and methods

Extraction of plant materials

The shade-dried parts (leaves, stem bark, stem wood, roots, flowers, and fruits) of the 15 Cinnamomum species (500 g) were powdered, percolated with methanol for 48 h, and filtered. The filtrates of each part of each species were concentrated in vacuo at room temperature (yield 39.89 g, w/w).
The methanolic extracts of the different parts of each species were stored separately in a refrigerator at 4°C and used for the evaluation of antimicrobial activities. Similarly, dried parts of the 15 Cinnamomum species were weighed (200 g) and subjected to extraction by water distillation in a Clevenger apparatus (Borosil 3451029, Borosil, India) for 5 h. The essential oils of the different parts of each species were dried over anhydrous sodium sulfate and the yields were determined using the following formula:

essential oil yield (%) = (essential oil mass in grams / plant material mass in grams) × 100

Gas chromatography-mass spectrometry (GC-MS) and gas chromatography-flame ionization detector (GC-FID) analysis

For the determination of the essential oil components of the different parts of the 15 Cinnamomum species, the following GC-MS conditions were used: an Agilent 5975 GC-MSD system (Agilent Technologies, Santa Clara, California) with a 19091N-136I HP-INNOWax FSC column [60 m (length) × 0.25 mm (ID), 0.25 μm (film thickness)]; helium as carrier gas (0.8 ml/min); GC oven temperature held at 60°C for 10 min and then programmed to 220°C at a rate of 4°C/min. The split ratio was maintained at 40:1 and the injector temperature was set at 250°C. Mass spectra were recorded at 70 eV over the range m/z 35-450. The essential oil components were identified by comparing their relative retention times with those of standard compounds. The identities of the components were also confirmed by computer matching against commercial libraries (Wiley GC/MS Library; MassFinder 3 Library; McLafferty; Joulain et al., 2004) and against MS fragmentation patterns reported in the literature and in NIST05.LIB and NIST05s.LIB (NIST, USA; Phutdhawong et al., 2007). The quantitative analysis of the essential oils was performed on an Agilent 7890A gas chromatograph fitted with an HP-5 capillary column (30 m × 0.32 mm × 0.25 μm; 5% phenyl-95% methylpolysiloxane; carrier gas: hydrogen, 1.5 ml/min; oven temperature 60-240°C at 3°C/min). The essential oil solution (1%), dissolved in dichloromethane, was injected at 250°C in split mode (1:20). Results are expressed as normalized relative areas, calculated from the FID (280°C) signal.

Minimum inhibitory concentration (MIC)

The minimum inhibitory concentrations of the methanolic extracts, essential oils and standard compounds were determined using the microdilution method. Methanolic extracts (50 μg/ml initial concentration) and essential oils (10 μg/ml) from the different parts of the 15 Cinnamomum species, as well as the reference compounds (amoxicillin and streptomycin, 5 μg/ml initial concentration), were dissolved in dimethyl sulfoxide (5%; Merck KGaA, Germany) to give the initial stock solutions. The dilution series of the methanolic extracts and essential oils were prepared in 96-well microtiter plates (Microplate Manager 4.0; Bio-Rad Laboratories, Hercules, California, USA). Bacterial suspension containing 10^7 CFU/ml of bacterial cells (100 μl) was added to each well. Amoxicillin and streptomycin (Sigma-Aldrich, St. Louis, MO, USA) were used as antibacterial positive controls, while the last rows, consisting of medium with the selected microbes, served as negative controls. After 24 h of incubation at 37°C, viable microorganisms were stained by mixing 20 μl of resazurin solution (0.01%) into the plates. The minimum inhibitory concentration (MIC, μg/ml) was evaluated as the lowest sample concentration at which no microbial growth was visible. The optical densities of the test samples were recorded at 655 nm and compared with the blank. The MIC values of the methanolic extracts of the different parts of the 15 Cinnamomum species were in the range of 23 ± 0.01-336 ± 0.06 μg/ml against the tested bacterial strains, while the MIC values of the essential oils were between 21 ± 0.03 and 690 ± 0.04 μg/ml. The MIC values of the standard antibacterial compounds (amoxicillin and streptomycin; 18 ± 0.01-58 ± 0.06 μg/ml; positive controls) were measured over different ranges. Dimethyl sulfoxide (5%) solution was used as the negative control.
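Reading a MIC off such a plate amounts to finding the lowest concentration in the two-fold dilution series that shows no growth relative to the blank. A minimal sketch, with an invented growth threshold and example optical densities:

```python
# Illustrative MIC read-out from one row of a two-fold dilution series.
import numpy as np

def mic_from_row(od_row, start_conc, blank_od, threshold=0.05):
    """od_row: ODs ordered from highest to lowest concentration (ug/ml)."""
    concs = start_conc / (2.0 ** np.arange(len(od_row)))
    grew = np.asarray(od_row) - blank_od > threshold
    no_growth = concs[~grew]
    return no_growth.min() if no_growth.size else None  # lowest conc. with no growth

print(mic_from_row([0.04, 0.05, 0.06, 0.31, 0.52, 0.60], 50.0, 0.04))  # -> 12.5
```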
Antifungal susceptibility was likewise evaluated by the serial dilution method using 96-well microtiter plates. Fungal suspension containing 10^7 CFU/ml of fungal spores (100 μl) was added to each well. The fungal spores were cultivated in potato dextrose broth medium and stored at 4°C during this study. The methanolic extracts and essential oils of the different parts of the 15 Cinnamomum species, and fluconazole, were dissolved in 5% dimethyl sulfoxide solution containing polysorbate-80 (0.1%; 1 mg/ml) and added to potato dextrose broth medium with the spore inoculum. The microtiter plates were incubated in a rotary shaker (160 rpm) for 72 h at 27°C. The minimum concentrations without conspicuous microbial growth were defined as the minimum inhibitory concentrations that completely suppressed fungal spore growth. The C. albicans suspension was prepared according to CLSI methods (CLSI 2002, 2007, 2012), with fluconazole (Sigma-Aldrich, St. Louis, MO, USA) as the standard antifungal agent. The MIC values of the methanolic extracts of the different parts of all species were between 23 ± 0.01 and 444 ± 0.01 μg/ml against the tested fungal species. The MIC values of the standard antifungal compound (fluconazole) were also recorded over different ranges (30 ± 0.01-76 ± 0.01 μg/ml) against the selected fungal spores. Dimethyl sulfoxide (5%) solution was used as the negative control.

Minimum bactericidal concentrations (MBCs) and minimum fungicidal concentrations (MFCs)

For the determination of the minimum bactericidal concentration of the test samples, the same serial dilution method was used. The minimum concentration without visible microbial growth was defined as the MBC, corresponding to the death of 99.5% of the bacterial inoculum. The optical densities (OD) of each well were determined at 655 nm and compared to a blank. Amoxicillin (27 ± 0.02 to 77 ± 0.04 μg/ml) and streptomycin (25 ± 0.06 to 90 ± 0.06 μg/ml) were used as positive controls for the bacterial cultures, and dimethyl sulfoxide (5%) solution as the negative control. The minimum fungicidal concentrations (MFCs) of the methanolic extracts and essential oils of the different parts of the Cinnamomum species were evaluated in potato dextrose broth medium inoculated in microtiter plates containing 100 μl of broth per well. The cultures were incubated at 28°C for 72 h. The lowest concentration without visible growth was defined as the MFC, corresponding to the death of 99.5% of the inoculum. The commercial antifungal agent fluconazole (30 ± 0.01-94 ± 0.03 μg/ml) was used as the positive control against the tested microorganisms, and dimethyl sulfoxide (5%) solution as the negative control.

Statistical analysis

Statistical analysis was conducted using SPSS software.
The normality of the statistical variables was assessed using a Kolmogorov-Smirnov test. After ensuring data normality, variance was analyzed using one-way analysis of variance (ANOVA). Comparison of the means (mean values ± standard deviations) was performed using a Duncan test at a 5% probability level; P < 0.05 was used to define statistical significance.

Results

Determination of essential oil components

The shade-dried parts of the 15 Cinnamomum species were subjected to the determination of essential oil components, and the respective percentages (yield %) were calculated using standard protocols. The determined quantities of the components of the 19 essential oils (leaves, stem bark, stem wood, roots, flowers, and fruits) of the 15 Cinnamomum species are presented in Table 1. The analyses revealed that the stem bark of C. altissimum was the richest in essential oils (total yield 52.2%), while the minimum levels were recorded in the roots (total yield 17.9%). The stem barks of C. insularimontanum (total yield 36.3%) and C. osmophloeum (total yield 40.1%) were also rich in essential oils. In the case of C. bejolghota, the maximum percentage of essential oil was observed in the flowers only (total yield 49.4%). Similarly, the flowers of C. mercadoi (total yield 44.4%) and C. subavenium (total yield 45.7%) were also rich in essential oils. The leaves of C. burmannii (total yield 33.5%), C. javanicum (total yield 40.4%), C. tamala (total yield 42.3%) and C. verum (total yield 44.0%) also contained significant quantities of essential oil. The fruits of C. walaiwarense (total yield 32%), C. camphora (total yield 37%), and C. glaucescens (total yield 36.2%) were also found to be rich in essential oils, while the stem woods of C. kanehirae and C. kotoense contained moderate quantities of essential oil (total yields 45.1 and 40.2%). γ-Terpinene (11.1%), α-pinene (10.2%), α-phellandrene (9.9%), isoeugenol (9.0%), camphene (7.9%), α-terpinene (7.8%), isoborneol (7.2%), β-pinene (6.6%), and elemol (5.1%) were reported as the major essential oil components in the different parts of the 15 Cinnamomum species (Table 1). The identities of the components were confirmed by comparison with standard essential oils (Supplementary Table 2).

Antimicrobial activity of methanolic extracts of different parts (leaves, stem bark, stem wood, roots, fruits, and flowers)

The antimicrobial activities of the methanolic extracts of the different parts (leaves, stem bark, stem wood, roots, fruits, and flowers) of the 15 Cinnamomum species were assayed against selected bacterial and fungal species. The ANOVA results displayed a significant difference between the mean suppression halos obtained on treating the tested microbes with the methanolic extracts of the different parts and with the standard compounds (Table 2; p < 0.05). The methanolic extract of C. camphora stem wood presented maximum antibacterial activity against S. aureus (MIC 25 ± 0.01 μg/ml), H. pylori (MIC 29 ± 0.05 μg/ml), B. subtilis (MIC 31 ± 0.03 μg/ml), E. faecalis (MIC 33 ± 0.01 μg/ml) and C. albicans (MIC 38 ± 0.03 μg/ml). Similarly, its root extract also displayed strong activity against H. pylori (MIC 26 ± 0.01 μg/ml), B. subtilis (MIC 29 ± 0.02 μg/ml), S. pneumoniae (MIC 31 ± 0.07 μg/ml), E. faecalis (MIC 32 ± 0.03 μg/ml), A. niger (MIC 23 ± 0.01 μg/ml), and R. phaseoli (MIC 30 ± 0.09 μg/ml).
The methanolic extract of C. altissimum stem bark displayed strong antibacterial effects against S. aureus (MIC 26 ± 0.04 µg/ml), E. coli (MIC 33 ± 0.01 µg/ml) and A. flavus (MIC 81 ± 0.05 µg/ml). The methanolic extract of C. bejolghota stem bark also showed strong antibacterial activity against E. cloacae (MIC 32 ± 0.01 µg/ml), and its stem wood extract presented an antibacterial effect against S. aureus (MIC 41 ± 0.01 µg/ml). Amoxicillin showed maximum antibacterial activity (in terms of MIC) in the range of 18 ± 0.01 to 51 ± 0.08 µg/ml, while streptomycin presented activity in the range of 15 ± 0.05 to 58 ± 0.06 µg/ml (Fig. S7). Similarly, fluconazole demonstrated antifungal activity in the range of 30 ± 0.01 to 76 ± 0.01 µg/ml (MIC; Fig. S8). The maximum MIC (444 ± 0.01 µg/ml) was reported for P. chrysogenum, which showed high resistance to the methanolic extract of C. glaucescens stem bark. The MBC values (Table 4) provided by methanolic extracts of different parts of all species were higher than the MIC values for the tested microbes.

Discussion

The biosynthesis of secondary metabolites depends on defense mechanisms against plant pathogens; the quantity produced, together with quality, may vary as a function of habitat, climatic conditions, and the organ where they are synthesized (Zargoosh Moradi et al. 2020). Moreover, habitat and climatic factors can affect the composition of the essential oils, growth and growth phases, and genetic properties of plants (Millauskas et al. 2004). The differences in quantities of essential oils are most likely due to differences in species, as well as their interaction with environmental conditions (Yavari et al. 2010; Sriramavaratharajan et al. 2017; Ghavam et al. 2020). Essential oils can therefore be considered a natural source of antibacterial and antifungal agents (Cleveland et al. 2015). It has been shown in various studies that the synergistic actions of different essential oils are useful in the treatment of various infections (Mourey and Canillac 2002). The antimicrobial effect of the essential oil recovered from stem bark appears to be mainly attributable to the monoterpenes α-pinene, β-pinene, α-phellandrene, and α-terpinene, together with cinnamaldehyde, eugenol, and isoeugenol. Monoterpenes are widely used in the production of pesticides, cosmetic products, and antiseptic agents. Moreover, α-pinene and β-pinene have anti-inflammatory (Kim et al. 2015), antibacterial (Dhar et al. 2014), and anticancer properties. Similar antibacterial effects of essential oils against S. aureus (inhibition zone 27.4 mm; MIC 2.5 mg/ml; MBC 5.0 mg/ml) were also reported by other researchers (Huang et al., 2014). The essential oil of C. altissimum stem bark demonstrated a potent bactericidal effect against S. aureus (39 ± 0.03 µg/ml) and E. coli (45 ± 0.05 µg/ml). The rate of killing was higher for S. aureus (Gram-positive) than for E. coli (Gram-negative). Several researchers have reported that Gram-negative bacteria are more sensitive to essential oils than Gram-positive strains (Ravikumar et al. 2012; Chaudhry and Tariq 2008), but our results are not in agreement with these previously reported results.
Similarly, the essential oil of C. altissimum fruits also showed potent activity against C. albicans (156 ± 0.04 µg/ml).

Funding

This study was not financially supported by any funding agency.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2022-12-28T16:05:39.920Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "8258e31f387d47d52b7859f6cb126d00c6dfb34f", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.sjbs.2022.103549", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5a3ec1ce131c1dfef624209d1d99cc543427964e", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
81848256
pes2o/s2orc
v3-fos-license
Expression of MicroRNA-221 in Korean Patients with Multiple Myeloma

Multiple myeloma (MM) is a leading cause of death among hematologic neoplasms. Recently, microRNA has been reported to be useful in the diagnosis of multiple myeloma. This study examined whether miR-221 could be used as a diagnostic marker for multiple myeloma. The study was performed on 20 patients with multiple myeloma without any other hematological diseases. MicroRNA extraction was performed using formalin-fixed paraffin-embedded (FFPE) tissues obtained from the bone marrow of patients with multiple myeloma. miR-15a, miR-16, miR-21, miR-181a, and miR-221 were selected as the microRNA target genes for multiple myeloma. The significance of a microRNA was based on a fold-change magnitude of at least 1.5. To quantify the fold changes, data normalized to the human gene SNORD43 were used as the values of the patient group. Fold change values greater than 1.5 were defined as "overexpression", whereas values less than -1.5 were defined as "underexpression". Of note, 65.0% (13/20) of samples showed significant overexpression of miR-221, and the comparison between groups of MM patients with more and less than 30% plasma cells showed no significant difference (P > 0.05). The results of other studies showing a correlation between the expression of miR-221 and MM in Caucasians were confirmed. These results suggest that miR-221 may be a useful indicator for diagnosing patients with MM. In conclusion, miR-221 is useful in the diagnosis and prognosis of multiple myeloma in Koreans.

MicroRNAs (miRNAs) have also been shown to play crucial roles in the onset of solid tumors and hematologic neoplasms by acting as either tumor suppressors or oncogenes depending on their target genes. Studies are ongoing to better understand the expression patterns of miRNA in normal and pathological conditions. In particular, there is great interest in the potential of miRNA to serve as biomarkers, and numerous reports have focused on the effects of miRNA in acute myeloid leukemia, myocardial infarction, and different cancers (eg, prostate, breast, uterine, colorectal, and pancreatic) [10][11][12][13][14][15][16]. The target microRNAs were selected to reflect those frequently reported to be associated with multiple myeloma. Here, we compared the expression of these genes between Westerners and Koreans by choosing miR-15a, miR-16, miR-21, miR-181, and miR-221, which have reportedly been associated with MM. Moreover, we aimed to determine whether microRNAs could be used as a biomarker in patients with MM. miR-221 has been reported to be overexpressed in Western studies, but was underexpressed in our previous study. We sought to determine whether these results are characteristic of Koreans or have other causes. This study was conducted in accordance with the recommendations of the local ethics committee and the Helsinki Declaration. Before the procedures, written informed consent was obtained from all research participants in the study. Our finding for miR-221 was contrary to the previous research: overexpression was reported in Westerners, but underexpression was obtained in our earlier work. Also, the sample size in the preceding research was not sufficient. In order to obtain more objective results, sample selection was carried out directly and the number of specimens was increased [25]. The International Myeloma Working Group diagnostic criteria were used [28]. Twenty patients with multiple myeloma (without any other disease) were enrolled.
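As a minimal illustration of the fold-change rule just described (target-miRNA signal normalized to the reference gene SNORD43, expressed relative to the control group, and classified against the ±1.5 threshold), here is a hedged Python sketch. The luminometer readings, helper names, and the symmetric negative scale for underexpression are illustrative assumptions, not the study's actual software.

```python
# Sketch of the fold-change classification described above. All relative
# light unit (RLU) values below are hypothetical.
def fold_change(patient_rlu, patient_snord43, control_rlu, control_snord43):
    """Ratio of SNORD43-normalized patient signal to normalized control signal."""
    patient_norm = patient_rlu / patient_snord43
    control_norm = control_rlu / control_snord43
    ratio = patient_norm / control_norm
    # Report under-expression on a symmetric scale: a halving becomes -2.0.
    return ratio if ratio >= 1.0 else -1.0 / ratio

def classify(fc, threshold=1.5):
    if fc > threshold:
        return "overexpression"
    if fc < -threshold:
        return "underexpression"
    return "no significant change"

# Hypothetical luminometer readings for miR-221 and the SNORD43 reference.
fc = fold_change(patient_rlu=5200, patient_snord43=800,
                 control_rlu=3000, control_snord43=900)
print(f"miR-221 fold change = {fc:.2f} -> {classify(fc)}")
```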
The mean age of the 11 male subjects was 61 years, and the mean age of the 9 female subjects was 63 years.

Clinical samples

MicroRNA extracted from FFPE tissues of patients with MM was hybridized in a microplate coated with capture probes at room temperature. Normalization was done using the human gene SNORD43 and target gene hybridization. Pre-amplifiers were hybridized (pre-amplifier stage), followed by bDNA hybridization (amplifier stage). Next, the microRNA signal was amplified by attaching a label probe, and specimens were read using a luminometer. Analysis was performed using QG data analysis of the Affymetrix miRNA array. For patient and blank specimens, 40 µL of working solution and 40 µL of the human gene SNORD43 were mixed and divided into two wells using the procedure above. The mixtures were agitated at 240 rpm for 20 sec in a rack on a dedicated overhead stirrer, transferred to a 6 ± 1 °C heating block, and then left to react for 16 hours. After the reaction, the next phase was carried out.

2) Data analysis and software-based selection of endogenous reference genes

After removing the sealing films covering the Capture Plates that had been left to react in the incubator, 200 µL of prepared washing buffer was transferred to each well using multichannel pipettes. After eliminating the buffer solution completely, 100 µL of 2.0 Pre-amplifier solution prepared in advance was transferred to each well, and the plates were left to react for an hour at 46 °C after pressing sealing films onto the plates. The three washing phases described above were then performed, and 100 µL of prepared 2.0 Amplifier solution was added to each well; the plates were left to react for an hour at 46 °C. Then 100 µL of prepared Label Probe was added to each well, and the plates were left to react for an hour within a 46 °C heating block after pressing sealing films onto the plates.

3) Detection

Capture Plates that had completed the reaction went through the three washing phases, and then 100 µL of 2.0 substrate solution was added. The plates were covered with sealing film and left to react for 5 min at room temperature. Specimens were read using a luminometer (Lumigen APS-5, Panomics, Santa Clara, CA, USA) within 15 min and quantified as relative light units (RLUs).

Statistical analysis

For statistical analysis, the paired t-test was used to determine the significance between the experimental group (patients with MM) and the control group, and between groups with bone marrow plasma cell levels of greater than or less than 30% (>30% and <30% groups). Correlation analysis was conducted using the Pearson correlation. Graphs were created using error bar graphs (SPSS PASW Statistics 18.0, SPSS Inc., Chicago, IL, USA), and P-values less than 0.05 were considered statistically significant.

Clinical characteristics of subjects

This study involved a total of 25 cases: 20 patients diagnosed with MM (and with no other neoplastic diseases) and 5 Westerners without neoplastic diseases (control group). The 20 patients were divided into groups with more than and less than 30% plasma cells in the bone marrow (Table 1).

Analysis results of microRNAs

As shown in Tables 2∼5, expression in the patient group was compared with the control group (Table 4).

Plasma cell concentrations of bone marrow and miRNA expression levels

To assess the potential relationship between plasma cell concentration within bone marrow and miRNA expression levels, the 20 patients in the experimental group were divided into one of 2 groups (ie, those with plasma cell concentrations within bone marrow of <30% and >30%).
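A minimal sketch of the group comparisons described in the statistical-analysis paragraph above (paired t-test and Pearson correlation) follows; all values are hypothetical illustrations, not the study's data.

```python
# Paired t-test between SNORD43-normalized patient and control values,
# plus a Pearson correlation, as described in the Statistical analysis
# subsection above. The eight matched pairs below are hypothetical.
import numpy as np
from scipy import stats

patients = np.array([2.1, 1.8, 2.6, 1.5, 2.9, 2.2, 1.9, 2.4])
controls = np.array([1.0, 1.1, 0.9, 1.2, 1.0, 1.1, 0.8, 1.0])

t_stat, p_paired = stats.ttest_rel(patients, controls)
r, p_corr = stats.pearsonr(patients, controls)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f}")
print(f"Pearson: r = {r:.2f}, p = {p_corr:.4f}")
```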
Next, the mean miRNA expression levels of these groups were compared with the control group. In summary, and as shown in Table 5, no significant differences were observed for the expression of any miRNA tested between the <30% and >30% plasma cell groups. Consequently, our findings suggest that there is no significant impact of plasma cell concentration within bone marrow on the expression levels of miRNA in patients with MM.

DISCUSSION

The miRNA is an endogenous non-protein-coding small RNA. In this study, overexpression of miR-21 was observed in 6 cases (30.0%), a finding that contradicts reports that miR-21 overexpression is associated with an increased incidence of various cancers (eg, colorectal, breast, liver, pancreatic, lung, and gastric) and lymphomas [5,17,26]. Overexpression of miR-181a in patients with MM has been implicated in the development of breast, pancreatic, and prostate cancer [5,16]. In this study, the overexpression of miR-181a was seen in 6 cases (30.0%) (Table 5). In studies of Westerners, miR-221 was overexpressed [16,17,19]. However, underexpression was found in our previous study [25]. The present work set out to confirm whether that result is characteristic of Koreans. In addition, we tried to confirm whether there was a problem in the selection of samples, because the earlier sample size was small. The specimens were therefore selected from more strictly defined multiple myeloma patients, and 20 cases were examined. As a result, miR-16 showed the same result as before, and miR-221 now showed overexpression. This is thought to be the result of more careful selection of multiple myeloma patients. In the present study, we found that miR-221 could be used in the diagnosis of multiple myeloma patients together with miR-16. In a Western population study, miR-221 was reported as a potential prognostic and diagnostic marker for multiple myeloma [32].
2019-03-18T13:58:54.711Z
2018-06-30T00:00:00.000
{ "year": 2018, "sha1": "ba607aafa46835e5796af639ca95360f6e087dd5", "oa_license": "CCBYNC", "oa_url": "http://www.kjcls.org/journal/download_pdf.php?doi=10.15324/kjcls.2018.50.2.197", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d0a167c1fad2511df26a00b717046094148d1caa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232409995
pes2o/s2orc
v3-fos-license
A rare case of recurrent congenital sialolipoma of parotid gland in a 3-year-old child: A case report and review of literature

Highlights • Sialolipoma is a rare salivary gland tumor defined histologically by mature adipocytes encasing normal salivary glandular components. • Complete excision of the mass with the lobes of the salivary glands involved seems to be adequate for definitive management. • Congenital sialolipoma should be kept in mind in the differential diagnosis of congenital parotid mass, especially when CT scanning shows a well-circumscribed fat-like tissue within the parotid gland. • Although surgical excision is usually sufficient to treat sialolipoma of the parotid gland, postoperative follow-up is necessary as multifocal lesions can potentially persist, which could lead to recurrence.

Introduction

Tumors of the parotid gland in children are uncommon, and represent only 1.3% of all benign salivary tumors [1]. Lipomas of the parotid are uncommon, accounting for only 0.5% of all parotid gland tumors [2]. In a series of 430 salivary tumors in children less than 15 years of age reported by Krolls et al., just three were of a lipomatous nature [1]. Sialolipoma was first described by Nagao et al. [3] as a new variant of salivary gland lipoma. Grossly it is characterized as a well-circumscribed, soft, yellow mass, and histologically it contains both mature adipose tissue and entrapped normal salivary glandular components surrounded by a fibrous capsule. Until 2005, sialolipoma had never been reported as a congenital lesion, or in a child. The first case of congenital sialolipoma was reported by Hornigold et al. in 2005, in a 7-week-old infant, as a parotid gland mass [4]. To date, more than 40 cases of sialolipoma have been presented, the majority in adult patients. Sialolipoma in children is extremely rare, with only five congenital cases described to date [4][5][6][7][8]. To the best of our knowledge, our case is the sixth case of congenital sialolipoma, and the first case of recurrent congenital sialolipoma in an infant. Interestingly, all congenital cases were derived from the parotid gland. We report a case of congenital sialolipoma in the parotid gland of a 3-year-old male child, whose particularity is its recurrence. The aim of this article is to report one more new case of sialolipoma in an infant and to discuss the clinicopathological and morphological features of sialolipoma, and the possible cause of its recurrence.

Case report

A 3-year-old male child with no specific pathological history, and in particular no family history of congenital abnormalities, was referred to our otorhinolaryngology department for the recurrence of a mass in the right parotid region, which had initially been treated in another private medical structure. The mass had been noted since birth, progressively increasing in size without compressive signs or facial palsy, and was treated surgically at the age of 4 months by tumorectomy with excision of the surrounding parotid tissue. According to the histological report, the appearance was in favor of a congenital sialolipoma, and the excision was complete with a clear margin of safety. Four months later, the mass reappeared at the same site (right parotid gland), gradually increasing in size without compressive signs or facial palsy. Physical examination was significant for a soft, non-tender, mobile mass, 10 cm in diameter, involving the right parotid region and extending over the ipsilateral cervical region (Fig.
1), with some centimetric and infracentimetric homolateral lymphadenopathy. On computed tomography (CT) scan, the mass was subcutaneous and well limited, measuring 90 × 70 × 32 mm, with fatty density (−108 HU) and septa that did not enhance after injection of contrast product, in intimate contact with the superficial parotid lobe (Fig. 2). Given this radiological appearance of the mass, the diagnosis of recurrence of congenital sialolipoma was retained. We planned to perform a total excision of the mass with superficial parotidectomy. The facial nerve was monitored intraoperatively. A standard modified Blair incision was used, the flaps were lifted (Fig. 3A), then the facial nerve was identified in a standard fashion using the tragal pointer and the posterior belly of the digastric muscle as landmarks (Fig. 3B). All branches of the facial nerve were identified and separated from the tumor (Fig. 3C). A complete resection of the mass was achieved and normal facial nerve function was preserved. The operative blood loss was less than 20 ml. A closed suction drain was put in situ, followed by closure in layers. There was no evidence of facial weakness in the immediate postoperative period. Drain removal was done on postoperative day 2. Histopathological examination of the resected specimen further supported the diagnosis of sialolipoma, showing a formation limited by a fibrous capsule consisting of adipose lobules separated by vascular fibrous septa, with local presence of serous acini and excretory tubes (Fig. 4). The 36 months following the surgery were uneventful. There was no evidence of recurrence, Frey's syndrome, or facial weakness. This work has been reported in line with the SCARE 2020 criteria [9].

Discussion

Parotid lipomatous tumors are classified into several histological variants. The standard (true) lipoma is the most common type [10][11][12]. The term sialolipoma was first used by Nagao et al.; it is a mixed tumor consisting of salivary gland elements and mature adipocytes [3]. In their series of 2051 surgically resected primary salivary gland tumors, they reported five cases of well-circumscribed parotid tumors consisting of mature adipocytes and glandular tissue. They proposed that this was a distinct form of salivary gland neoplasm, which they termed sialolipoma. Patients with these lesions had a mean age of 48.6 years (range 20-67 years).

Fig. 3. Intraoperative images of the right parotid mass. After lifting the skin flap, note the polypoid appearance of the tumor, which is clearly seen involving the parotid gland (A). The trunk of the facial nerve (black arrow) was meticulously dissected from the tumor (stars) (B). The facial nerve and its two temporo-facial and cervico-facial branches (2 yellow arrows) have been carefully preserved after exofacial parotidectomy with total removal of the tumor (C).

In 2005, the term sialolipoma was accepted in the World Health Organization (WHO) classification of head and neck tumors [13]. It is an extremely rare entity. In one larger study the incidence of sialolipoma was 0.34% of all surgically resected primary salivary gland tumors [3]. Sialolipoma shows a slight male preponderance (male:female ratio = 1.75:1). It is usually seen in adults, the age range spanning 20-75 years (mean, 54 years) [3]. It is more frequently seen in the parotid gland [14][15][16]. A single case each has been reported involving, respectively, tumors of the soft and hard palate [3].
Clinically, it presents as a slowly growing, asymptomatic, painless palpable mass, with patients ranging in age from 0 months to 84 years and a male predominance. The duration of the lesion in the published literature varied from 2 months to 11 years [17]. Computed tomography (CT) or magnetic resonance imaging can be helpful in narrowing the differential diagnosis and is superior to ultrasonography in defining the exact location and texture of the lesion. Fine-needle aspiration, which is the first-line procedure in diagnosing major salivary gland lesions, is of little help, as its accuracy is <50% in lipomatous tumors [7]. Parotitis is the most common disease of the parotid gland in childhood. Congenital parotid tumours are extremely rare; however, these entities should always be considered in the differential diagnosis if the swelling is persistent. Congenital parotid tumors that can occur in the neonatal period are hemangioma, sialoblastoma, cystic hygroma, branchial cleft cyst, and hamartomas. Epithelial parotid tumors that can occur in late childhood are pleomorphic adenoma, mucoepidermoid carcinoma, and acinic cell carcinoma [4]. Congenital sialolipoma, however, is a newly recognized tumor and only 5 reports are available in the English literature [4][5][6][7][8] (Table 1), including 2 girls and 3 boys whose ages varied between 7 weeks and 10 months. Coincidentally, the left parotid gland was affected in all 5 cases, while in our case it was located on the right side (Table 1). It is extremely difficult to confirm a preoperative diagnosis of sialolipoma. Imaging cannot differentiate reliably between a benign neoplasm and one with a more aggressive growth pattern. In this case, the tumor capsule could not be differentiated from the fibrous septa of the subcutaneous tissue of the cheek. Fine needle aspiration cytology may produce adipose and parenchymal tissue only but, as in this case, more aggressive tumors cannot be discounted. Sialoblastoma could well have a similar presentation [18]. It is, therefore, necessary to proceed to total excision of the mass by superficial or total conservative parotidectomy [4]. The pathogenesis of sialolipoma is not completely understood. According to some authors, it may be associated with salivary gland dysfunction, leading to altered salivary gland configuration, which can be explained microscopically by replacement of the normal salivary gland tissue with mature adipose tissue admixed with atrophic salivary glandular elements and chronic ductal epithelial cell changes such as oncocytic metaplasia, fibrosis, and lymphocytic infiltrate [19,20]. Complete excision of the mass with the involved salivary gland lobes appears to be adequate for definitive management. Superficial parotidectomy remains the treatment of choice for all superficial lobe benign tumours [24]. Thus, most tumors in the parotid gland are treated with superficial parotidectomy [25]. The extratemporal portion of the facial nerve is most commonly injured during parotid surgery or during the excision of neck lesions originating from or adjacent to the parotid gland. If the tumor is very large, it is safe to approach it by retrograde dissection, by identifying the buccal or marginal branch of the facial nerve. Facial nerve dysfunction may result from accidental transection or compression of the nerve during tissue dissection or wound retraction. Monopolar cautery can also cause thermal and electrical injury to the facial nerve [26].
The patient should be counselled and educated about possible anticipated postoperative complications. Parotidectomy in small children may present some difficulties. First of all, anatomical structures are much smaller, the facial nerve is located relatively superficially in comparison with adult patients, and the mastoid process is not fully developed. Consequently, operating under magnification with surgical loupes or microscopes is mandatory, and facial nerve monitoring may facilitate identification of the nerve [8]. The most commonly used anatomical landmarks for locating the facial nerve in parotid surgery are the anterior border of the digastric muscle and a point one centimeter below and deep to the cartilaginous pointer of the external auditory canal [4]. In our case, the tumor was carefully dissected from the terminal branches of the facial nerve, and continuous intraoperative facial nerve monitoring was very helpful in preserving their function. Several studies have identified no local recurrence, malignant transformation, or other complications after conservative surgical excision [19,23,27]. Just one case of recurrent sialolipoma has been identified, but in an adult [28]. Long-term data regarding recurrence in the pediatric population are not available, with three years of follow-up being the longest reported thus far [22]. This report highlights an exceptional case of congenital sialolipoma of the parotid gland with short-term recurrence (4 months) following a simple tumorectomy extended to the surrounding parotid tissue. Despite complete resection, the tumor reappeared 4 months later. Recurrence may result from a new and independent sialolipoma in the rest of the parotid parenchyma, because a residual sialolipoma was considered unlikely according to the pathology report, which indicated total tumor exeresis with a clear margin of safety. In the recurrence, however, the complete excision of the mass with the involved lobe of the salivary gland (superficial parotidectomy) allowed definitive management of the tumor; there has been no further recurrence after three years of follow-up.

Conclusion

Although it is a very rare benign tumor, congenital sialolipoma should be considered in the differential diagnosis of a congenital parotid mass. The recurrence of congenital sialolipoma depends on its management. Thus, our case confirms that complete excision of the mass with the lobes of the salivary glands involved is adequate for definitive management.

Declaration of Competing Interest

The authors report no declarations of interest.

Sources of funding

None.

Ethical approval

The study is exempt from ethical approval in our institution as it is a "Case report" and not a research study.

Consent

Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.

Registration of Research Studies

Not applicable.

Provenance and peer review

Not commissioned, externally peer-reviewed.
2021-03-30T06:16:13.677Z
2021-03-16T00:00:00.000
{ "year": 2021, "sha1": "2224526b64d6fa15e5e32e74304098d46b171a76", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ijscr.2021.105784", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "09ce0a7b22a24aa63f0334065169e2f806a8873b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
254005350
pes2o/s2orc
v3-fos-license
Characteristics and one year outcomes of melioidosis patients in Northeastern Thailand: a prospective, multicenter cohort study

Summary

Background Melioidosis is a neglected tropical infection caused by the environmental saprophyte Burkholderia pseudomallei.

Methods We conducted a prospective, observational study at nine hospitals in northeastern Thailand, a hyperendemic melioidosis zone, to define current characteristics of melioidosis patients and quantify outcomes over one year.

Findings 2574 individuals hospitalised with culture-confirmed melioidosis were screened and 1352 patients were analysed. The median age was 55 years, 975 (72%) were male, and 951 (70%) had diabetes. 565 (42%) patients presented with lung infection, 1042 (77%) were bacteremic, 442 (33%) received vasopressors/inotropes and 547 (40%) received mechanical ventilation. 1307 (97%) received an intravenous antibiotic against B. pseudomallei. 335/1345 (25%) patients died within one month and 448/1322 (34%) of patients died within one year. Most patients had risk factors for melioidosis, but patients without identified risk factors did not have a reduced risk of death. Of patients discharged alive, most received oral trimethoprim-sulfamethoxazole, which was associated with decreased risk of post-discharge death; 235/970 (24%) were readmitted, and 874/1015 (86%) survived to one year. Recurrent infection was detected in 17/994 patients (2%). Patients with risk factors other than diabetes had increased risk of death and increased risk of hospital readmission.

Interpretation In northeastern Thailand patients with melioidosis experience high rates of bacteremia, organ failure and death. Most patients discharged alive survive one year although all-cause readmission is common. Recurrent disease is rare. Strategies that emphasize prevention, rapid diagnosis and intensification of early clinical management are likely to have greatest impact in this and other resource-restricted regions.

Funding US NIH/NIAID U01AI115520.

Introduction

Melioidosis, infection caused by Burkholderia pseudomallei, is a public health concern throughout the tropics, especially in Southeast Asia. B. pseudomallei is found in soil and water, and causes infection after transcutaneous inoculation, inhalation, or ingestion. 1 Northeastern Thailand is a hyperendemic zone of melioidosis 2 where the burden is both substantial and expanding: the incidence increased between 1997 and 2006 to 21 cases per 100,000 people per year 3 and B. pseudomallei is the second most common cause of bacteraemia. 4 Diabetes is the major risk factor for melioidosis and is also increasing in Thailand. 5 A recent retrospective study in Thailand from 2012 to 2015 reported a 30-day case fatality rate of 39%. 6 In contrast, an Australian study reported a recent case fatality rate of 6% in the setting of prolonged intravenous treatment. 7,8 It is likely that most cases of melioidosis worldwide occur in less well-resourced settings such as northeastern Thailand. 9 For those who survive the initial illness, recurrent infection, either due to relapse or reinfection, is well documented. However, recurrent infection appears to be decreasing over time, perhaps related to antibiotic choice, duration, and adherence or other clinical practice changes.
Recommended antibiotic regimens for melioidosis advise intravenous intensive therapy with ceftazidime or a carbapenem for at least 10-14 days (and longer depending on disease presentation) followed by at least three months of oral eradication therapy with trimethoprim-sulfamethoxazole (TMP-SMX). 1,8 Several clinical trials in northeastern Thailand have reported one year recurrence rates of 6% in 2004, 3% in 2014, and 2% in 2018. [10][11][12] While encouraging, these observations were made in the context of rigorous treatment trials that may not reflect usual practice. Additionally, little is known about other sequelae that may occur in survivors of melioidosis. Complications following hospitalisation for sepsis are common but have not been extensively studied in melioidosis. 13

Research in context

Evidence before this study We searched PubMed with the terms ("melioidosis" or "Burkholderia pseudomallei") and "epidemiology" from the year 2000 to July 13, 2022. This search retrieved 712 results. These studies, almost all in English, indicated the incontrovertible presence of non-travel-associated melioidosis in south, east, and southeast Asia, Africa, the Caribbean, and central and south America. The largest studies were from Thailand and Australia and several randomised controlled trials (RCTs) indicated decreasing rates of recurrent disease; however, no large prospective multicenter studies evaluated other long-term sequelae after melioidosis. Multiple studies identified diabetes as a risk factor for melioidosis, but there were conflicting data about the association of diabetes or sulfonylurea medications with outcome.

Added value of this study This prospective, multicenter, observational study provides some of the most comprehensive and novel data to date about patient characteristics and the persistently high case fatality rate of melioidosis in northeastern Thailand, where the prevalence of diabetes is increasing. The study shows that, in contrast to Australia, severity of illness and deaths from melioidosis remain unacceptably high. Most deaths occur early; patients with respiratory failure are at particular risk of death. The large majority of patients who are discharged alive following their hospitalisation survive one year although one in four will be readmitted to hospital. Recurrent infection, previously a significant concern, is now rare. While diabetes is a major risk factor for melioidosis and increasing in prevalence in Thailand, this study also demonstrates that patients with risk factors other than diabetes experience high rates of death and, in survivors, hospital readmission following a diagnosis of melioidosis.

Implications of all the available evidence Melioidosis is a frequently overlooked tropical disease yet new evidence points to broader endemicity of the infection than previously recognised. In the setting of a progressive increase in diabetes in global tropical populations, melioidosis is likely to become more common. Our study indicates that outside of high-resource regions, the infection remains acutely lethal even with provision of antibiotics, source control, and organ support measures at referral hospitals. Therefore, a heightened focus on prevention, rapid diagnosis and early treatment of melioidosis is of paramount importance.

Therefore, we conducted a prospective, multi-center study with one year of follow up to define the current characteristics of melioidosis patients and prevalence of short- and long-term outcomes in northeastern Thailand.
Methods

Study design, participants, and outcomes

Enrollment into the study was conducted at nine hospitals in northeastern Thailand between 22 July 2015 and 31 December 2018 (Appendix, Page 3). Hospitalised patients at least 15 years of age with microbiologically confirmed melioidosis were prospectively enrolled and followed throughout their hospital stay. Survivors were contacted at one, two, four, six, eight, ten, and twelve months after enrollment (until 31 December 2019). Clinical data were abstracted from the medical record and from the patient or surrogate decision-maker interview into a standardised case report form. Follow up interviews and systematic surveys were administered to surviving patients (Appendix, Page 3). The primary outcome measures were death at one month (defined as 28 days) and death at one year following enrollment. Secondary outcome measures were post-discharge death, readmissions to hospital, self-reported condition, and recurrent infection during the year of follow up.

Definitions

Clinical definitions are provided in the Appendix, Page 3. Recurrent infection was defined as a second episode of a clinical sample culture growing B. pseudomallei after the patient received complete treatment for the first episode. If paired bacterial isolates were available, pulsed-field gel electrophoresis using SpeI as a restriction enzyme 14 was performed to determine whether the recurrent isolate had the same pattern as the initial isolate. If so, the recurrence was defined as relapse; otherwise, the recurrence was defined as reinfection.

Statistical analyses

Normally distributed data are reported using mean and standard deviations. Non-normally distributed data are reported using median and interquartile range. Analyses of categorical data were performed using the Chi-square or Fisher's exact tests. Analyses of continuous data were performed using the t test or rank sum tests. Information about missing data is provided in the Appendix, Page 4. Relative risks of outcomes were estimated using a modified Poisson model with robust standard errors to reduce overestimation of the error. 15 Competing risk regression to estimate the subhazard ratio and cumulative incidence of readmission post-discharge was performed considering death as a competing risk, assuming robust standard errors. Survival curves for overall survival and post-discharge survival were compared using the log rank test. P values less than 0.05 were considered significant. Analyses were performed using Stata SE 17.0 (StataCorp, College Station, Texas, USA).

Ethics

Informed consent was obtained from participants or their surrogate decision makers. The ethics committees of each of the nine study hospitals and the Mahidol University Faculty of Tropical Medicine approved the study (approval number MUTM 2015-002-01). The University of Washington Human Subjects Division issued a statement of non-engagement in human subjects research.

Role of the funding source

The funding source had no role in study design, collection, analysis, or interpretation of data, or reporting of results. The corresponding authors had full access to all the data and take responsibility for the accuracy of the study.

Results

2574 individuals were screened and 1372 were enrolled (Fig. 1). Twenty patients were found not to meet enrollment criteria or had delayed enrollment and so were withdrawn. Therefore, 1352 patients were analysed. Median time to enrollment after admission was three days (IQR 2-4).
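As an illustration of the "modified Poisson" approach cited in the statistical analyses above (a Poisson regression for a binary outcome with robust sandwich standard errors, so that exponentiated coefficients can be read as relative risks), here is a minimal Python sketch. The data frame, variable names, and effect sizes are hypothetical; the competing-risk (subhazard) regression is not sketched, as it is typically run with dedicated tools such as R's cmprsk package.

```python
# Modified Poisson relative-risk estimation (Zou's method): a Poisson GLM
# on a binary outcome with robust variance. All data below are simulated
# stand-ins, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1352
df = pd.DataFrame({
    "died_1mo": rng.binomial(1, 0.25, n),        # binary outcome
    "lung_infection": rng.binomial(1, 0.42, n),  # exposure of interest
    "age": rng.normal(55, 12, n),
    "diabetes": rng.binomial(1, 0.70, n),
})

model = smf.glm("died_1mo ~ lung_infection + age + diabetes",
                data=df, family=sm.families.Poisson())
fit = model.fit(cov_type="HC1")  # robust (sandwich) standard errors

# Exponentiated coefficients are adjusted relative risks with 95% CIs.
rr = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print(pd.concat([rr.rename("aRR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```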
The baseline clinical characteristics of enrolled patients are shown in Table 1. The median age was 55 years (IQR 46-64) and 975/1352 (72.1%) of patients were male. The most common pre-existing comorbidity was diabetes, present in 951 (70.3%) patients. While most patients had an identifiable underlying disease risk factor for acquisition of melioidosis, 177 individuals (13.1%) did not (Appendix, Page 5). 16 Sixty-four patients (4.7%) had a prior history of melioidosis (Appendix, Page 6). The most common occupation (837/1352, 61.9%) was farmer and most patients reported environmental soil (873/1352, 64.6%) or water contact (718/1352, 53.1%) most days per week. The median duration of symptoms prior to hospitalisation was 5 days (IQR 2-10). 841 (62.2%) patients were referred from other hospitals. Major clinical presentations are shown in Table 2 and are characterised further in the Appendix, Pages 7-9. Lung infection was the most common presentation (565, 41.8%) and the most lethal, accounting for 59.7% (200/335) of all deaths at one month. Skin and soft tissue infections were less common (307/1352, 22.7%) and were associated with lower one month mortality (42/307, 13.8%). B. pseudomallei bacteraemia was identified in 1042 (77.1%) patients. Of surviving patients for whom disposition was known, 388/1019 (38.1%) were transferred to another hospital (Fig. 1). Of the 1010 patients alive one month after enrollment, 77 remained hospitalised at the study hospitals. 113/987 (11.5%) of patients alive one month after enrollment for whom one year vital status was known died over the subsequent 11 months; of these, 100/113 (88.5%) had been discharged from the study hospitals at the one month follow up point. Following discharge from the study hospitals, 141/1015 (13.9%) patients died during the follow up period (Fig. 2B). All-cause readmission within one year from enrollment occurred in 235 of 970 (24.2%) patients discharged alive who provided data (Fig. 2C). 175/235 (74.5%) had one repeat hospitalisation; the remainder had 2-6 repeat hospitalisations. The most common reason was for treatment of infection; 43 readmissions were reported by patients as specifically relating to melioidosis (Appendix, Page 10). The median length of time to the first readmission was 34 (IQR 12-109) days (Appendix, Page 22). However, culture-proven recurrent infection was detected in only 17 of 994 (1.7%) patients. Over 85% of discharged patients contacted at one, two, and four months post-enrollment were taking TMP-SMX (Fig. 2D). Confirmed drug reactions were rare but two to three percent reported a rash (Appendix, Page 11). Over time, the number of surviving patients who reported clinical improvement increased (Fig. 2D). At one month, 542/895 (60.6%) reported symptoms, most commonly fatigue, and there was a downward trajectory in symptoms reported over subsequent months (Appendix, Pages 11,23). Median (IQR) time to recurrence was 258 days (191-284) following enrollment. All 17 patients with culture-proven recurrent infection had received intravenous antibiotics effective against B. pseudomallei during their original illness. Twelve patients (70.6%) took TMP/SMX through four months of follow-up, a proportion that was statistically no different than for patients who did not experience recurrence. Of the 17 patients with recurrent infection, the recurrent bacterial isolate was obtained for 11 individuals.
Ten of these patients (10/11, 90.9%) had relapsed infection; the remaining individual had reinfection with a distinct isolate. Five of the 17 patients (29.4%) with recurrent infection died within 28 days of the recurrence. Risk factors for death within one month are shown in Table 3. In adjusted analyses, the only co-morbidities with increased risk of death were gout and liver disease. The adjusted relative risk (aRR) of death among patients presenting with lung infection was 1.80 (95% confidence interval (CI): 1.48-2.18). Patients who required vasopressor/inotropic support or mechanical ventilation had very high aRRs of death (5.84, 95% CI: 4.67-7.29, and 6.72, 95% CI: 5.10-8.85, respectively). Diabetes was associated with a decreased risk of death (aRR 0.61, 95% CI: 0.50-0.75). The absence of any identified clinical risk factor for melioidosis did not decrease the risk of death. Risk factors for death within one year were broadly similar to risk factors for death within one month although additional underlying comorbidities increased risk of mortality (Appendix, Page 12). A reduced risk of death in diabetic patients with melioidosis has been attributed to glyburide (glibenclamide), a sulfonylurea medication with broad anti-inflammatory effects. 17 In our cohort, glyburide use was rare; however, use of glyburide, glipizide, any sulfonylurea medication, or metformin was not associated with a reduced risk of death. (Footnotes to Table 2: the sum of the first column exceeds the total number of patients as some patients had more than one presentation; septic shock is defined as a requirement of inotropic or vasopressor agents during the hospitalisation; vital status was missing for seven patients at one month and 30 patients at one year.) We assessed whether survivors of their hospitalisation had post-discharge outcomes that were associated with their risk factors, clinical presentation or with eradication therapy. Patients with risk factors other than diabetes were more likely to be readmitted than diabetics or individuals without risk factors (Appendix, Page 25). Clinical presentation was not associated with readmission. Skin/soft tissue infection was associated with reduced risk of death in crude competing risk analyses although this effect was moderated after adjustment for covariates (Appendix, Page 18). Taking TMP-SMX at one month and four months after enrollment was associated with significantly decreased risk of death at one year (aRR 0.32, 95% CI: 0.21-0.51 and aRR 0.38, 95% CI: 0.18-0.80; Appendix, Page 19).

Discussion

The main finding of this prospective, multi-center cohort study is that melioidosis remains a highly impactful disease in northeastern Thailand. One in four enrolled patients did not survive to one month and one in three did not survive to one year. During hospitalisation, patients frequently underwent diagnostic, drainage, or source control procedures, required organ support for septic shock and respiratory failure, and were managed in intensive care units. Most deaths occurred early. Among survivors to discharge, one in four patients experienced all-cause readmissions to hospital. These results underscore the considerable burden associated with melioidosis in low resource settings and highlight stark differences in outcome compared to highly resourced hospitals. 7 The findings suggest that strategies targeting early disease identification and management are likely to have the highest impact in mitigating this burden. The global distribution of B.
pseudomallei is predicted to be broader than previously recognised, yet northeastern Thailand is considered hyperendemic for melioidosis. 9 B. pseudomallei is readily detectable in the soil and water 18,19 and a recent retrospective study determined the number of melioidosis cases in the region to be at least 1300 per year. 6 Clinicians at referral hospitals are familiar with the infection as reflected by the finding that the vast majority of patients in our study received intravenous antibiotics active against B. pseudomallei. There is a well-established referral system in the region for ill patients to allow them to access higher levels of care. Despite this, the case fatality rate in melioidosis remains unacceptably high. Notably, 324 patients with culture-proven melioidosis who were screened for this study died prior to enrollment. Had these patients been enrolled in our cohort, the overall one month case fatality rate would have approached 40%. Strikingly, this rate has not changed appreciably in northeastern Thailand since the introduction of ceftazidime to treat melioidosis in 1988. 20 It is instructive to compare the results of our study with those from a recently reported single center melioidosis patient cohort from Darwin, Australia. 7 Over the last five years, mortality from melioidosis in Darwin has decreased to 6%, over four times lower than our observed one month mortality rate of 25%. Patients in our study were more likely to be over 50 years of age, to be male, and to have diabetes. The absence of identified melioidosis disease risk factors in a minority of patients was comparable in both cohorts although the lack of identified risk factors did not reduce the risk of death in our cohort. In contrast, in the Darwin cohort, the relative risk of death among patients with no identified risk factors was 0.12 (95% CI: 0.04-0.37). 7 Clinical presentations were largely similar and dominated by lung infection in both studies, although our cohort had a greater proportion of patients with intra-abdominal infection, most commonly liver and spleen abscesses. Across almost all disease presentations, bacteremia and septic shock were more common in our cohort. About 40% of patients in our study had respiratory failure requiring mechanical ventilation and both septic shock and respiratory failure were very highly predictive of death. Additionally, as noted above, 324 patients with melioidosis screened for the study died prior to enrollment. These data suggest that in northeastern Thailand melioidosis patients may present to referral hospitals with severe or advanced disease. Efforts focused on early identification and management of infection are therefore particularly important. Over 60% of patients were referred from other hospitals, where only a minority of patients received an intravenous antibiotic active against B. pseudomallei. Although we did not identify this as a risk factor for death in referred patients, maintaining high clinical suspicion for melioidosis in at-risk patients (which necessitates awareness of melioidosis among front-line clinicians), making a rapid diagnosis, and ensuring availability and administration of appropriate antimicrobial agents are essential tenets to improve outcomes. In addition, over one third of patients in our cohort required a surgical diagnostic or drainage procedure, capabilities that likely were less available at referring hospitals.
Such procedural capacity is often an under-recognised necessity to obtain adequate source control and may be an actionable intervention to reduce mortality in severe infection and sepsis. Indeed, given the global burden of sepsis, these findings in our melioidosis cohort underscore the critical importance of continuing efforts to improve sepsis care worldwide. 21,22 Few studies have detailed the clinical course of melioidosis patients following hospital discharge. We found that nearly one quarter of patients discharged alive required readmission to hospital over the year, and one quarter of those patients were readmitted more than once. These findings are broadly concordant with studies in highly resourced settings indicating that readmission following hospitalisation for sepsis or pneumonia is common. [23][24][25] Most readmissions occurred within the early weeks after discharge indicating that this time frame may be a potential target for future interventions to reduce the need for rehospitalisation. These readmissions may not be directly related to melioidosis; they may instead reflect the frailty of individuals who acquire melioidosis, given the high frequency of comorbidities and disease risk factors in the population. Together, these observations indicate that surviving melioidosis patients are at high risk of further complications and require close follow up. Australian guidelines now suggest prolonged courses of intravenous therapy for specific disease presentations. 8 Given that most patients in this study were likely to receive shorter courses of intravenous antibiotics, at least as inpatients at the study sites, adoption of these approaches may have benefit in reducing hospital readmissions and late complications. The majority of patients in this cohort had diabetes, the main risk factor for melioidosis. Yet, to our knowledge the observed proportion of diabetics (70%) is one of the highest in any large cohort study of melioidosis to date. This is comparable to the proportion of diabetics observed in a recently reported randomised controlled clinical trial of melioidosis treatment in northeastern Thailand. 12 In contrast, two trials in the region that concluded in 2003 reported 59% of patients with diabetes. 26 This points to an even greater role for diabetes in driving the number of melioidosis cases than previously noted. In line with this observation, the prevalence of diabetes in Thailand and throughout the world is increasing. 5,27 This may be due at least in part to dietary changes, physical activity patterns, and urbanisation. 5 Moreover, much diabetes in Thailand is undiagnosed and even among patients with known disease, glycemic control is worse in the melioidosis-endemic northeastern region of the country. 28 These observations have worrisome implications for the burden of melioidosis not only in Thailand but around the world. For example, south Asia, where melioidosis is being increasingly identified, is home to nearly two billion people with a prevalence of diabetes of about eight percent. 9,29 The association of diabetes with reduced risk of death is apparently paradoxical given that diabetes is the major risk factor for acquisition of melioidosis. However, prior studies have also reported that, among patients with melioidosis, diabetics have lower rates of death. 6,7,17 One proposed explanation is the anti-inflammatory effect of the oral sulfonylurea glyburide (glibenclamide).
17 We did not find that glyburide alone or sulfonylurea therapy was associated with a reduced risk of death; however, we may not have had adequate power to detect an effect of glyburide because few patients were taking this medication. Another possible explanation is that patients without diabetes have risk factors that put them at increased risk of death. We determined that diabetes does not alter the risk of hospital readmission or death compared to non-diabetics without other melioidosis risk factors. In contrast, non-diabetic patients with other risk factors for disease have significantly increased risks of readmission or death. This finding argues against any protective effect of diabetes and instead reflects the fact that melioidosis is an opportunistic infection that causes poorer outcomes in individuals with more severe comorbidities. 7 Failure to achieve complete eradication of B. pseudomallei has been a long-standing concern in melioidosis. Following the intensive intravenous regimen, oral TMP-SMX therapy is currently recommended for 12 weeks. 12 We found that TMP-SMX therapy following hospital discharge was associated with decreased risk of death. Past randomised controlled clinical trials have reported one year recurrence rates that decreased from 6% in 2004 to 2% in 2020, but these rates observed in clinical trials may not reflect actual practice. [10][11][12] Our large observational cohort confirmed that culture-confirmed recurrent infection, despite active surveillance, was comparably rare at 1.7%. Of the recurrent cases, as in previous studies, 30 most were relapsed infection. We did not identify any overt failures in antibiotic selection or adherence, risk factors for relapse. 30 Our observed recurrence rate parallels that from Australia 31 and probably reflects increasing local knowledge of melioidosis management. Over 60% of patients in our study were farmers and the majority of patients were exposed to soil and water on a regular basis, reflecting likely sources of infection. In the absence of a melioidosis vaccine, evidence-based guidelines to reduce the risk of infection, such as wearing boots and drinking bottled water, have been developed. 32 While prevention efforts are essential to reduce the burden of disease, 33 a multifaceted behavioural intervention in northeastern Thailand did not demonstrate effectiveness in reducing the incidence of culture-confirmed melioidosis in diabetics. 34 Strengths of this study are its prospective, multicenter design, size, long duration, and high rates of follow up. We estimate that we enrolled roughly one quarter of all melioidosis patients in northeastern Thailand during the time frame of our study. 6 Limitations include the delay in enrolling patients until melioidosis was confirmed, thus missing patients who died or were discharged early. We did not capture detailed management data from non-study hospitals. Our models may be impacted by unmeasured confounders, especially with regard to unappreciated risk factors for poor outcomes that may bias our results towards the null hypothesis. Our study was not designed to include a comparator cohort without melioidosis to permit the attribution of risk of melioidosis on outcomes, nor do we have data about participants' symptoms prior to infection. As our study was performed entirely in northeastern Thailand, our results may not be generalisable elsewhere.
In conclusion, in northeastern Thailand patients hospitalised with melioidosis experience high rates of bacteremia, organ failure and early death. While most patients are diabetics, those with other risk factors for infection are at highest risk of poor outcomes. Most patients discharged alive survive through one year although readmission is common. Recurrent disease is rare. In addition to prevention efforts, strategies that accelerate rapid diagnosis and intensify early clinical management of melioidosis, including prior to referral, are likely to have greatest impact in this region and in other resource-restricted settings. Data sharing statement Following publication, summary data, case report forms, and consent forms from this study are available upon request from the authors. Declaration of interests The project was funded by NIH/NIAID award U01AI115520 to Narisara Chantratita and T. Eoin West. The authors declare no competing interests.
2022-11-27T17:11:19.108Z
2022-11-25T00:00:00.000
{ "year": 2022, "sha1": "338cd3447d7f03b71107d98826d9155f24acd7af", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.lansea.2022.100118", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "79d268c076391c3dd92f9c98084cf10ce5406052", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
64411019
pes2o/s2orc
v3-fos-license
Quiet Eye: The efficiency paradox – comment on Vickers

1 Brooks Rehabilitation College of Healthcare Sciences, Department of Kinesiology, Jacksonville University, USA
2 Department of Applied Physiology and Kinesiology, Center for Exercise Science, University of Florida, USA
* Corresponding author: University of Florida, Department of Applied Physiology and Kinesiology, FLG 132E, P.O. Box 118205, Gainesville, FL 32611-8205, USA, Tel: +1 352 2941478, Email: cjmj@hhp.ufl.edu

Citation: Mann, D. T. Y., Wright, A., & Janelle, C. M. (2016). Quiet Eye: The efficiency paradox – comment on Vickers. Current Issues in Sport Science, 1:111. doi: 10.15203/CISS_2016.111. This is a commentary on a CISS target article authored by Joan N. Vickers. For retrieving the whole target article including index of contents, editorial, main article, all peer commentaries and author's response: Hossner, E.-J. (Ed.) (2016). Quiet Eye research – Joan Vickers on target. Current Issues in Sport Science, 1:100. doi: 10.15203/CISS_2016.100

Introduction

The publication of Joan Vickers' seminal Quiet Eye (QE) papers (Vickers, 1996a, 1996b, 1996c) offered the promise of a widely generalizable, distinguishing psychomotor metric of expertise. A voluminous body of empirical and applied work has emerged over time, consistently supporting the QE as a reliable covert index of performance excellence (Vickers, 2016). In short, the QE has stood the test of time. Qualitative (Causer, Janelle, Vickers, & Williams, 2012; Wilson, Causer, & Vickers, 2015) and quantitative reviews (Mann, Williams, Ward, & Janelle, 2007) have reiterated the QE as a robust discriminator of expertise and precursor of successful performance. Despite extensive empirical support and widespread perceptual training programs, the underpinnings of the QE period remain poorly understood, and in some ways, counterintuitive.

The efficiency paradox

Perhaps the most robust phenomenon in all performance-related visual search research is the nearly ubiquitous finding that experts and expert performance are consistently characterized by an earlier onset and longer QE. From both scientific and intuitive perspectives, endorsement of a "longer is better" recommendation seems rather crude, and the principal mechanisms associated with this recommendation remain speculative. Simply stated, it seems illogical to expect that a longer-is-better adage is advantageous across performance situations where efficiency is paramount. Research examining the many underlying attributes of expertise has generally concluded that experts are more efficient, effective, and accurate in recognizing task-specific patterns, more proficient at making decisions, maintain superior procedural and declarative information, have a profound reservoir of retrievable contextual cues, and possess an unparalleled ability to foreshadow events and outcomes (Holyoak, 1991; Starkes & Allard, 1993; Mann et al., 2007).
If efficiency, strictly speaking, enables experts to perform greater, more detailed work relative to the total energy expended, how then does the QE represent and/or enable efficiency? Is it simply because the QE acts to reduce the number of fixations and fixation locations during the moments leading up to performance execution? Furthermore, why is a prolonged QE period necessary for the expert advantage to emerge? We briefly explore this paradox in the context of the literature examining the relationships between QE and cortical efficiency, motor preparation, and emotion regulation.

Cortical efficiency

From a purely visuomotor perspective, the QE may serve to maximize efficiency, as reflected in cortical patterns indicative of elite performance (Janelle et al., 2000; Janelle & Hatfield, 2008). Research has consistently reported cortical quieting in the left hemisphere as compared to the right (at temporal, midfrontal, occipital, and parietal regions) when performing visuospatial and motor coordination tasks (e.g., Crews & Landers, 1993; Haufler, Spalding, Santa Maria, & Hatfield, 2000; Janelle & Hatfield, 2008). Elite athletes generally make fewer fixations of longer duration, suggesting a level of information processing efficiency that permits more time to be spent on task-relevant cues and less time in search of those cues (Mann et al., 2007). As such, time to movement onset (that is, decision-action time) should be reduced in the expert. A prolonged QE may permit a similar advantage. Task-salient cues are prioritized during visual search, particularly during the final fixation. During this time, cortical resources are likely reallocated away from analytical processing and irrelevant sensory cues and toward the visuospatially dominant perceptuomotor processes that are critical for effective motor programming and execution. Why, then, the efficiency paradox? Neural efficiency refers to the attainment of superior performance along with simultaneous spatial localization or a reduction in brain activity (Costanzo et al., in press). Studies of motor planning in expert golfers have demonstrated that brain activation during the pre-shot routine is radically different from that of less skilled performers (Mann, Coombes, Mousseau, & Janelle, 2011; Milton, Solodkin, Hlustik, & Small, 2007). The expert brain arguably uses less energy to cope with the task demands by converging activation on smaller brain areas and/or showing less global activation. Irrelevant brain processes are inhibited while essential brain regions exhibit elevated activity as needed, compared to that observed in less-expert performers. Incidentally, a link between cortical efficiency and QE duration has been demonstrated (Mann et al., 2011). Although the experts were more proficient, it is unlikely we can argue they were more efficient based on the QE data reported.
Motor preparation

Conceptually, the QE period is thought to represent the time needed to organize the visual parameters and neural networks responsible for the orienting and control of visual attention (Vickers, 1996a, 1996b). Vickers (1996a, 1996b) has relied heavily on basic cognitive neuropsychological evidence to advance postulates on the cerebral architecture that underlies the QE period. Leveraging the early work of Posner and Raichle (1991), who proposed a three-component network for visual attention, Vickers suggested that the QE period has implications for motor preparation. The orienting network affords shifts in attention, while the executive network works to identify the most salient cues for goal-directed behavior, and the vigilance network functions to support focused attention by enabling the orienting system and suppressing the processing of irrelevant stimuli. A secondary effect of the vigilance network, therefore, may be the reorganization of the neural networks responsible for maintaining visuospatial processing and the activation of the appropriate motor program. Preparatory activity in the milieu of sensorimotor alterations involves an integrated neural conduit linking perception to action (Toni & Passingham, 2003).

The QE appears to functionally represent the time needed to organize the neural networks and visual parameters responsible for the orienting and control of visual attention (Mann et al., 2007; Vickers, 1996a, 1996b). Given this contention, we are again faced with the paradoxical notion that a prolonged QE period, a discernible measure of expertise, is somehow consistent with the increased efficiency associated with expert performance. During the preparation and movement phases of skill execution, the visual attention centers (i.e., occipital and parietal cortex) propagate the necessary directives to the motor regions of the cortex (i.e., motor cortex, premotor cortex, supplementary motor area, basal ganglia, and cerebellum). Consequently, the cortical areas responsible for execution of a motor task may in turn benefit from the reallocation of resources during the QE period, allowing for the development of a more refined motor program that results in better performance and greater expertise. The question remains whether the QE period is the cause or the effect of this reorganization, and why such parameterization should not occur more quickly for experts.
Emotion regulation

A large body of knowledge has emerged lending support to the debilitating effects of anxiety on performance, processing efficiency, and cue utilization. As an extension of this work, several researchers have suggested that the QE period may reflect the regulation of emotional states (Janelle et al., 2000; Mann et al., 2011; Vickers, Williams, Rodrigues, Hillis, & Coyne, 1999) and the needed reinvestment of greater information processing to sustain performance. That is, the extended QE duration that is characteristic of experts may in fact represent the time needed to accommodate the detrimental effects of anxiety/arousal on the recruitment of task-specific resources. Consistent across a variety of reports, QE duration is influenced by modulations in cognitive stress, physiological arousal, or pressure. Importantly, QE duration has consistently been reported as longer for elite compared to subelite performers across conditions (Causer, Holmes, Smith, & Williams, 2011; Mann et al., 2007; Wilson et al., 2015). The notable differences in QE under adverse conditions and between skill levels support an emotion regulation function, or a function that is, at minimum, susceptible to emotional reactivity. Apparently, efficiency in emotion regulation, which may indeed occur more quickly, does not shorten the QE, but rather permits preservation of the processes that occur during an extended QE period.

Implications

Considering the collective evidence summarized here, a trend begins to emerge suggesting the QE may be representative of a covert pruning process that requires additional time to align the perceptual-cognitive systems with the motor systems to execute a skill at its highest level. Why experts take more time to navigate the processes that are theorized to underlie the QE remains unknown. The "efficiency paradox", as we have called it, is perplexing. Moving beyond a superficial understanding of what the QE is and what happens during the QE will require creative research designs, innovative approaches, and mechanistic manipulations. Exploration of the remaining questions spurred by Vickers' seminal work will not only allow a more complete understanding of the QE, but will also aid in advancing the knowledge base and training recommendations needed to facilitate the acquisition and refinement of expert performance across multiple performance domains.
Comparison of Droplet Digital PCR and Metagenomic Next-Generation Sequencing Methods for the Detection of Human Herpesvirus 6B Infection Using Cell-Free DNA from Patients Receiving CAR-T and Hematopoietic Stem Cell Transplantation

Purpose

The aim of this study was to examine and compare the differences between droplet digital PCR (ddPCR) and metagenomic next-generation sequencing (mNGS) in the detection of human herpesvirus 6B (HHV-6B). Long-term monitoring of HHV-6B viral load in patients receiving chimeric antigen receptor-modified T-cell (CAR-T) therapy and hematopoietic stem cell transplantation (HSCT) can be used to identify immune effector cell-associated neurotoxicity syndrome (ICANS) and guide drug therapy.

Methods

Twenty-seven patients with suspected HHV-6B infection who had both mNGS and ddPCR test results were analyzed retrospectively, including 19 patients who received CAR T-cell therapy and 8 who received HSCT. The HHV-6B probe and primers were designed, and the performance of the ddPCR assay was evaluated. Subsequently, ddPCR was performed utilizing blood and urine. Data on clinical information and mNGS investigations were collected.

Results

The ddPCR test results correlated significantly with the mNGS test results (P < 0.001, R² = 0.672). Of the 27 time-paired samples, ddPCR showed positive HHV-6B detection in 20 samples, while mNGS showed positive HHV-6B detection in only 12 samples. ddPCR detected additional HHV-6B infections in 8 samples that would have been missed if only mNGS were used. In addition, the first HHV-6B infection event was detected at a median of 14 days after CAR T-cell infusion (range, 8 to 19 days). Longitudinal monitoring of HHV-6B by ddPCR was performed to assess the effectiveness of antiviral therapy. The data showed that, with antiviral treatment, HHV-6B viral load gradually decreased.

Conclusion

Our results indicated that ddPCR improved the HHV-6B positive detection ratio and was an effective adjunct to mNGS methods. Furthermore, the longitudinal detection and quantification of HHV-6B viral load in patients undergoing CAR T-cell therapy and HSCT may serve as a guide for drug treatment.

Introduction

In recent years, chimeric antigen receptor-modified T-cell (CAR T) therapy and hematopoietic stem cell transplantation (HSCT), including sequential CAR T-cell immunotherapy following autologous stem cell transplantation (ASCT), have achieved dramatic improvements in the treatment of patients with refractory/relapsed hematologic malignancies. 1-3 HSCT and CAR T recipients are very susceptible to infections due to the immune deficiency caused by B-cell aplasia and pretreatment with chemotherapy. Recent studies have highlighted the importance of human herpesvirus 6 (HHV-6) in driving infection. 4,5 Studies have shown that nearly 40%, and occasionally up to 70%, of HSCT recipients develop HHV-6 infection. 6,7 However, the time of onset and the incidence of HHV-6 after CAR T-cell infusion have not been adequately studied. HHV-6 is a β-herpesvirus first isolated from patients with hematologic malignancy. 8 HHV-6 shows broad cell tropism in vivo and induces lifelong latent infection in humans. 9 Patients with active HHV-6 infections often remain entirely asymptomatic, while HSCT patients with HHV-6 encephalitis have variable outcomes, ranging from full recovery with no residual neurologic deficits to permanent disability and death. 7,10 Previous analyses have shown that mortality rates attributable to HHV-6 encephalitis are high. 11,12
HHV-6B reactivation can occur in severely immunocompromised patients and is well described in the HSCT population, but data for patients undergoing CAR T-cell therapy are lacking. CAR T recipients are highly susceptible to infections due to the immunodeficiency caused by prior chemotherapy treatment. 13 Two major toxicities resulting from CAR T-cell therapy are cytokine release syndrome (CRS) and immune effector cell-associated neurotoxicity syndrome (ICANS). 14 Patients undergoing CAR T-cell therapy may also be at risk for HHV-6B-associated encephalitis, which can be difficult to distinguish from ICANS. Beyond these factors, the management of HHV-6B infection differs from that of CRS/ICANS: HHV-6B infection requires immediate initiation of antiviral therapy, while CRS/ICANS can be successfully improved with interleukin (IL)-6 receptor inhibitors and corticosteroids. 15,16 Therefore, it is necessary to distinguish between HHV-6B infection and CRS/ICANS to conduct appropriate treatment during CAR T-cell therapy.

Recently, droplet digital PCR (ddPCR) has been used to quantify nucleic acids and detect pathogens. 17-19 A wide range of human samples, including blood, plasma, serum, cerebrospinal fluid, urine, saliva and bronchoalveolar lavage fluid, can be utilized in this method. 20 Diagnosis of HHV-6B infection relies on the quantification of viral load in body fluids, and ddPCR is known to have a higher sensitivity (< 0.1%) than that achieved with currently used PCR strategies (0.5-5%). 21,22 mNGS is a high-throughput, massively parallel sequencing method capable of detecting a wide variety of pathogens, ranging from viruses to bacteria to fungi and parasites. 23 It is also commonly used to detect HHV-6 infection. Compared with traditional blood cultures, mNGS significantly increases the diagnostic sensitivity for a patient's pathogens even after antibiotic treatment, when blood cultures are often negative. 24,25 Allnutt's research team has used both mNGS and ddPCR to detect HHV-6 in samples from three independent repositories. Another study compared the detection of MYD88 p.(L265P) in cerebrospinal fluid by mNGS and ddPCR methods. 26 The aim of this study was to compare ddPCR and mNGS for the detection of HHV-6B in patients treated with HSCT and CAR T-cell therapy. To our knowledge, this application has not yet been investigated. In this study, considering the sensitivity limitations of mNGS and the time required, we applied ddPCR for longitudinal testing and quantification of HHV-6B viral load in patients receiving CAR T-cell therapy and HSCT, which may help distinguish HHV-6B encephalitis from ICANS and guide drug therapy.

Study Design and Participants

In this study, the general workflow for detecting HHV-6B included mNGS and ddPCR. First, mNGS has been widely used in the clinical microbiological diagnosis of patients with hematological malignancies. If a patient exhibited a confirmed HHV-6B infection, dynamic monitoring of HHV-6B viral load by ddPCR followed. Participants in this retrospective study were highly suspected of having HHV-6B infection (N = 27) and had both mNGS and ddPCR testing performed, with informed consent, between November 2019 and February 2022 at Tongji Hospital of Huazhong University of Science and Technology in Wuhan, China.

Specimens and Clinical Data Collection

Ten milliliter peripheral blood (PB) samples were collected upon receipt of informed consent and loaded into EDTA-K2 anticoagulant tubes.
Once the sample was taken, the tube was inverted to mix the blood with the EDTA in the collection tube to prevent clotting. Five milliliter urine samples were loaded into urine collection tubes. The collected samples were stored at 4 °C and transferred to −80 °C within 8 hours. Clinical data, including data regarding age, sex, treatment history, neutropenia, CRS and therapeutic responses, were extracted from the medical records. Dynamic changes in serum IL-6 and HHV-6B were recorded by clinical monitoring.

Cell-Free DNA Extraction

cfDNA was extracted from samples with the QIAamp Circulating Nucleic Acid Kit (Qiagen), and carrier RNA was added before lysis. After extraction, cfDNA was quantified using a Qubit 3.0 fluorometer and a high-sensitivity DNA detection kit (Invitrogen, Life Technologies, Carlsbad, CA, USA). The extracted cfDNA was stored at −20 °C until ddPCR.

Design of Primers and Probes

Two sets of specific primers and probes, targeting the conserved regions of the HHV-6B target gene and the internal reference gene sequence of the human nuclear RNase P protein POP4, were designed using Primer Express 3.0.1 (Thermo Fisher Scientific, MA, USA). The TaqMan minor groove binding probes were labeled with FAM and VIC fluorophores, respectively (Applied Biosystems, Foster City, CA, USA). They were then synthesized by Shanghai Biological Engineering Corporation Ltd. (Shanghai, China). The sequences of the primers and probes used in the ddPCR are given in Table 1.

Generation of the Plasmid and Verification of Primers and Probes

The plasmid was generated by joining the HHV-6B target sequence to the pUC-57 plasmid vector, and the product was then amplified. The restriction enzymes BamHI (catalog number R0136S; NEB, USA) and HindIII (catalog number R0104S; NEB, USA) were used to linearize the plasmid to obtain the standard while retaining the intact HHV-6B target sequence. The plasmid standard was then diluted in 5-fold serial dilutions, from approximately 10,000 copies/µL down to 3.2 copies/µL, for ddPCR validation. Standards at each concentration were tested in duplicate under identical conditions to test quantitative linearity.

Detection of HHV-6B by Droplet Digital PCR

The total volume of the ddPCR mixture was 20 µL, and ddPCR was performed using a QX200 Droplet Digital PCR System (Bio-Rad). Droplets were then read on the QX200 Droplet Reader (Bio-Rad) and analyzed using QuantaSoft software version 1.7.4 with a user-defined threshold. Thresholds were determined manually for each experiment according to negative controls that included twelve healthy samples. Droplet positivity was determined by fluorescence intensity; only droplets above a minimum amplitude threshold were counted as positive. The copy number of cfDNA in samples was reported as copies/µL DNA and then converted into copies/µg DNA (a sketch of this quantification and conversion follows this passage).
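QuantaSoft performs the droplet-to-concentration arithmetic internally, but it is compact enough to sketch. The Python snippet below illustrates the standard Poisson correction and the copies/µL-to-copies/µg conversion; the nominal droplet volume (~0.85 nL for the QX200), the template volume, and the example counts are illustrative assumptions, not values reported by this study.

```python
import math

DROPLET_VOL_NL = 0.85  # nominal QX200 droplet volume in nanolitres (assumed)

def ddpcr_copies_per_ul(n_positive: int, n_total: int,
                        reaction_vol_ul: float = 20.0,
                        template_vol_ul: float = 5.0) -> float:
    """Poisson-corrected target concentration in the template (copies/µL).

    A droplet may hold 0, 1, or more copies, so the fraction of negative
    droplets estimates exp(-lambda), where lambda is mean copies/droplet.
    """
    n_negative = n_total - n_positive
    if n_negative == 0:
        raise ValueError("saturated reaction: all droplets positive")
    lam = -math.log(n_negative / n_total)          # mean copies per droplet
    copies_per_ul_reaction = lam / DROPLET_VOL_NL * 1000  # 1 µL = 1000 nL
    # scale from the 20 µL reaction back to the template volume loaded
    return copies_per_ul_reaction * reaction_vol_ul / template_vol_ul

def copies_per_ug(copies_per_ul_dna: float, dna_ng_per_ul: float) -> float:
    """Convert copies/µL of extract to copies/µg of cfDNA (1 µg = 1000 ng)."""
    return copies_per_ul_dna / (dna_ng_per_ul / 1000.0)

# hypothetical droplet counts for illustration: ~47.3 copies/µL of template
print(round(ddpcr_copies_per_ul(n_positive=150, n_total=15000), 2))
```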
DNA Library Construction, Sequencing, and Bioinformatic Analysis

mNGS was used to detect pathogenic microorganisms. More than 3 mL of peripheral blood or more than 1.2 mL of cerebrospinal fluid was collected, loaded into collection tubes, and then stored at room temperature. Plasma was isolated for nucleic acid extraction. DNA libraries were constructed through DNA fragmentation, end-repair, adapter ligation and PCR amplification. The Agilent 2100 (Agilent Technologies, Palo Alto, California, USA) was used for quality control of the DNA libraries. Qualified libraries were pooled, and DNA nanoballs were made and sequenced on the BGISEQ-50/MGISEQ-2000 platform (BGI, Shenzhen, Guangdong, China). 27 For bioinformatic analysis, human host sequences were computationally subtracted by mapping to the human reference genome (hg19) with the Burrows-Wheeler Aligner, 28 after which low-quality reads were removed to yield high-quality sequencing data. The remaining reads were classified by simultaneous alignment to a pathogen metagenomics database comprising bacteria, fungi, viruses, and parasites. The classification reference databases were downloaded from NCBI (ftp://ftp.ncbi.nlm.nih.gov/genomes/). RefSeq contains 4945 whole-genome sequences of viral taxa associated with human diseases.

Criteria for Positive Detection of HHV-6B by mNGS

Blood from healthy volunteers was tested in the same batch and used as a negative control. The background microbiology database is an internal database containing microorganisms that are present in more than 50% of the samples seen in our laboratories. The criteria for positive microbial detection used in this study were as previously described. 29 HHV-6 detection was considered positive if its read number was among the top 10 and the fold change was ≥ 10.
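Read as code, the stated call rule is a two-condition filter. The sketch below is one plausible reading of it, assuming the fold change is computed against the matched negative control or background database (the text does not spell out the denominator); function and variable names are mine, not the study's.

```python
def hhv6b_mngs_positive(sample_reads: dict, background_reads: dict,
                        target: str = "HHV-6B",
                        top_n: int = 10, min_fold_change: float = 10.0) -> bool:
    """Call the target positive when its read count ranks in the top 10 of
    detected taxa AND its fold change over background is >= 10.

    `sample_reads` / `background_reads` map taxon name -> read count.
    """
    if target not in sample_reads:
        return False
    ranked = sorted(sample_reads, key=sample_reads.get, reverse=True)
    in_top_n = target in ranked[:top_n]
    bg = background_reads.get(target, 0)
    fold_change = sample_reads[target] / bg if bg else float("inf")
    return in_top_n and fold_change >= min_fold_change

# hypothetical read counts for illustration
sample = {"HHV-6B": 420, "Escherichia coli": 950, "Torque teno virus": 60}
control = {"HHV-6B": 3}
print(hhv6b_mngs_positive(sample, control))  # True: top-10 rank, 140-fold change
```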
Statistical Analysis

Statistical analysis was performed using GraphPad Prism 8.0.2 and IBM SPSS Statistics for Windows version 26.0, and Adobe Illustrator CS6 was employed for figure editing. Pearson's chi-square test and the kappa concordance test were used for statistical analysis, with P values < 0.05 considered statistically significant.

Evaluation of ddPCR Assay Performance

HHV-6B plasmid standards were used to evaluate the accuracy and the limit of detection (LOD) of ddPCR. The designed HHV-6B-specific probe and primers showed high specificity in ddPCR (Supplemental Figure S1). HHV-6B plasmid standards were prepared by fivefold serial dilution at concentrations ranging from 10,000 copies/µL to 3.2 copies/µL, and each concentration was measured in triplicate. As expected, the HHV-6B copy numbers measured by ddPCR agreed well with the expected concentrations. The results showed that ddPCR had good linearity over the detection range (R² = 0.998, P < 0.001), indicating that ddPCR using the probe-primer set for HHV-6B could accurately quantify HHV-6B copies (Figure 1A). The LOD of ddPCR was defined as the lowest concentration that could be reliably detected, i.e., at which 95% of replicates gave positive results. A twofold serial dilution was used to generate the following concentrations: 80, 40, 20, 10, 5, 2.5, 1.25, and 0.625 copies/µL, and each standard sample was tested 3 times. From 80 copies/µL down to 2.5 copies/µL, results were positive in 100% of the samples. The concentration of 1.25 copies/µL gave negative results in 60% of samples, and the concentration of 0.625 copies/µL gave negative results in 100% of samples (Figure 1B). The LOD of the ddPCR was thus five copies per reaction.

Characteristics of Patients

Twenty-seven patients with suspected HHV-6B infection who had both mNGS and ddPCR testing performed were analyzed retrospectively. The detailed clinical characteristics of the patients are shown in Table 2. The median age was 43 years (range: 20-67 years), and 55.56% (15/27) of patients were male. It is worth noting that 48.15% (13/27) exhibited neurological symptoms, which indicates they should be carefully monitored for HHV-6B encephalitis. A total of 19 patients received CAR T-cell therapy, of whom 9 received HSCT followed by CAR T-cell therapy, and 8 patients received HSCT (Tables S1-S2). Overall, a total of 27 time-paired samples were comparable. The correlation between ddPCR and mNGS detection of HHV-6B is shown in Figure 2A and Figure 2B. In addition, 12 time-paired samples were double positive, and 7 time-paired samples were double negative. Therefore, the agreement rate was 70.4%, and the kappa statistic showed moderate agreement (kappa = 0.438, P = 0.006).
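These agreement figures can be reproduced from the 2x2 concordance table implied above (12 double positive, 7 double negative, and, per the abstract, 20 ddPCR-positive versus 12 mNGS-positive, leaving 8 ddPCR-only positives). A minimal Python check of the observed agreement and Cohen's kappa:

```python
def cohens_kappa(both_pos: int, only_a_pos: int, only_b_pos: int, both_neg: int):
    """Observed agreement and Cohen's kappa from a 2x2 concordance table."""
    n = both_pos + only_a_pos + only_b_pos + both_neg
    p_obs = (both_pos + both_neg) / n
    a_pos = both_pos + only_a_pos          # positives by method A (ddPCR)
    b_pos = both_pos + only_b_pos          # positives by method B (mNGS)
    p_exp = (a_pos * b_pos + (n - a_pos) * (n - b_pos)) / n ** 2
    return p_obs, (p_obs - p_exp) / (1 - p_exp)

# 12 double positive, 8 ddPCR-only positive, 0 mNGS-only positive, 7 double negative
agreement, kappa = cohens_kappa(12, 8, 0, 7)
print(f"agreement = {agreement:.1%}, kappa = {kappa:.3f}")  # 70.4%, 0.438
```

Running this reproduces the reported 70.4% agreement rate and kappa of 0.438.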
Time of Onset and Levels of HHV-6B After CAR T-Cell Therapy

We further analyzed 10 patients who were continuously monitored by ddPCR for HHV-6B infection after CAR T-cell therapy. The clinical characteristics of the patients are described in Table 3. The first HHV-6B infection event was detected at a median of 14 days after CAR T-cell infusion (range, 8 to 19 days). CRS events were detected within 7 days (range, 1 to 7 days). CRS events occurred earlier than HHV-6B infection after CAR T-cell infusion; three patients developed HHV-6B infection after the end of CRS, and seven developed HHV-6B infection during the presence of CRS (Figure 3A). After HHV-6B infection was detected, all patients received antiviral therapy. Subsequently, HHV-6B was serially monitored by ddPCR to assess the efficacy of antiviral therapy. Longitudinal analyses demonstrated the rapid clearance of HHV-6B cfDNA from responding patients (Figure 3B).

One patient's course is shown in Figure 4A. On day 8, she was delirious and agitated. At that time, the IL-6 level was 23.23 pg/mL (reference range: < 7 pg/mL), significantly lower than at the previous time point. After excluding other causes of her neurological symptoms, HHV-6B reactivation was suspected. At this time point, the HHV-6B level was determined to be 42,968.75 copies/µg by ddPCR. The patient then received antiviral therapy with foscarnet sodium and methylprednisolone pulse therapy. A decrease in the number of HHV-6B copies was observed, and her neurological symptoms improved. However, on day 13 after CAR T treatment, the patient again presented with neurological symptoms, such as disturbances in consciousness and epilepsy. Given that the HHV-6B copy number had decreased to 24.44, the HHV-6B infection was considered to be under control. Since she was experiencing neurological symptoms again, we considered the possibility that the disturbances in consciousness and epilepsy were caused by ICANS after CAR T-cell infusion. The IL-6 level was 125.4 pg/mL at that time, significantly higher on day 13 than on day 8.

Patient 10 was a 50-year-old female diagnosed with refractory DLBCL who received CAR20/22 T-cell cocktail therapy (day 0) at our center. After CAR T-cell infusion, she had grade 3 CRS with elevated IL-6 levels and without ICANS. The IL-6 levels gradually returned to baseline values 3 weeks after CAR T therapy (Figure 4B). However, on day 15 after CAR T-cell infusion, she experienced disorientation regarding time and location. The IL-6 level at this time was 10.65 pg/mL. The IL-6 level was monitored continuously for one week, and the levels were mostly less than 10 pg/mL. Therefore, the possibility of ICANS after CAR T-cell infusion was excluded. After excluding other factors, the patient's neurological symptoms were suspected to be due to HHV-6B activation after CAR T-cell treatment. Blood and cerebrospinal fluid tests were positive for HHV-6B at the same time point, with HHV-6B levels of 48,582 copies/µg and 127,928 copies/µg, respectively. The HHV-6B copy number was significantly reduced after one month of antiviral foscarnet therapy, and the patient's main symptoms completely disappeared. This result further confirmed that HHV-6B activation causes viral encephalitis, which triggers neurological symptoms in patients receiving CAR T-cell treatment.

Discussion

HSCT and CAR T-cell immunotherapy bring unprecedented therapeutic effects in patients with refractory/relapsed hematologic malignancies. However, HSCT and CAR T recipients are highly susceptible to infections due to the immunodeficiency caused by prior chemotherapy, prior intensive therapy, hypogammaglobulinemia, and cytopenia. The most common infectious events are viral, bacterial and invasive fungal infections. Viral infections are particularly common late after CAR T-cell therapy. 13,30,31 HHV-6B infection is a common viral infection after HSCT. 7 However, evidence regarding HHV-6 infection in CAR T recipients is limited to case reports and lacks sufficient clinical data. 10,32 This study is the first to investigate HHV-6B infection by detecting cfDNA from patient samples using ddPCR in patients undergoing CAR T-cell therapy or HSCT.

HHV-6 infection can cause serious complications after HSCT. These patients are at high risk of developing reactivation within the first 4 weeks after cell transfer and subsequently developing life-threatening illnesses of the central nervous system (CNS) and/or bone marrow, two well-known sites of HHV-6 latency. 33 Previous analysis has shown that the mortality rate attributable to HHV-6 encephalitis is high, and many survivors exhibit cognitive sequelae. 34,35 HHV-6 reactivation has been found to be associated with poor outcomes following allo-HSCT. 36,37 The possibility of treating the infection with antiviral drugs active against HHV-6 must be considered.

In most patients, antiviral treatment steadily decreased the HHV-6B copy number. However, patients 6 and 9 showed a significant re-elevation of the HHV-6B copy number. Since patient 6 was discharged on day 24 of CAR T treatment in stable condition, with no neurological symptoms and no fever, antiviral medication was discontinued. HHV-6B was rechecked on day 34 after CAR T, and the copy number was again elevated; with re-administration of antiviral drugs, the HHV-6B copy number gradually decreased to 0. Patient 9 was initially treated with foscarnet for 12 days, and the HHV-6B copy number was reduced to 0. Curiously, the HHV-6B copy number then increased again during antiviral treatment. Considering the possibility of foscarnet resistance, foscarnet was discontinued and replaced by ganciclovir, and the patient's HHV-6B copy number decreased again. The antiviral compounds ganciclovir, foscarnet and cidofovir are effective against active HHV-6 infection, 38,39 but the indications for treatment and the conditions of use have not yet been officially approved. 9 More research is needed on therapy for HHV-6. Consequently, diagnostic procedures for monitoring infection should be implemented after initial diagnosis and therapy, if initiated. This process includes the serial quantification of virus replication and the detection of putative resistance to antivirals in case of therapeutic failure.
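The courses of patients 6 and 9 suggest a simple monitoring heuristic: flag any marked rebound from the post-treatment nadir for reassessment, i.e., resumed therapy or a drug switch if resistance is suspected. A toy Python sketch, with an illustrative 10-fold threshold that is not a validated clinical cut-off:

```python
def flag_rebound(viral_loads: list, rise_factor: float = 10.0) -> bool:
    """Flag possible therapeutic failure / antiviral resistance when the
    HHV-6B load rises at least `rise_factor`-fold above the post-treatment
    nadir, or is re-detected after clearance, while the patient is on therapy.
    """
    nadir = viral_loads[0]
    for load in viral_loads[1:]:
        if load < nadir:
            nadir = load
        elif nadir > 0 and load >= rise_factor * nadir:
            return True
        elif nadir == 0 and load > 0:
            return True  # re-detection after clearance
    return False

# hypothetical copies/µg series resembling patient 9's course
print(flag_rebound([42000.0, 5000.0, 0.0, 800.0]))  # True -> reassess therapy
```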
The development of new technologies, such as ddPCR and mNGS, has greatly improved the sensitivity, specificity and precision of the detection of rare sequences. 40 Beyond increased detection sensitivity and specificity, the ddPCR and mNGS strategies have their own advantages and disadvantages in the analysis of cfDNA. mNGS can identify new genetic or epigenetic modifications and offers high multiplexing capability, but it is time-consuming and requires powerful informatics support. In contrast, ddPCR experiments are easier to set up, are faster, offer higher sensitivity, and do not require complex informatics support for analysis. However, this approach requires prior knowledge of the genetic or epigenetic changes to be detected and has limited multiplexing capability. Immunocompromised individuals who develop fevers are routinely tested for bacterial, fungal, and viral infections. mNGS can detect a wider range of pathogens and is recommended for initial diagnosis, to help screen patients for infectious agents. However, since mNGS is expensive and time-consuming, once the number of HHV-6B sequences has been screened, subsequent dynamic monitoring by ddPCR is more appropriate because it is cheaper, faster, and can detect low concentrations of HHV-6B cfDNA. Using different methods for different periods takes advantage of the respective strengths of mNGS and ddPCR, providing the best diagnosis while saving money for patients. Many strategies have therefore combined the use of NGS and ddPCR for liquid biopsy analysis. 26,41,42

In this study, we found that the results of ddPCR detection of HHV-6B cfDNA from body fluid samples showed moderate agreement with the results of mNGS, which is consistent with previous studies examining the correlation between NGS and ddPCR. 43 Moreover, ddPCR detected HHV-6B infections in 8 additional samples that would have been missed using mNGS methods alone, demonstrating that the ddPCR test detected more cases than mNGS. ddPCR showed higher sensitivity in the measurement of nucleic acids and is suitable for HHV-6B detection. Another major strength is that a wide range of human samples with low DNA concentrations, including blood, plasma, serum, cerebrospinal fluid, urine, saliva and bronchoalveolar lavage fluid, can be used in the ddPCR assay. 20 Previous studies have clearly demonstrated the enormous potential of ddPCR to detect minute amounts of ctDNA in plasma: targeting known tumor mutations in plasma using the ddPCR assay in early-stage breast cancer showed a sensitivity of 93.3%. 44 It can also be applied to tumor biopsy samples, even formalin-fixed paraffin-embedded (FFPE) tissues. 45

Patients receiving CAR T-cell therapy often exhibit a state of immunosuppression, and there is significant overlap between the symptoms associated with neurotoxicity and those associated with HHV-6 encephalitis, so careful differential diagnosis is warranted. This study described two cases of HHV-6B reactivation after CAR T-cell therapy; one patient exhibited HHV-6 encephalitis and the other did not. We found that continuous monitoring of the HHV-6B cfDNA viral load by ddPCR was very helpful in distinguishing HHV-6B viral encephalitis from ICANS. In addition, continuous monitoring of HHV-6B cfDNA viral load by ddPCR could guide the use of antiviral drugs to achieve a good curative effect.
However, a sample of only two cases for distinguishing HHV-6B encephalitis from ICANS is a potential limitation, and the sample size still needs to be expanded to verify this result. Further studies are needed to describe the incidence and presentation of HHV-6 encephalitis in this patient population, as well as optimal treatment strategies and methods for assessing response to therapy.

Conclusion

Droplet digital PCR is a powerful complement to mNGS methods in clinical application due to its higher rate of detection of HHV-6B positivity. Longitudinal testing and quantification of HHV-6B DNA copy number in patients receiving CAR T-cell therapy and HSCT may help distinguish HHV-6B encephalitis from ICANS and guide drug therapy. However, this should be verified in a larger sample.

Institutional Review Board Statement

The studies involving human participants were reviewed and approved by the Ethics Committee of Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology (TJ-IRB20160310).

Informed Consent Statement

The patients provided their written informed consent to participate in this study. This study was conducted in accordance with the Declaration of Helsinki.
Diagnosis of Latent Tuberculosis Infection

CONTENTS
Introduction 7.2
Purpose 7.2
Policy 7.2
Forms 7.2
Tuberculosis Classification System 7.3
High-Risk Groups 7.4
Diagnosis of Latent Tuberculosis Infection 7.7
Mantoux tuberculin skin testing 7.8
Candidates for Mantoux tuberculin skin testing 7.9
Administration of the tuberculin skin test 7.12
Measurement of the tuberculin skin test 7.13
Interpretation of the tuberculin skin test 7.14
Interferon gamma release assays 7.16
Human immunodeficiency virus screening 7.17
Follow-up activities 7.17
Chest radiography 7.18
Chest X-ray interpretation and treatment recommendations 7.19
Work or school clearance 7.21

Introduction

Purpose

Use this section to understand and follow national and Alaska TB Program guidelines to:
• classify patients with latent TB infection (LTBI)
• diagnose LTBI

One of the recommended strategies for achieving the goal of reducing TB morbidity and mortality is the identification of persons with LTBI at risk for progression to TB disease, and treatment of those persons with an effective drug regimen. 1 Evaluation and follow-up of contacts are covered in more depth in the Contact Investigation section 11.1. For information on treatment, refer to the Treatment of Latent Tuberculosis Infection section 8.1. For detailed information on the diagnosis and treatment of latent tuberculosis infection in children, refer to the Diagnosis and Treatment of Latent Tuberculosis Infection (LTBI) and Tuberculosis Disease in Children (under 16 years of age) section 9.1.

Policy

In Alaska, TB screening should be provided for:
• individuals with risk factors for LTBI (see Table 2, 7.6)
• persons at greater risk of progressing to active TB once infected with M. tuberculosis (i.e., those infected with HIV) 2
• persons who are contacts of an active TB case, as described in the Contact Investigation section 11.1

For roles and responsibilities, refer to the "Roles, Responsibilities, and Contact Information" topic in the Introduction 1.11.

Forms

All required and recommended forms are available in the Forms section of this manual 18.1.

Tuberculosis Classification System

The system for classifying tuberculosis (TB) is based on how the infection and disease develop in the body. Use this classification system to help track the status of TB in your patients and to allow comparison with other reporting areas.

EVALUATION OF THOSE WITH LATENT TUBERCULOSIS INFECTION (LTBI) BASED ON RISK OF INFECTION, RISK OF PROGRESSION TO TUBERCULOSIS, AND BENEFIT OF THERAPY

Diagnosis of Latent Tuberculosis Infection

The diagnosis of latent tuberculosis infection (LTBI) has traditionally been based upon results of tuberculin skin testing (TST). However, whole-blood interferon gamma release assays (IGRAs) are now increasingly available and may be the preferred option for detecting LTBI in many situations.
In December 2016, new official American Thoracic Society / Infectious Diseases Society of America / Centers for Disease Control and Prevention clinical practice guidelines, Diagnosis of Tuberculosis in Adults and Children, were published. These recommendations suggest performing IGRAs rather than TSTs in all individuals 5 years of age and older who are likely to be infected with Mtb, have a low or intermediate risk of disease progression, and in whom it has been decided that testing for LTBI is warranted. The preferential use of IGRAs is most strongly recommended for those who have a history of BCG vaccination or who are unlikely to return to have their TST read. Many experts also approve the use of IGRAs in children over the age of 2 years. A TST remains an acceptable alternative for TB testing, especially in situations where IGRAs are not available, too costly, or too burdensome. IGRAs are not provided or funded by the Alaska TB Program. Reference laboratories currently provide IGRA testing in many communities statewide.

Mantoux Tuberculin Skin Testing

The Mantoux method of tuberculin skin testing (TST) is used to detect infection with Mycobacterium tuberculosis. In general, it takes 2 to 10 weeks after infection for a person to develop a delayed-type immune response to tuberculin measurable with the Mantoux TST. 9 During the test, tuberculin is injected into the skin. The immune system of most persons with tuberculosis (TB) infection will recognize the tuberculin, causing a reaction in the skin. Repeated TSTs do not produce hypersensitivity. The size of the measured induration (a hard, dense, raised formation) and the patient's individual risk factors should determine whether TB infection is diagnosed. 10 Based on the sensitivity and specificity of the purified protein derivative (PPD) TST and the prevalence of TB in different groups, three cut-points are used by CDC for defining a positive tuberculin reaction:
• greater than or equal to 5 mm of induration
• greater than or equal to 10 mm of induration
• greater than or equal to 15 mm of induration 11

For more information on cut-points for the TST, see the "Interpretation of the Tuberculin Skin Test" topic in this section.

Candidates for Mantoux Tuberculin Skin Testing

The Mantoux TST can be administered to all persons, including pregnant women, 12 persons who have previously been vaccinated with bacille Calmette-Guérin (BCG), 13 and human immunodeficiency virus (HIV)-infected persons. However, persons with a documented prior positive TST do not need another TST, and the Mantoux TST should not be administered until four weeks after vaccination with live-virus vaccines. If the person being tested is a contact, follow the procedures outlined in the Contact Investigation section 11.1.

Pregnancy

Tuberculin skin testing is entirely safe and reliable for pregnant women, and pregnant women at high risk for TB infection or disease should be tested. Screen pregnant women for TB infection if they have any of the following conditions:

Bacille Calmette-Guérin Vaccine

BCG vaccines are live vaccines derived from a strain of Mycobacterium bovis. Because their effectiveness in preventing infectious forms of TB has never been demonstrated in the United States, they are not recommended as a TB control strategy in the United States, except under rare circumstances. They are, however, still commonly used in other countries. A history of BCG vaccination is not a contraindication for tuberculin skin testing, nor does it influence the indications for a TST.
Administer and measure TSTs in BCG-vaccinated persons in the same manner as in those with no previous BCG vaccination. Diagnosis and treatment of LTBI should be considered for BCG-vaccinated persons with a TST reaction of 10 mm induration or greater, especially if they are:
• continually exposed to populations with a high prevalence of TB (e.g., some healthcare workers, employees and volunteers at homeless shelters, and workers at drug treatment centers);
• born in, or have lived in, a country with a high prevalence of TB; or
• exposed to someone with infectious TB, particularly if that person has transmitted TB to others. 14

Evaluate these patients for symptoms of TB. If a patient has symptoms of TB disease, obtain chest radiography and collect sputum specimens. One advantage of IGRA testing is that persons with BCG vaccination will not have a positive IGRA test because of BCG vaccination. For this reason, IGRA testing is preferred over the TST for this population if feasible.

Anergy Testing

Anergy testing is not routinely recommended in conjunction with TST for HIV-infected persons in the U.S. 15 Anergy testing is a diagnostic procedure used to obtain information about the competence of the cellular immune system. Conditions that may cause an impaired cellular immune system include HIV infection, severe or febrile illness, measles or other viral infections, Hodgkin's disease, sarcoidosis, live virus vaccination, and corticosteroid or immunosuppressive therapy. Persons with such conditions may have suppressed reactions to a TST even if infected with TB. However, there are no simple skin testing protocols that can reliably identify persons as either anergic or nonanergic and that have been proven feasible for application in public health TB screening programs. Factors limiting the usefulness of anergy skin testing include the following:

Documented Prior Positive Tuberculin Skin Test

Persons who have tested positive in the past and can provide documentation of their status should not have another TST. Instead, they should complete a TB symptom assessment questionnaire to identify any symptoms of TB disease. 16 Persons who are symptomatic should receive a chest radiograph and an evaluation including sputum collection. Routine chest radiographs are NOT indicated. Use the Tuberculosis Screening Questionnaire / Chest X-ray Interpretation Request to document history and TB screening assessment. Forms are available in the Forms section of this manual 18.1. See the "Work or school clearance" topic in this section for additional information on clearing individuals with prior positive TSTs for work or school 7.21.

Live-Virus Vaccines

The Mantoux TST can be administered safely in conjunction with all vaccines. However, the measles (MMR) vaccine, and possibly the mumps, rubella, varicella, and live attenuated influenza vaccines, may transiently suppress the response to PPD. 17 Therefore, if a vaccine containing live virus (for example, measles, MMR, varicella, or live attenuated influenza vaccine) has already been given, the TST should be deferred until (or repeated) at least four weeks after the vaccine was administered. When giving the TST and a live virus vaccine, one of the following three sequences should be used:

Administration of the Tuberculin Skin Test

The TST should be placed by a healthcare worker who has received appropriate training and is following written protocols.
How to Administer a Tuberculin Skin Test

If the patient's written consent is required, obtain it per health department requirements.

1. Inject air into the vial air space (not into the solution). Injecting air into the air space in the vial prevents creation of negative pressure within the vial, allowing the antigen to be withdrawn easily. Injecting air into the solution creates bubbles and may interfere with withdrawing the correct amount of antigen. 19
2. The injection should be placed on the palm-side-up surface of the forearm, about two to four inches below the elbow. Your local institutional policy may specify the right or left forearm for the skin test. The area selected should be free of any barriers to placing and reading the skin test, such as muscle margins, heavy hair, veins, sores, tattoos, or scars.
3. After choosing the injection site, clean the area with an alcohol swab by circling from the center of the site outward. Allow the site to dry completely before the injection.
4. Using a disposable tuberculin safety needle and syringe, inject 0.1 ml of PPD tuberculin containing 5 tuberculin units (TU) intradermally, with the needle bevel facing upward. Because some of the tuberculin solution can adhere to the inside of the plastic syringe, the skin test should be given as soon as possible after the syringe is filled. Filled syringes should be kept cool and protected from light. If they are not used within an hour of being drawn up, they should be discarded.
5. The injection should produce a discrete, pale elevation of the skin (a wheal) 6 to 10 mm in diameter. Note: if a 6- to 10-mm wheal is not produced, repeat the test on the opposite arm, or on the same arm 2 inches from the original site.
6. Record the date and time of TST administration, location of the injection site, dose, name of the person who administered the test, name and manufacturer of the tuberculin product used, lot number, expiration date, and reason for testing, according to clinic or agency protocol. 20

For questions or guidance regarding the interpretation of TSTs, call the Alaska TB Program at 907-269-8000.

How to Interpret a Tuberculin Skin Test

Use the table below to determine when a reaction is positive. When interpreting TST results, be aware of the following.

Skin test conversions: For persons previously skin tested, an increase in induration of 10 mm or more within a two-year period is classified as a conversion to positive.

False-negative reactions may be due to the following (see "Anergy Testing" under "Candidates for Mantoux Tuberculin Skin Testing" in this section 7.10):
• Recent TB infection (within the past 10 weeks)
• Very young age (less than 6 months of age, because the immune system is not fully developed)
• Overwhelming TB disease
• Vaccination with live viruses (e.g., measles, mumps, rubella, varicella, oral polio, or yellow fever)
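Expressed as code, the interpretation logic pairs the measured induration with the reader's risk-group cut-point and applies the conversion rule above. A minimal Python sketch follows; the risk-group labels and example groupings stand in for the manual's interpretation table (they follow the standard CDC groupings) and are illustrative, not Alaska TB Program policy.

```python
def tst_positive(induration_mm: float, risk_group: str) -> bool:
    """Classify a Mantoux TST using the three CDC cut-points named in this
    section. Risk-group labels are illustrative stand-ins for the manual's
    interpretation table.
    """
    cutoffs = {
        # >=5 mm: HIV infection, recent TB contacts, fibrotic changes on
        # chest X-ray, organ transplant / otherwise immunosuppressed
        "highest_risk": 5,
        # >=10 mm: recent arrivals from high-prevalence countries, injection
        # drug users, high-risk congregate settings, children < 4 years, etc.
        "moderate_risk": 10,
        # >=15 mm: persons with no known risk factors
        "no_known_risk": 15,
    }
    return induration_mm >= cutoffs[risk_group]

def tst_conversion(previous_mm: float, current_mm: float,
                   years_between: float) -> bool:
    """An increase of >= 10 mm within a two-year period counts as a
    conversion to positive (per this section)."""
    return years_between <= 2 and (current_mm - previous_mm) >= 10

print(tst_positive(8, "highest_risk"))   # True: 8 mm meets the 5 mm cut-point
print(tst_conversion(0, 12, 1.5))        # True: 12 mm increase within 2 years
```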
Interferon Gamma Release Assays

A health care provider will draw a patient's blood and send it to a laboratory for analysis and results.
• Positive TB blood test: the person has been infected with TB bacteria. Additional tests are needed to determine whether the person has latent TB infection or TB disease.
• Negative TB blood test: the person's blood did not react to the test, and latent TB infection or TB disease is not likely.

TB blood tests are the preferred TB test for:
• People who have received the TB vaccine bacille Calmette-Guérin (BCG).

The advantages of IGRA tests, compared with the TST, are that results can be obtained after a single patient visit and that, because the IGRA is a blood test performed in a qualified laboratory, the variability associated with skin test reading can be eliminated. 24 In addition, IGRA tests are unaffected by past BCG vaccination and may eliminate the unnecessary treatment of patients with BCG-related false-positive results. 25 However, the IGRA test also has practical limitations, including the need to draw blood and to ensure its receipt in a qualified laboratory in time for testing. As with the TST, additional tests, such as chest radiography and bacteriologic examination, are required to confirm or rule out active TB disease. 26 Persons with a positive IGRA or TST result, regardless of the presence or absence of symptoms and signs, must be evaluated for TB disease before LTBI is diagnosed. At minimum, a medical examination should be performed and a chest radiograph should be taken to look for abnormalities consistent with TB disease. 27 Negative IGRA results should not be used alone to rule out TB infection.

Follow-Up Activities

After testing, complete the following tasks:
• If the person has signs or symptoms of TB, evaluate for TB disease as described in the "Diagnosis of Tuberculosis Disease" topic in the Diagnosis of Tuberculosis Disease section (5.11). Refer to Table 4: When to Suspect Pulmonary Tuberculosis in Adults.
• If the person is a contact, follow the procedures for testing and evaluation in the Contact Investigation section 11.1.
• If the person is a participant in two-step screening, see the topic titled "Two-Step Tuberculin Skin Testing" in the Infection Control section 17.11.
• If the TST result is newly positive, a chest radiograph should be obtained for the patient, as specified in the "Chest Radiography" topic in this section 7.18.

Chest Radiography

All individuals being considered for LTBI treatment should undergo a chest radiograph to rule out pulmonary TB disease. Asymptomatic patients whose most recent chest radiograph was taken more than 2 to 3 months prior to starting treatment should have a repeat chest X-ray. The Alaska TB Program may be able to provide partial reimbursement for patients in need of a chest X-ray but without insurance or financial resources to cover the cost. Refer to Table 5 to determine when to obtain a chest radiograph and what follow-up is required for chest radiograph results. A posterior-anterior radiograph of the chest is the standard view used for the detection and description of chest abnormalities in adults. In some instances, other views (e.g., lateral, lordotic) or additional studies (e.g., computed tomography [CT] scans) may be necessary. PHNs or health care providers should request approval from the Alaska TB Program for partial reimbursement for a single-view chest film (CPT 71010) for patients otherwise unable to pay before the X-ray is done. Call the Alaska TB Program at 907-269-8000 for an authorization number. For persons recently exposed to TB, follow the procedures for testing and evaluation in the Contact Investigation section 11.1.

Chest X-ray Interpretation and Treatment Recommendations

Treatment for LTBI should be prescribed by the patient's health care provider. For patients without providers or the financial resources to obtain care, the Alaska TB Program recommendations from the Clinical Consultation Summary for LTBI treatment may be used as a prescription for LTBI treatment.
Such patients must be adequately evaluated and have a chest X-ray. The results of the TST or IGRA, a review of symptoms, and the medical history and risk factors should be recorded on the Tuberculosis Screening Questionnaire / Chest X-ray Interpretation Request (18.1) and must be submitted to the Alaska TB Program with the chest radiograph for medical review and recommendations.

Work or School Clearance

Persons with newly positive TSTs or IGRAs may be cleared for work or school if they are low-risk by history, asymptomatic, and have a negative chest radiograph. Persons with prior positive TSTs may be cleared if they are asymptomatic. Routine chest radiographs are NOT indicated unless persons with prior positive TSTs or IGRAs become symptomatic for TB. Use the Tuberculosis Screening Questionnaire / Chest X-ray Interpretation Request (18.1) to complete history and symptom screening. If the history, symptom screening and chest X-ray are negative, the patient may be cleared for work or school by the PHN or provider. Complete the Tuberculosis Screening and Clearance Card to document clearance. Tuberculosis Screening and Clearance cards can be ordered from the Alaska TB Program by calling 907-269-8000.
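The clearance rules above reduce to a short decision function. A Python sketch, mirroring only what this topic states; the argument names are mine, and the fall-through behaviour for persons with no positive test on record is an assumption:

```python
from typing import Optional

def may_clear_for_work_or_school(newly_positive: bool,
                                 prior_positive: bool,
                                 low_risk_history: bool,
                                 asymptomatic: bool,
                                 cxr_negative: Optional[bool]) -> bool:
    """Apply the clearance rules stated in this topic.

    cxr_negative may be None for prior positives, since a routine chest
    radiograph is not indicated unless symptoms develop.
    """
    if prior_positive:
        return asymptomatic
    if newly_positive:
        return low_risk_history and asymptomatic and bool(cxr_negative)
    return asymptomatic  # assumed: no positive test on record

# newly positive, low-risk, asymptomatic, negative chest X-ray -> cleared
print(may_clear_for_work_or_school(True, False, True, True, True))   # True
# prior positive and asymptomatic -> cleared without a routine chest X-ray
print(may_clear_for_work_or_school(False, True, False, True, None))  # True
```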
Behavioural and demographic correlates of undiagnosed HIV infection in a MSM sample recruited in 13 European cities

Background

Reducing the number of people with undiagnosed HIV infection is a major goal of HIV control and prevention efforts in Europe and elsewhere. We analysed data from a large multi-city European bio-behavioural survey conducted among men who have sex with men (MSM) for previously undiagnosed HIV infections, and aimed to characterise undiagnosed MSM who test less frequently than recommended.

Methods

Data on the sexual behaviours and social characteristics of MSM with undiagnosed HIV infection from Sialon II, a bio-behavioural cross-sectional survey conducted in 13 European cities in 2013/2014, were compared with data from HIV-negative MSM. Based on reported HIV-testing patterns, we distinguished two subgroups: MSM with a negative HIV test result within the 12 months prior to the study, i.e. undiagnosed incident infection, and HIV-positive MSM with unknown onset of infection. Bivariate and multivariate associations of explanatory variables were analysed. Distinct multivariate multi-level random-intercept models were estimated for the entire group and for both subgroups.

Results

Among 497 participants with HIV-reactive specimens, 234 (47.1%) were classified as previously diagnosed, 106 (21.3%) as incident, and 58 (11.7%) as unknown onset, based on self-reported status and testing history. MSM with incident HIV infection were twice as likely (odds ratio (OR) = 2.22, 95% confidence interval (95%CI): 1.17-4.21) to have used recreational substances during their last anal sex encounter and four times more likely (OR = 3.94, 95%CI: 2.14-7.27) not to have discussed their HIV status with the last anal sex partner(s). MSM with unknown onset of HIV infection were 3.6 times more likely (OR = 3.61, 95%CI: 1.74-7.50) to report testing for a sexually transmitted infection (STI) during the last 12 months.

Conclusions

Approximately one third of the study participants who are living with HIV were unaware of their infection. Almost two-thirds (65%) of those with undiagnosed HIV appeared to have acquired the infection recently, emphasising the need for more frequent testing. Men with the identified behavioural characteristics could be considered a primary target group for HIV pre-exposure prophylaxis (PrEP) to avoid HIV infection. The increased odds of those with unknown onset of HIV infection having had an STI test in the past year strongly suggest a lost opportunity to offer HIV testing.

Electronic supplementary material

The online version of this article (10.1186/s12879-018-3249-8) contains supplementary material, which is available to authorized users.

Background

In recent years a range of efforts and new initiatives have been implemented across Europe to increase human immunodeficiency virus (HIV) testing among key populations and to reduce the number of undiagnosed HIV infections and late diagnoses [1]. Despite some progress in terms of increased testing uptake, a recent report from the European Centre for Disease Prevention and Control (ECDC) estimates that the proportion of undiagnosed HIV among men who have sex with men in six European countries is 17% [2]. This falls short of the internationally agreed goal of diagnosing at least 90% of the people who are infected [3]. Furthermore, approximately one third of HIV diagnoses among MSM in the European Union (EU) are late (CD4 < 350 cells/μl at diagnosis).
reduced morbidity and mortality) and public health benefits (reduced transmission and reduction of health care costs) [4,5]. In addition to adverse health outcomes, late diagnoses and late initiation of antiretroviral therapy are associated with increased risks for transmitting HIV unknowingly to sexual partners. In countries with unrestricted access to antiretroviral treatment undiagnosed HIV infections are thought to be the main sources of new HIV infections. Early diagnosis of HIV and successful treatment are thus important for the successful management of the disease in individual patients as well as major tools supporting the implementation of the WHO strategy for anti-retroviral treatment as prevention [3,6,7]. The level of undiagnosed infections is driven by HIV incidence on the one hand and by the testing rate on the other. Factors increasing HIV incidence are likely to contribute to the increased level of undiagnosed infections, even among frequent testers. Additionally, barriers to testing and low testing uptake may cause accumulation of infections, including late stage infections [8,9]. Little is known about the characteristics and sexual behaviours of people with undiagnosed infections. Some information can be retrieved from HIV testing sites from people newly diagnosed with HIV [10][11][12], although the demographic data collected and analysed are quite limited from such clinical sites [13,14]. People presenting for testing self-select and therefore such samples may be biased. Another way to collect such information is from longitudinal cohort studies including HIV-uninfected people at risk for HIV. However, such studies are time consuming, costly, and rarely conducted in Europe. In addition, participants may not be representative of the population at risk in real world settings. Bio-behavioural studies, such as the multi-city Sialon II study, may therefore be better suited to systematically collect information on people who are unknowingly infected with HIV [15,16] with more scientific rigour and fewer biases. In general, the risk of HIV infection among MSM is associated with sexual risk practices such as the lack of condom use, number of partners with whom condomless anal intercourse is practised, drug use associated with sex, and attending gay sex venues where risky sexual behaviours are part of the sub-culture [17]. However, such risk practices are driven by a complex set of intertwined factors, ranging from the personal level factors (e.g. age, personal skills and self-efficacy, mental health) to interpersonal factors (partner dynamics, communication and negotiation on sex practices), to community and service provision-related factors (social and sexual norms, perceived homonegativity in communities; access to testing and other medical services) and structural factors (policies and legislation). The intersection of these factors is further shaped by high HIV prevalence in subgroups of MSM [18]. Barriers to HIV testing have also been extensively described in the literature: testing for HIV is more likely for individuals who perceive themselves at risk for HIV and who anticipate personal benefits from testing, while fears of consequences of receiving an HIV diagnosis hinders HIV testing. The latter has been shown to be associated with fear of discrimination and personal rejection [19,20]. Research has also shown that multiple social-cognitive factors (e.g. 
knowledge, attitudes, perceived behavioural control) play a role in explaining testing for HIV and sexually transmitted infections (STI) among MSM [20]. In addition to patient-level factors, a review demonstrated the influence of health-systems and structural factors on uptake of HIV testing [8]. The present study used data from a large multi-city European bio-behavioural survey conducted among MSM within the framework of the European Public Health Project Sialon II. The analysis has two objectives -1) to identify factors correlating with early undiagnosed infections among testers, which are most likely driven by HIV incidence among repeat testers; and 2) to characterise groups that are not adhering to testing recommendations in order to properly inform appropriate testing campaigns targeted towards them. Study design and procedures Sialon II was a multi-site bio-behavioural cross-sectional survey carried out in 13 European cities. The cities were: Brussels (Belgium), Sofia (Bulgaria), Hamburg (Germany), Verona (Italy), Vilnius (Lithuania), Warsaw (Poland), Lisbon (Portugal), Bucharest (Romania), Bratislava (Slovakia), Ljubljana (Slovenia), Barcelona (Spain), Stockholm (Sweden), and Brighton (UK). In 2013/2014, MSM were recruited to participate in the survey using time-location-sampling (TLS) in community-based settings in nine European cities, and using respondent-driven-sampling (RDS) in social networks of MSM in four European cities (Bucharest, Bratislava, Verona, Vilnius). In TLS cities participants were recruited during 2013, in RDS cities recruitment started in 2013 and finished in 2014. Recruitment methods, study procedures, questions asked as well as sample collection and testing have been described in detail elsewhere [21]. The study protocol was approved by ethical review committees in all participating countries and by the WHO Research Project Review Panel (WHO-RP2) and the WHO Research Ethics Review Committee (WHO-ERC) before the data collection phase. The bio-behavioural survey data generated from the Sialon II project provided the opportunity to combine data on testing history and self-reported HIV test result, and to link them with the laboratory determined HIV status to identify men with an undiagnosed HIV infection at the time of the survey implementation. In this analysis, sexual behaviours and social characteristics of these men with undiagnosed HIV infection are assessed and compared with their uninfected peers. Measures Participants filled in a short questionnaire and provided either an oral fluid (in TLS cities) or blood (in RDS cities) specimen for HIV antibody testing. Based on self-reported HIV status and the HIV testing result of the collected specimen, participants were classified as HIV-uninfected (nHIV), previously diagnosed with HIV infection (pHIV), and HIV-infected but as yet undiagnosed (uHIV). As far as the time of HIV diagnosis is concerned, three different patterns can be distinguished: 1) early diagnosis, when testing is predominantly triggered by symptoms of acute HIV infection and/or awareness of transmission risk; 2) intermediate diagnosis, when testing is triggered by health concerns not immediately related to acute HIV disease or transmission risk awareness; 3) late diagnosis, often triggered by symptoms or health complaints associated with compromised immune status. These three patterns may be associated with different demographic and behavioural characteristics (see below). 
Since data on HIV testing intentions were not collected, we used HIV testing history to distinguish between a group with likely high testing frequency and incident HIV infection (uHIVinc — a negative test result reported within 12 months before the study specimen was collected) and MSM who were tested longer ago or had never been tested, i.e. HIV infection of unknown onset (uHIVunk). The uHIVinc subgroup may represent pattern 1 and partially pattern 2 testers, while the uHIVunk subgroup may represent the complementary part of pattern 2 and pattern 3 testers. The following questionnaire items were used to determine if the case had been previously diagnosed: a question on whether and, if yes, when the last HIV antibody test was performed, and a question on the result of the last HIV antibody test. If these questions were not answered or the information was inconsistent, HIV status knowledge was classified as undetermined and respondents were excluded from the analysis. In addition, participants recruited in Sofia had to be excluded from the two test-recency subgroup analyses because the answers were invalid due to an incorrect translation of the respective question.
Lab testing of biological samples
In line with the TLS protocols, oral fluid (OF) specimens were collected and tested for HIV antibodies using Genscreen HIV 1/2 version 2, BIO-RAD. A total IgG antibody ELISA (Human IgG ELISA Kit 1 × 96, Quantitative; Immunology Consultants Laboratory) was used to check OF specimen suitability and for quality control. All HIV-reactive specimens were re-tested with Vironostika HIV Ag/Ab, bioMérieux. Specimens that tested positive with the first HIV ELISA but negative with the second were classified as negative. MSM who participated in the survey in cities where RDS was used as the recruitment method received pre-/post-test counselling during the enrolment and follow-up process. Blood samples were collected and serum extracted in line with the local standard procedures. Serum samples were tested with a 4th-generation HIV ELISA/CLIA screening test. A Western blot test was used to confirm positive cases. In case of a confirmed HIV-positive result, a referral procedure was put in place in line with the local standard procedures to ensure linkage to care and proper case management.
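The classification rule described above can be summarised as a small decision procedure. The sketch below is illustrative only: the argument names are hypothetical stand-ins for the actual Sialon II questionnaire items, and the edge cases follow the exclusion rules stated in the text (never-tested men are represented here by an infinite time since the last negative test).

```python
import math
from typing import Optional

def classify(lab_reactive: bool,
             self_report_positive: Optional[bool],
             months_since_last_neg_test: Optional[float]) -> str:
    """Sketch of the Sialon II status groups; 'indeterminate' covers the
    missing/inconsistent self-report cases described in the text."""
    if not lab_reactive:
        if self_report_positive:                # conflicts with the lab result
            return "indeterminate"
        return "nHIV"
    if self_report_positive is None:            # testing history unanswered
        return "indeterminate"
    if self_report_positive:
        return "pHIV"                           # previously diagnosed
    if months_since_last_neg_test is None:      # no usable last-test date
        return "indeterminate"
    if months_since_last_neg_test <= 12:
        return "uHIVinc"                        # likely incident infection
    return "uHIVunk"                            # tested long ago or never

print(classify(True, False, 6))           # -> uHIVinc
print(classify(True, False, math.inf))    # -> uHIVunk (never tested)
```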
Secondary variables Based on published literature on factors associated with HIV acquisition risk, undiagnosed HIV infection or infrequent HIV testing and late diagnosis among MSM (as mentioned in the introduction), associations of uHIV, uHIVinc and uHIVunk status were analysed with: -Demographic variables such as age (calculated using the self-reported year of birth), education level (secondary school or lower, high school or postsecondary or university/ higher), migration status (native: born & living in the study country; emigrant: born in the study country & living abroad; immigrant: born abroad & living in the study country; visitor: born & living abroad); -Behavioural variables such as number of sexual partners and number of partners with whom condomless anal intercourse (AI) had been practiced in the previous 6 months, frequency of visiting gay sex venues in the last 3 months, type and number of drugs used during last AI (categorised as alcohol; cannabis; sexual performance enhancing substances: erectile dysfunction medication and inhaled amyl nitrite; party drugs: cocaine, ecstasy, amphetamines; chemsex drugs: GHB, ketamine, mephedrone, crystal meth); -Type of partners for last AI (steady, non-steady, more than one), self-reported HIV serostatus disclosure to the last AI partner, sexual role during last AI (top, bottom, versatile), condom use during last AI; -HIV and STI testing in the previous 12 months, and "outness" about sexual orientation towards relatives, friends, and co-workers. The self-administered questionnaire filled-in by the study participants is available as Additional file 1. Statistical analysis We conducted analyses of bivariate and multivariate associations of explanatory variables with uHIV, and the two subgroups uHIVinc and uHIVunk, using nHIV for comparison. For the two subgroups, the comparison group was also determined by their last reported HIV test date, i.e. the comparison group for uHIVinc was tested negative within the previous 12 months, and the comparison group for uHIVunk was never tested or tested more than 12 months ago. A multivariate multi-level logistic random-intercept model (random effect of study site) was estimated to account for the hierarchical structure of the data [22]. The multi-level analysis was conducted to identify factors associated with each subgroup separately and with the combined group. Predictors associated with the outcome variable with a probability < 0.05 were considered significant. The dataset used for the analysis presented in this manuscript is available as Additional file 2. The Stata syntax of the analysis is available as Additional file 3. Study sample A detailed description of the study sample has been published in the study report [23]. At most study sites, approximately 400 men had been recruited as requested by the study protocol, with exception of Bucharest, where only 183 participants were enrolled. There were significant age differences between study sites. The proportion of study participants tested for HIV in the last 12 months before completing the study questionnaire among those not known to have been diagnosed with HIV ranged between 35.5% in Bratislava and 66.2% in Barcelona (see Table 1). Formative research conducted in preparation of the bio-behavioural survey established that HIV testing sites, including sites providing free and anonymous HIV testing and rapid testing existed in all study cities at the time when study recruitment occurred [24]. 
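The published analysis fitted multivariate multi-level logistic random-intercept models in Stata (Additional file 3). As a rough single-level analogue, the hedged sketch below shows how odds ratios and 95% confidence intervals of the kind reported in the abstract are derived from a fitted logistic model; the data are simulated, the variable names are hypothetical, and study-site dummy variables stand in for the random intercept of the published model.

```python
# Single-level analogue of the published multi-level model: logistic
# regression with city fixed effects instead of a random intercept.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 4000
df = pd.DataFrame({
    "drug_use_last_ai": rng.integers(0, 2, n),       # hypothetical names
    "no_status_disclosure": rng.integers(0, 2, n),
    "city": rng.integers(0, 13, n),
})
# Simulated true log-odds loosely echoing the reported ORs (2.22, 3.94).
logit = (-4 + np.log(2.22) * df.drug_use_last_ai
            + np.log(3.94) * df.no_status_disclosure)
df["uHIVinc"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(
    pd.get_dummies(df[["drug_use_last_ai", "no_status_disclosure", "city"]],
                   columns=["city"], drop_first=True).astype(float))
fit = sm.Logit(df["uHIVinc"], X).fit(disp=0)
ci = fit.conf_int()
print(pd.DataFrame({"OR": np.exp(fit.params),
                    "95%CI low": np.exp(ci[0]),
                    "95%CI high": np.exp(ci[1])}).round(2))
```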
Further qualitative assessments of gay-friendliness, accessibility and acceptability of available testing services were not conducted. HIV home tests and home collection tests were unavailable. A valid HIV test result was available for 4716 participants. The antibody test result was non-reactive for 4219 specimens, and reactive for 497 specimens (11.8%). Of the 4219 participants with non-reactive specimens, 4184 (99%) were classified as nHIV, and 35 were classified as indeterminate due to conflicting or missing self-reported data on HIV infection status. Of the participants with reactive specimens, 234 (47%) were classified as pHIV, 102 (20.5%) as uHIVinc, and 49 (9.9%) as uHIVunk based on self-reported infection status and testing history. Twelve participants from Sofia with undiagnosed HIV infection could not be classified into these two subgroups. The remaining 100 (20.1%) participants with reactive specimens had to be classified as indeterminate based on questionnaire data due to incomplete information on testing history and status knowledge (e.g. non-response to the question on previous HIV test and/or test result). A weak positive correlation between the percentage of participants tested for HIV in the recent 12 months by study site and the percentage of undiagnosed HIV in the study sites was observed (r = 0.275; see Table 1).
Undiagnosed HIV infections and associations with demographics and behaviours
The distribution of all and of undiagnosed infections by age group is shown in Fig. 1. The percentage of undiagnosed infections among all prevalent infections approaches 50% in age groups younger than 35 years and declines to less than 30% in older age groups. Table 1 shows the distribution of undiagnosed HIV infections by study site. The proportion of study participants with undiagnosed HIV infection ranged from 0.9% in Stockholm to 9.3% in Bucharest. The overall proportion of undiagnosed HIV infections among men without a recent test result was almost one third of the undiagnosed infections, ranging from 20.7% in Lisbon to 80% in Verona. The proportions of undiagnosed HIV among infrequent testers were consistently higher than 50% in the four cities (Bratislava, Bucharest, Vilnius and Verona) in which RDS was used for recruitment. Table 2 shows the reported last test dates among study participants who did not report having HIV or a last HIV test within the 12 months before they were recruited to the Sialon study. Table 3 shows results of the bivariate analysis of associations between potential explanatory variables and the outcomes: 1) undiagnosed HIV infection acquired within the previous 12 months (uHIVinc); 2) undiagnosed HIV infection of unknown onset (uHIVunk); 3) undiagnosed HIV infection irrespective of the date of the previous HIV test. Compared with HIV-uninfected survey participants, men assigned to the uHIVinc group were more likely to be 25–44 years of age (compared to the reference age group 18–24) and showed higher odds for the use of drugs during last anal sex; they were less likely to have disclosed their presumed negative HIV serostatus to their last anal sex partner(s), more likely to have been versatile during their last anal sex encounter, and more likely to have had more than 10 partners in the last 6 months with whom they had condomless anal intercourse.
Men assigned to the uHIVunk group were more likely to be older (age groups 35-44) than HIV-uninfected men who had not been tested for HIV in the last 12 months, to report any condomless anal intercourse in the last 6 months, and to have higher numbers of partners in the last 6 months with whom they had condomless anal sex, they were more likely to have been tested for and diagnosed with an STI in the last 12 months, and more likely to be an emigrant on home visit to his country of origin, but they were mostly inconspicuous in terms of substance use and most other potential explanatory variables. In multivariate analysis assignment to the uHIVinc group remained significantly associated with age 25-34, and versatility, lack of serostatus disclosure, and use of party and sexual performance enhancing drugs during the last anal sex event (see Table 4). The only factors remaining associated with uHIVunk in multivariate analysis were age 35-54, higher number of partners with whom condomless anal sex had been practiced in the last 6 months, and more frequent STI testing in the last 12 months. Education, migration status, outness, frequency of visiting gay sex venues in the last 6 months, partnership status, type of partner for the last anal intercourse, condom use during last anal intercourse, and sexual role during last anal intercourse were not significantly different between men with and without undiagnosed HIV infection. Discussion Approximately one third of the study participants who were living with HIV and for whom their HIV status knowledge could be assessed were unaware of being infected. This is much higher than proportions reported from some modelling studies or estimates reported to ECDC for Dublin Declaration monitoring [2,25,26]. This apparent contradiction is likely explained by an age related effect in our sample: as we can show in our analyses, the proportion of undiagnosed HIV is highly age-dependent. A large proportion of MSM living with HIV in the Western European countries, where the HIV epidemic amongst MSM started already in the 1980s, is already older than 40 years. These higher age groups are underrepresented among the visitors of gay venues that often cater to younger MSM clients. When the different age composition of the Sialon sample and the MSM population in modelling studies are considered, the results in terms of the proportions of undiagnosed infections are essentially comparable [own unpublished comparisons between modelling results of the German undiagnosed fraction and Sialon results for Hamburg]. Contrastingly, in Eastern European countries, where the HIV epidemic among MSM is more recent and the fraction of older infections in aging survivors is much smaller, the Sialon results are comparable with modelling studies [27]. Another aspect that needs to be considered when comparing Sialon II results with national modelling studies is that Sialon II was conducted in large cities while modelling studies include whole countries. Regardless, our findings underline that in many settings where MSM congregate and seek sexual partners, a considerable proportion of those who are living with HIV are unaware of their HIV status. Our analysis further shows that men with an undiagnosed HIV infection are a heterogeneous group of people. 
In our European multi-city sample, approximately two-thirds of those with undiagnosed HIV infection reported having received a negative HIV test result in the previous 12 months, indicating relatively recent acquisition of the infection and substantial incidence in this group. Moreover, this subgroup of men appears to test more frequently and to be aware of risks. (The three models estimate associations in three groups: uHIVinc — undiagnosed HIV in a group of men reporting a last negative HIV test result within the previous 12 months; uHIVunk — undiagnosed HIV in a group of men who never tested for HIV or whose last negative HIV test result is older than 12 months; uHIV — undiagnosed HIV in the combined group of men irrespective of the time of the last negative HIV test.) Taking this into account, the probability is high that many of them would have been tested again and diagnosed in the near future. It might also be that some of them tested in the HIV window period and received a false-negative test result. To improve early HIV diagnosis in this group, men with these characteristics presenting for HIV testing should be offered laboratory testing with 4th-generation HIV antigen/antibody tests to increase the probability of detecting recent infections. If sufficient resources are available, even targeted PCR testing could be considered if this subgroup can be identified among the clients of the testing facilities, e.g. based on a combined symptoms and behaviours score [28,29]. The men with undiagnosed infection following a negative test within the past 12 months had high odds of having used recreational drugs during their last anal sex encounter and high odds of not discussing their HIV status with the last anal sex partner(s) [30]. Because viral load and transmissibility of HIV are very high during the phase of acute HIV infection [31][32][33], many of their recent sexual partners may have been at high risk for acquiring HIV infection if they engaged in condomless anal intercourse relying on an assumed negative HIV status. In the literature, the associations between repeat testing and risk behaviours are complex. Receiving a negative result may trigger different reactions, from reassurance in safe practices to feeling lucky or invulnerable, or may reinforce risky behaviour that is associated with a subsequent higher frequency of unprotected sex [34]. These findings clearly point to the need to recommend more frequent testing in selected groups of MSM, especially those using recreational drugs. More importantly, these testers could be considered a primary target group for HIV pre-exposure prophylaxis (PrEP) to avoid HIV infection in the first place, as also suggested by other authors [35]. Approximately one third of the men with undiagnosed HIV in the Sialon II sample test infrequently for HIV, although they tend to have multiple condomless anal sex encounters. Higher proportions were observed particularly in the four RDS cities, which may suggest that more hidden subgroups within the MSM populations were reached (see also Limitations). From a public health perspective, this is an advantage of this sampling methodology compared to the TLS method, and probably to national HIV surveillance systems as well.
While the study was not designed to answer the research question of identifying characteristics and behaviours of undiagnosed HIV-infected participants, only the number of partners with whom condomless anal intercourse was practiced and more frequent STI testing were associated with the outcome variable (undiagnosed HIV infection) in this group. While age was significantly and independently associated with being undiagnosed in this group, more research will be necessary to characterize MSM living with undiagnosed HIV infection who do not test frequently for this infection in order to develop evidence-based interventions to increase test uptake. However, in the bivariate analysis we also found high odds for having been diagnosed with an STI during the last 12 months in this group. This strongly suggests that, contrary to guidelines and recommendations, HIV testing had not been offered or not been conducted in the context of these STI diagnoses. We are unable to determine whether this missed opportunity for an earlier diagnosis of HIV is related to a lack of discussion and disclosure of sexual orientation with the STI test and treatment provider or to a lack of compliance with testing guidelines by the STI treatment providers. Partnership status and type of partner for last anal intercourse were not significantly associated with undiagnosed HIV, suggesting that condomless sex within steady partnerships may not always be as safe as people tend to assume, particularly if HIV status has not been checked mutually and/or if condomless anal sex is practiced concurrently with non-steady partners.
Limitations
For correct interpretation of our findings it must be considered that we report on associations with undiagnosed HIV infections in a very specific group. Factors associated with undiagnosed HIV may partly differ from factors associated with transmission risk, because a part of those who acquire HIV will be diagnosed and detected early. For MSM who infrequently test for HIV it may be difficult to detect behavioural correlates of their infection risk because we asked for behaviours in the previous 6 months; the moment when these men acquired HIV may have been longer ago, and their behaviour may have changed. MSM who have never been tested for HIV may be underrepresented in our sample. Never-tested MSM are often less integrated into gay communities and rarely visit gay venues; this explains why they would have a lower chance of being recruited in our study, at least in the cities where a TLS approach was adopted to enrol study participants [36]. This means that our uHIVunk group may mainly represent pattern 2 testing (triggered by health concerns not immediately related to acute HIV disease or transmission risk awareness) and less pattern 3 testing. A further limitation is that HIV status knowledge was based on self-reports and some participants may have felt uncomfortable reporting their HIV status in the questionnaire. Underreporting of a positive HIV status would have weakened any association we found between being undiagnosed and other factors.
Conclusions
Our study findings reinforce the recommendations for healthcare provider-initiated HIV testing when certain indicator diseases such as STIs are diagnosed. The findings may also inform community-based low-threshold HIV testing strategies such as home-collection sampling and test promotion campaigns to reduce the proportion of the hidden HIV epidemic. Such strategies should include certain elements of information (e.g.
on the sensitivity of different tests during acute HIV infection), focus on interpersonal skills and community norms (e.g. communication with sexual partners about serostatus) and highlight additional risks associated with recreational drug use, while recognising the diversity of MSM with undiagnosed HIV across Europe. In addition, novel strategies such as home-testing should be discussed in the light of safeguarding linkage to care [37]. Since data were collected in different European cities, the findings allow for a high degree of tailoring local prevention campaigns, i.e. developing targeted HIV and STI testing campaigns considering the local contexts in both community-based HIV testing and counselling and advice offered at such HIV testing sites [38,39]. More importantly, tailored strategies based on the established HIV testing patterns should be embedded within an overall combined prevention approach [40], which should include the addition of PrEP to the available effective prevention tools [40][41][42] for instance for those MSM reporting condomless anal sex with multiple partners in the last 6 months.
2018-08-13T01:24:53.050Z
2018-08-06T00:00:00.000
{ "year": 2018, "sha1": "cbd3c9881bd1270764d4f3b6cf5c57cb2c425150", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-018-3249-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cbd3c9881bd1270764d4f3b6cf5c57cb2c425150", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245769795
pes2o/s2orc
v3-fos-license
A Kuramoto Network in a Single Nonlinear Microelectromechanical Device
This work presents a frequency multiplexed 3-limit cycles network in a multimode microelectromechanical nonlinear resonator. The network is composed of libration limit cycles and behaves in an analogous manner to a phase oscillator network. The libration limit cycles, being of low frequency, interact through the stress tuning of the resonator, and result in an all-to-all coupling that can be described by a Kuramoto model. Beyond the typically present cubic nonlinearity the modes in question do not require any special frequency ratios. Thus an interconnect-free Kuramoto network is established within a single physical device without the need for electrical or optical coupling mechanisms between the individual elements.
Introduction.-Oscillator networks, where a number of oscillators are coupled in such a way as to observe the emergence of a network-wide dynamics, are a topic of intense ongoing scientific and technological investigation [1][2][3]. Amongst the many possible constructions of networks, variants of the Kuramoto network [4][5][6][7], which is characterized by a sinusoidal coupling ansatz, exhibit a wide range of phenomena such as synchronization [7], phase transitions [8], chimeras [9], and spiral waves [10]. The interest in Kuramoto networks extends from basic research to engineering applications that include neuromorphic [11,12] and reservoir computation [13]. In this work we construct a frequency multiplexed limit cycles network, where the different modes of the same structure form the oscillators, and the required coupling is generated via the nonlinear mode-coupling present in micro- and nano-electromechanical systems (M/NEMS). In nonlinear mode-coupling, which essentially is a non-resonant four-wave mixing, one mode experiences a frequency shift due to the added average strain that is generated by the amplitude of vibration of another mode [24][25][26]. This average strain can be thought of as a (quasi-)dc channel that transmits information about the amplitude of the different modes. Therefore, by relying on mode-coupling it is possible to reduce the essential elements of a network to the creation of the limit cycles only, and allow the mode-coupling to generate the necessary connections in a single physical device. However, a straightforward implementation of a network in a multi-mode device with self-sustained oscillating modes is not possible without resorting to resonant four-wave mixing, i.e., internal resonance [27][28][29].
Internal resonance requires special frequency ratios that are difficult to scale to a large number of modes, and at the same time could result in strong coupling between the oscillating elements [30][31][32], which is undesirable for an oscillator network. Leveraging the non-resonant nonlinear mode-coupling as a foundation for connecting frequency multiplexed limit cycles in a network imposes two conditions. First, the mode-coupling transmits information about the amplitude and not the phase; thus a scheme is required to establish an amplitude-phase coupling so as to enable the structural mode-coupling to transmit the phase information between the different modes. Second, the information needs to be transmitted in a dc or quasi-dc fashion, since the non-resonant mode-coupling is a time-average effect. Here, by quasi-dc we mean no more than a perturbation-order frequency component, i.e. ω_quasi-dc ∼ O(ε). Both of these requirements are fulfilled by using libration limit cycles [33]. Self-sustained librations (also called libration limit cycles or librators) are limit cycles taking place in the rotating frame of a harmonically driven resonator. More intuitively, it is possible to think of a traditional limit cycle phase oscillator as a circular trajectory in the laboratory-frame phase space. In a librator, by contrast, the limit cycle is created around a harmonic drive with a force (F_d) and a frequency (ω_d), such that the rotating-frame phase space exhibits a limit cycle. This implies that these libration limit cycles, also approximated by a circular trajectory, are centered around the driven response of the system to F_d, and not around the origin of the rotating frame. The off-center position of the libration limit cycle couples the phase and the amplitude, while the nature of the libration limit cycle is such that its frequency is equally a perturbation-order frequency component, i.e., ω_L ∼ O(ε) [33]. Thus librators fulfill the two criteria for coupling limit cycles through nonlinear mode-coupling. Therefore, whereas traditional networks involve a number of coupled nearly identical oscillators, this work studies networks that are formed by limit cycles that are nearly identical when considered in the rotating frame of their respective harmonic drives, and that are coupled via the nonlinear mode-coupling. The dynamics of the librator are not equivalent to those of a forced limit cycle oscillator, as the creation of the librator requires making the forced response itself unstable. Constructing a librator requires a loop that feeds back the rotating-frame components (i.e., the quadratures), as shown schematically in Fig. 1(a). It is possible to think of a librator as a self-sustained amplitude modulation (AM), where the carrier frequency is set by the driving term (F_d, ω_d) and the modulating frequency is the self-sustained libration frequency (ω_L), where ω_L depends on the linear and nonlinear parameters of the system. The ability to synchronize librators to an externally injected tone has yet to be explored, let alone the ability to mutually synchronize coupled librators and establish a network. This is not necessarily as straightforward as the synchronization of oscillators [34][35][36][37][38], since the librators are centered around varying frequencies (the modal frequencies) with no special frequency ratios, and the synchronization is to take place via the quasi-dc strain tuning around a quasi-dc frequency ω_quasi-dc, as shown schematically in Figs. 1(b), (c).
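The librator-as-AM picture lends itself to a simple numerical illustration: a circular rotating-frame orbit centered off-origin at A_0 appears in the laboratory frame as a carrier at the drive frequency accompanied by a libration sideband. The sketch below is a minimal demonstration under assumed values; the 321 kHz carrier echoes the first-mode frequency quoted later, while the 2 kHz libration frequency and the amplitudes are arbitrary.

```python
import numpy as np

f_d, f_L = 321e3, 2e3                 # carrier (drive) and libration freqs
A0, B = 1.0, 0.3                      # off-center point and orbit radius
fs, T = 40e6, 5e-3                    # sampling rate and duration
t = np.arange(0, T, 1 / fs)

A = A0 + B * np.exp(2j * np.pi * f_L * t)        # rotating-frame orbit
x = np.real(A * np.exp(2j * np.pi * f_d * t))    # lab-frame displacement

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
top = np.sort(freqs[np.argsort(spec)[-2:]])
# A single (upper) sideband appears because the circular orbit is one
# rotating phasor: carrier at f_d, sideband at f_d + f_L.
print(top)
```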
Therefore, we explore the ability to synchronize librators, first to an injected tone (ω_sync), then to each other, before proceeding to construct a librator network.
Model.-To account for the synchronizing influence, we modify the governing equation of the librator dynamics [33] to take on the following non-dimensional form:

ẍ_i + γ_i ẋ_i + β_i x_i² ẋ_i + [1 + ε_sync cos(ω_sync t)] x_i + α_i x_i³ = F_di cos(ω_di t) + f_i(t),    (1)

where the subscript i identifies the mode number. x_i is the modal displacement, and γ_i, β_i, α_i are respectively the modal linear damping, nonlinear damping, and Duffing nonlinearity of the i-th mode (note that the equation is normalized so that the natural frequency ω_0i = 1). ε_sync and ω_sync denote the magnitude and frequency of the quasi-dc frequency shift due to an externally applied synchronizing tone (ω_sync ≪ 1). F_di and ω_di are the amplitude and frequency of the i-th mode driving force, and f_i(t) is the modal feedback term that generates the libration limit cycle. We also introduce a detuning parameter δ_i such that ω_di = ω_0i × (1 + δ_i). Note that all the above parameters are considered to be perturbation-order terms, i.e. γ_i, β_i, α_i, ε_sync, ω_sync, F_di, f_i(t), and δ_i ∼ O(ε). Eq. (1) is treated using the rotating frame approximation (RFA), whereby the displacement is expressed as x_i = (A_i e^{iω_di t} + A_i* e^{−iω_di t})/2, where A_i is the rotating-frame complex amplitude of the i-th mode. This complex amplitude is in turn decomposed into a static component and a dynamic (libration) component, denoted by A_0i and A_Li respectively, where the static component A_0i is the response of a standard Duffing resonator to a driving force (F_di, ω_di), and A_Li is the libration limit cycle of the system in the rotating frame. A_Li is considered to be centered around the static response A_0i. The application of the RFA leads to a dynamical equation that reads

Ȧ_Li = (iδ_Li − γ_Li) A_Li + C_Li,    (2)

where δ_Li and γ_Li are respectively the effective detuning and effective linear damping of the i-th mode libration motion A_Li. C_Li is a complex constant that depends on the modal parameters, the driving force, and the feedback loop [33,39]. Note that since only the A_Li ≪ A_0i regime is being considered for the model in this work, we dropped all nonlinear terms in A_Li from Eq. (2); these include the terms that stabilize the magnitude of the limit cycle but play almost no role in setting its frequency (see supplementary materials for the derivation). To explore the possibility of synchronization it is first necessary to presume a periodic oscillation for the libration motion; thus A_Li is supposed to take on the following form: A_Li = B_0i + (B_i e^{iω_Li t} + B_i* e^{−iω_Li t})/2, where B_i is the periodic component of the libration motion, and B_0i is a non-zero static component that results from the non-symmetric shape of the libration orbit, with B_0i = U_0i e^{iφ_0i} and B_i = U_i e^{iφ_i}. For convenience, we also express the steady-state driven response as A_0i = R_0i e^{iθ_0i}. In essence, the introduction of B_i and B_0i amounts to a second rotating frame approximation, or a second-order perturbation analysis, meaning the time scales associated with these dynamics are ∼ O(ε⁻²). By inserting the expansion of A_Li into Eq.
(2), developing, and collecting the relevant terms (see supplementary materials for details), it is possible to obtain the following Adler-like [5,34,35] phase-locking equation:

Φ̇_Li = Ω_i − P_i sin(Φ_Li),    (3)

where Ω_i is an effective detuning parameter between the i-th mode libration frequency and the synchronizing influence [39], P_i is the synchronization forcing, and Φ_Li represents an effective phase difference between the libration limit cycle and the synchronizing influence. For synchronization to take place requires that Φ̇_Li = 0, while the terms Ω_i, P_i, and Φ_Li need to be derived from the governing equations. These terms differ depending on whether synchronization is a result of an external forcing or of mutual synchronization between interacting libration limit cycles.
Injection locking.-First, we consider the synchronizing effect of an applied external force, where two cases will be treated: ω_sync ≈ ω_Li and ω_sync ≈ 2ω_Li. By retracing the steps for each one of these cases we derive expressions for the forcing parameter in Eq. (3), which for the former case takes the form P_i(ω_sync ≈ ω_Li) = (ε_sync/4) × (U_0i/U_i), and for the latter case is P_i(ω_sync ≈ 2ω_Li) = ε_sync/4. The locking range is calculated by setting Φ̇_Li = 0 in Eq. (3), which gives a simple ±ε_sync = 4Ω_i relation for the case of frequency locking with ω_sync ≈ 2ω_Li. The case of ω_sync ≈ ω_Li is difficult to calculate analytically, as the locking range depends on the asymmetry of the orbit U_0i, which can only be determined by numerically integrating the governing ODE. However, it is possible to set an upper bound of U_0i = U_i. Thus, ±ε_sync = 4Ω_i can be considered to be a bound on both synchronization scenarios. We experimentally investigate the synchronization dynamics of librator limit cycles using a piezoelectrically actuated GaAs MEMS clamped-clamped beam device that is 100 µm in length, 20 µm wide, and 600 nm in thickness; see [40] for more information on device fabrication. The device is placed in a vacuum chamber with a pressure of ∼ 1 mPa [39], excited electrically, and its vibrations are measured optically using a laser Doppler vibrometer (LDV). A dc voltage component (∼ −1 V) is constantly applied, which ensures that the actuation remains linear by avoiding the Schottky behavior of the metal-semiconductor junction [32,40]. In addition, the applied dc component results in a constant strain in the structure that shifts the resonance frequency; this electrically controllable frequency tuning plays the crucial role of generating ε_sync in Eq. (1) by modulating the dc voltage component. The device in question possesses two electrodes on each end of the structure length (see representation in Fig. 1(a)); by equally actuating both electrodes (not shown in the schematic), only odd modes are efficiently excited [29]. Three odd modes are accessible: the first, third, and fifth out-of-plane flexural modes, respectively, with the following modal frequencies: ω_01 = 2π × 321 kHz, ω_03 = 2π × 954 kHz, ω_05 = 2π × 2.325 MHz. These modes likewise exhibit a Duffing-type nonlinearity, Fig. 2(a), and nonlinear damping. We generate libration limit cycles using feedback loops that are functionally equivalent to the ones shown in Fig. 1(a) (see supplementary materials for details on the experimental setups). Once the limit cycles are established, we sweep the drive frequencies ω_di (i.e., δ_di) and quantify the libration frequencies ω_Li, which are then plotted in Fig. 2(b).
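Equation (3) can be checked numerically. The sketch below integrates the Adler form dΦ/dt = Ω − P sin Φ and classifies locked versus running phases; with P = ε_sync/4 it reproduces the ε_sync = 4Ω_i tongue boundary discussed above. The parameter values are arbitrary choices, not device parameters.

```python
import numpy as np

def locked(Omega, P, dt=1e-2, steps=500_000):
    """Integrate dPhi/dt = Omega - P*sin(Phi); True if the phase locks."""
    Phi = 0.0
    for _ in range(steps):
        Phi += (Omega - P * np.sin(Phi)) * dt
    # Inside the locking range the residual phase velocity decays to zero;
    # outside it stays bounded away from zero (the phase keeps running).
    return abs(Omega - P * np.sin(Phi)) < 1e-4

eps_sync = 0.04
P = eps_sync / 4                       # forcing, per Eq. (3)
for Omega in (0.5 * P, 0.9 * P, 1.1 * P):
    print(f"Omega = {Omega/P:.1f} P -> locked: {locked(Omega, P)}")
```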
To study synchronization due to an external forcing, we choose the case of zero detuning, i.e., δ_di = 0, and apply a weak tone on top of the dc bias. Naturally, when investigating external synchronization only one librator feedback loop is active at a time, so as not to have mutually interacting librators. Since the injected signal frequency is on the order of the librator frequency and hence much smaller than the modal frequencies, i.e. ω_sync ∼ ω_Li ≪ ω_di, its effect is to modulate the resonance frequencies of the structure, by modulating the dc bias, rather than to directly force the resonant modes, as depicted schematically in Fig. 1(b). Experimentally, the effect of the injected signal on the librator limit cycles is clearly visible in Fig. 2(c), where the phase-space plot shows the locking of the trajectories to the phase of the outside signals. Thereafter, the frequency of the injected signal is swept, producing a noticeable locking interval which is typically seen and expected from phase-locked systems [5,35,37]; see Fig. 2(d). Subsequently, a 2-dimensional parameter sweep is undertaken, where the force (ε_sync) and frequency (ω_sync) of the locking signal are swept; these 2D sweeps are shown in Fig. 2(e), for both the ω_sync ≈ ω_Li case and for the ω_sync ≈ 2ω_Li case. These sweeps demonstrate locking regions similar to Arnold tongues. These plots provide reassuring evidence of the validity of the perturbation analysis and the resulting Eq. (3), since by having plotted the detuning and forcing in normalized terms, we find that the boundary of the synchronization intervals is reasonably well delineated by the linear relation ε_sync = ±4Ω_i, as predicted by the model. On a side note, it is interesting to remark that in absolute terms the frequency locking ratio, defined as ω_di/ω_sync, is on the order of 1000.
Mutual synchronization.-Having established the potential of librator limit cycles to phase-lock to an external source, we now investigate the ability of multiple librator limit cycles, each centered around a different mode, to interact and synchronize. In this case, the structural mode-coupling that is present in nonlinear M/NEMS devices [24][25][26] acts as a stress-tuning mechanism that provides a low-frequency coupling channel between the librator limit cycles; see Fig. 1(c). Only quasi-dc mode-coupling is being considered, which implies that no resonant four-wave mixing should exist between the modes, i.e., no frequency combinations satisfying 2ω_i − ω_j − ω_k = 0 [41,42]. For mutual librator synchronization the parametric frequency tuning term in Eq. (1) is replaced by the standard mode-coupling terms [24][25][26], i.e. (ε_ij x_j² + ε_ik x_k² + · · ·) x_i, where ε_ij, ε_ik, · · · are the mode-coupling constants between the i-th mode and modes j, k, · · ·. In the rotating frame of the i-th mode, the mode-coupling terms reduce to a quasi-static frequency shift proportional to ε_ij |A_j|² + ε_ik |A_k|² + · · ·. However, the modal amplitudes are no longer constant, as they are slowly modulated by the libration terms. Therefore, they can be written as |A_j| = |A_0j + A_Lj|, |A_k| = |A_0k + A_Lk|, · · ·. By placing the modulated amplitudes in the mode-coupling terms and developing, we obtain the following phase relationship (see supplementary materials for derivation [39]):

φ̇_i = Ω_i + Σ_j k_ij sin(∆φ_ij + ψ_0j),    (4)

where j denotes all the modes that couple to the i-th mode, Ω_i is an effective detuning parameter, and ∆φ_ij represents the phase difference between the two limit cycles, i.e. ∆φ_ij = φ_j − φ_i.
The constants k_ij and ψ_0j are rather involved amplitude and phase parameters (see supplementary material [39]) that depend on the j-th modal amplitude, mode-coupling, and other parameters. Note that |k_ij| > 0, since mode-coupling cannot be turned off. It is significant that Eq. (4) corresponds to a variant of the well-known Kuramoto model [4,6–8,43]. Thus, the mode-coupling mechanics naturally give rise to a Kuramoto-type network in a multimode M/NEMS device when librator limit cycles are excited around these modes. In order to investigate the behaviour of multimode librator networks, we first start with the simplified case of only two coupled librators i and j. Thus, only two phase equations are necessary, those of φ̇_i and φ̇_j. By considering only the difference between the two equations, i.e., ∆φ̇_ij = φ̇_j − φ̇_i, Eq. (4) can be rewritten as

∆φ̇_ij = ∆Ω_ij − µ sin(∆φ_ij + ψ_0j) − ν sin(∆φ_ij − ψ_0i),    (5)

where ∆Ω_ij is the frequency difference between the two free librators, and µ and ν are rearranged constants (see supplementary materials [39]). Equation (5) is superficially different from the classical equation for two coupled phase oscillators [5,37,44–46] (with constant amplitude). The difference is due to the asymmetry in the coupling and the presence of the phase components ψ_0j, ψ_0i. Nevertheless, the fact remains that Eq. (5) has only two possible outcomes: either the librators synchronize or they do not. Experimentally, we investigate this regime by generating two libration limit cycles around the modes of interest. Since the libration limit cycle frequencies depend, amongst other things, on the force detuning term δ_i, then by sweeping the latter the libration frequencies are adjusted until nearing a 1:1 ratio, upon which they should lock. An experimental example of synchronization between the mode 3 librator and the mode 5 librator is shown in Fig. 3(a). In the figure, the drive frequency of mode 5 (and hence the libration frequency ω_L5) is left unchanged, while the detuning term of mode 3 is swept. The extracted frequency difference and frequency ratio are plotted, where it is easy to identify the 1:1 locking range between the two librators. Interestingly, a miniature plateau corresponding to a 2:1 locking ratio is equally observed. The effect of frequency locking between the librators is also shown in the insets of Fig. 3(a), by tracing the in-phase components versus each other, i.e. X_3 vs. X_5, where for a 1:1 locking ratio a simple circle is formed, for a 2:1 locking ratio a figure 8 is formed, and for unlocked librators the trace would simply fill a rectangle (not shown). By expanding these mutual synchronization measurements to 2-dimensional sweeps, where the drive frequencies of both modes are swept, and then plotting the resulting phase difference ∆φ_ij, new features become apparent, as shown in Fig. 3(b). For one, the 1:1 synchronization region is quite visible for all pairs of librators. Furthermore, higher-order locking regions, e.g. 1:2, 2:1, 3:1, and 4:1, can be identified. These higher-order locking regions are not directly predicted by Eq. (5), since the inclusion of higher-order terms would be necessary to account for them. The results shown in Fig. 3 represent an effective confirmation of Eq. (5), through the demonstration of pairwise synchronization. Yet this confirmation remains qualitative, since there are simply too many free parameters hidden in the terms Ω_i, ψ_0j and k_ij in Eq. (4), which prevents the possibility of having an approximate quantitative bound as was done for Eq. (3).
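A minimal numerical sketch of two mutually coupled phase equations of the Eq. (4) type is given below. The coupling constants and frustration phases are illustrative choices, not values fitted to the device, and the run shows the two possible outcomes noted above: a locked state (vanishing residual beat) at small detuning and a running beat at large detuning.

```python
import numpy as np

def beat(dOmega, k12=0.05, k21=0.08, psi1=0.3, psi2=-0.2,
         dt=0.01, steps=200_000):
    """Residual beat frequency of two phases coupled as in Eq. (4)."""
    phi1, phi2 = 0.0, 0.5
    for _ in range(steps):
        d = phi2 - phi1                                # Delta phi_12
        phi1 += (1.00 + k12 * np.sin(d + psi2)) * dt
        phi2 += (1.00 + dOmega + k21 * np.sin(-d + psi1)) * dt
    # Subtract the initial phase offset to report the mean beat frequency.
    return (phi2 - phi1 - 0.5) / (steps * dt)

for dOmega in (0.0, 0.05, 0.3):
    print(f"detuning {dOmega:.2f} -> residual beat {beat(dOmega):+.4f}")
```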
An undesirable effect is equally visible in Fig. 3(b), which is due to the fact that the resonance frequency ratio of modes 1 and 3 is ω_3/ω_1 ≈ 3; thus a region of resonant energy transfer, or internal resonance, can be accessed [29,32]. In this work such an effect is undesirable, as it drastically changes the nature of the coupling, and the region where this resonant energy transfer takes place is avoided. We proceed to study the 3-node network dynamics by activating all three feedback loops simultaneously, and changing the dc bias from −1 V to −1.5 V while accounting for the slight shift in resonance frequencies. Given the substantial size of the parameter space, an exhaustive, or even systematic, sweep is impractical. Instead, a linear search procedure is used to adjust the values of the detunings (i.e., δ_1, δ_3, δ_5), whereas all other experimental parameters are kept constant. The linear search has as its objective the minimization of the libration frequency spread. As the libration limit cycle frequencies are brought closer together, they transition from an unsynchronized state, to a partially synchronized state (2 modes synchronized), to a fully synchronized state, depending on their respective detunings. These states can be seen by plotting the modes' X-quadratures against each other in a 3D plot, as shown in Fig. 4(b). The effect of partial and full synchronization on the phase-space trajectory is clearly observed, where the trajectory moves from filling a volume (unsynchronized), to the surface of a cylinder (partial synchronization), to a simple ring (full synchronization). Once the synchronization parameters are established, we explore a small volume in the δ_1, δ_3, δ_5 space around those parameters. To quantify the degree of synchronization we use the time average of the Kuramoto order parameter r̄, where r(t) e^{iΨ(t)} = (1/N) Σ_i e^{iφ_i(t)} [4,6,62]. The results are shown in the 2D plot in Fig. 4(c). The 2D sweep shows regions of synchronization, identified by the bright color area. Surprisingly, the value of r̄ varies roughly between 0.4 for the unsynchronized case and 0.7 for the synchronized case, whereas it should vary between 0 and 1 for those two cases, respectively. This is likely due to the presence of higher-order libration terms, as implied by the plots of the mean field (r(t), Ψ(t)) in Fig. 4(d). The mean field traces an elliptical trajectory, unlike the traditional mean-field representation on a circle. For comparison, the mean fields of the unsynchronized cases are also shown in Fig. 4(d); unsurprisingly, they show no pattern. The parameters Ω_i, k_ij, and ψ_0j can all be manipulated experimentally (to some extent) by changing the drive forces' detuning and magnitude (δ_i, F_di) as well as the phase of the feedback loops (Θ_i). The presence of the phase term ψ_0j in Eq. (4) could lead to frustration in the system [8,43,63]. This was tentatively observed in this work, where a π/2 phase shift on the lock-in amplifier of the fifth mode resulted in the librators being unable to synchronize, even after the linear search algorithm brought their frequencies to be practically overlapping. These controllable parameters therefore provide a valuable means to experimentally tailor the properties of the network to be studied.
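For reference, the time-averaged order parameter r̄ used above can be computed directly from phase time series. The sketch below assumes the usual 1/N normalization (N = 3 here) and uses synthetic phases that mimic a partially synchronized state; it is not derived from the measured data.

```python
import numpy as np

t = np.linspace(0.0, 100.0, 10_000)
# Synthetic phases: librators 1 and 3 locked, librator 5 drifting.
phases = np.stack([2.0 * t, 2.0 * t + 0.4, 2.3 * t + 1.0])

z = np.exp(1j * phases).mean(axis=0)   # r(t) * exp(i Psi(t)), N = 3
r_bar = np.abs(z).mean()               # time-averaged order parameter
print(f"r_bar = {r_bar:.2f}")          # 1.0 only for full synchronization
```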
Conclusions.-To summarize, this work builds on the recently introduced MEMS librator to demonstrate the potential of librator limit cycles to be synchronized to an outside force as well as to each other. This latter effect is mediated through the structural mode-coupling, where the libration motion, being of low frequency, couples through the stress tuning of the structure. The emergent network thus formed is best described by a Kuramoto model, despite the various limit cycles and their collective mean field taking place at largely different frequencies. This work therefore dispenses with the need for electrical or optical coupling mechanisms, as well as provides experimental means to control the network properties. With a more streamlined experimental setup, e.g., replacing the lock-ins with RF power detectors, it would be possible to scale the number of nodes using simple off-the-shelf MEMS devices and control electronics. Furthermore, the principles described in this work are equally applicable to other types of systems with Kerr-type nonlinearity, like optical resonators and microwave cavities.
FIG. 1. (a) Schematic representation of the experimental setup. The output from a LDV is passed through 3 lock-in amplifiers, each set to one of the drive tones (details shown only for one loop). The outputs from the lock-ins, each representing an individual mode, are high-pass filtered and passed through gain circuits (g_1,2,3(t)), then added to the drive forces (F_d1,d2,d3), up-converted to their respective drive frequencies, combined, and used to excite the MEMS device. A signal trace of mode 1 is shown for clarification. The lock-in outputs are sampled by a digital oscilloscope (not shown). (b) Schematic representation of the stress-tuning-mediated forced synchronization due to an injected tone (ε_sync) around ω_sync ∼ ω_Li. (c) Schematic representation of mode-coupling-mediated interactions. The slow amplitude modulation resulting from the libration motion at a frequency ω_L1,L2,L3 causes a quasi-dc stress tuning (mean field) that couples the respective modes.
FIG. 2 (partial caption). (c) Experimental data showing the difference between free-running (colored) and locked (black) libration limit cycles. The dotted lines indicate the location of the axes of the phase-space plane. (d) Locking range obtained for F_sync = 5.8 × 10⁻⁶ (the vertical scale is multiplied by 10 for the 3rd and the 5th modes). (e) The measured synchronization tongues for ω_sync ≈ 2ω_Li (left-side panels) and for ω_sync ≈ ω_Li (right-side panels) for the first, third, and fifth modes, respectively. The dashed white lines delineate the maximum locking-range area, i.e. ε_sync = 4Ω_i, as obtained from Eq. (3). Blue and green indicate a locked and a running phase, respectively. The red lines indicate the data traces in (d).
FIG. 4 (partial caption). The mean-field trajectories for the unsynchronized, partially synchronized, and fully synchronized cases shown in (b): blue, black, and red traces, respectively. The fully synchronized case shows a closed trajectory that is however not circular, whereas the unsynchronized and partially synchronized cases do not show any particular trajectory.
2022-01-07T02:15:36.474Z
2022-01-06T00:00:00.000
{ "year": 2022, "sha1": "90927073a95f6e22c3325aa77fedd1bdf6021469", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "90927073a95f6e22c3325aa77fedd1bdf6021469", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
196954725
pes2o/s2orc
v3-fos-license
REACTION OF [1,4]BENZODIOXINOPYRIDAZINES WITH SODIUM METHOXIDE AND AMINES
Condensation of 1 with 2 3 in the presence of NaH afforded a mixture of the isomeric benzodioxinopyridazines (3) and (4). The isolated 3 was treated with thiourea to give the thiol (5), which was methylated to give 6. Then 6 was oxidized with KMnO4 to give 7 in an overall yield of 66%. (Scheme
The Reaction of 7 with Sodium Methoxide
The reaction of 7 (1 equivalent) with sodium methoxide (1 equivalent) in MeOH at room temperature for
Cyclization of 16, in the presence of sodium methoxide (1 equivalent) in refluxing MeOH for 30 min, afforded 9 in 77.4% yield. In the reaction of 3 and 4 with sodium methoxide, dioxin ring-opened pyridazines were not obtained under the reaction conditions employed by Ames et al. The nucleophilic displacement reactions of 3, 7 and 4 with sodium methoxide first occurred on the C-4a ring carbon in 3 and 7, and on the C-10a ring carbon in 4, to afford the dioxin ring-opened pyridazines (14, 16 and 15), respectively; the cyclizations of 14, 16 and 15 then proceeded to afford 9 and 10, respectively. The same reactions of 9 and 10 with methoxide ion proceeded on the C-10a ring carbon in 9 and on the C-4a ring carbon in 10 to afford the same product (17). However, direct replacement of the chloro or methylsulfonyl substituent by methoxide ion did not take place. (Scheme 4)
The second one is the ring fission of [1,4]
The third one is the cyclization of 2-hydroxyphenoxypyridazines (20, 22 and 21) to produce 19 and 18, respectively. The reactivity of amines (8) towards 3 and 4 may be presumed to depend both on the electron density of the ring carbon bound to the oxygen or chlorine atom and on the bulk of the amines. The secondary amines (8c-e) reacted with 3 and 4 to give directly chloro-substituted products (18 and 19) in 55-84% yields, but dioxin ring-opened products (20 and 21) and cyclization products (19 and 18) in poor (trace-14%) yields. However, the methylsulfonyl group in 7 was not directly replaced by amines, and the addition of amines to the 3,4-double bond in 7 did not occur. This reactivity of 7 towards amines (8) may be accounted for either by the presence of the bulky methylsulfonyl group (steric effect) or by the relatively high electron density of the C-4 ring carbon in 7 (electron-donating oxygen effect in the catechol ring). Details of the reactivity of 7 are not yet clear. (Scheme 7)
The structures of the products (18, 19, 20-22 and 23) were confirmed by their elemental analyses, MS, IR, and 1H- and 13C-NMR spectral data, as shown in Tables I-V. In their reaction with amines (8), 3 and 4 afforded the dioxin ring-opened chloropyridazines (20 and 21), which were purified by recrystallization from an appropriate solvent. The yield, elemental analysis, IR, MS, 1H-NMR and 13C-NMR spectral data for 18-22 are summarized in Tables I-V.
The separation of 18f and 19f from the reaction mixture was achieved as follows. After the removal of amines under reduced pressure, the residue was extracted with CHCl3. The extract was washed with H2O, dried over Na2SO4, and concentrated to a volume of about 2-5 mL. The concentrated CHCl3 solution was purified by column chromatography on SiO2 with CHCl3, and then with CHCl3-MeOH (20:1). The first and second fractions gave 18 and 19, respectively, and the third fraction gave 20-22,
TABLE I. Melting Points, Elemental Analyses and MS Data for
2019-04-06T13:04:28.158Z
2004-03-01T00:00:00.000
{ "year": 2004, "sha1": "ee1cf0759d6877b4173040bf92ca06dfd35f943c", "oa_license": null, "oa_url": "https://doi.org/10.3987/com-03-9961", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "c70d504f685d5589c507022ad5c323559b64ce6e", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
5454993
pes2o/s2orc
v3-fos-license
Disease Mapping and Regression with Count Data in the Presence of Overdispersion and Spatial Autocorrelation: A Bayesian Model Averaging Approach

This paper applies the generalised linear model for modelling geographical variation to esophageal cancer incidence data in the Caspian region of Iran. The data have a complex and hierarchical structure that makes them suitable for hierarchical analysis using Bayesian techniques, but with care required to deal with problems arising from counts of events observed in small geographical areas when overdispersion and residual spatial autocorrelation are present. These considerations lead to nine regression models derived from using three probability distributions for count data: Poisson, generalised Poisson and negative binomial, and three different autocorrelation structures. We employ the framework of Bayesian variable selection and a Gibbs sampling based technique to identify significant cancer risk factors. The framework deals with situations where the number of possible models based on different combinations of candidate explanatory variables is so large that calculation of posterior probabilities for all models is difficult or infeasible. The evidence from applying the modelling methodology suggests that modelling strategies based on the use of the generalised Poisson and negative binomial with spatial autocorrelation work well and provide a robust basis for inference.

Introduction

For count data, the mean and variance are often related and can be estimated using a single parameter, as in the Poisson model, which is the most frequently used model for analysing disease mapping data. Under this model, the mean and variance of the dependent variable are assumed to be equal, conditional on any variables used to explain differences in the mean across primary sampling units (PSU). In practice, however, this assumption is often false, since the variance can be either larger or smaller than the mean, i.e., both overdispersion and underdispersion can exist in count data. Statistical methods for analysing spatial patterns of disease incidence or mortality have matured over the past decade or so [1-5]. Selection of the appropriate statistical approach for the analysis of correlated count data is important not only for variance estimation, but also for estimation of the mean [6]. The negative binomial and generalized Poisson (G-Poisson) distributions are frequently used to model count data with overdispersion through the inclusion of a second parameter governing the variance specification. These distributions are of interest for modelling count data because they include the Poisson distribution as a special case, and over the range where the second parameter is positive, they are over-dispersed relative to Poisson, with a variance-to-mean ratio exceeding 1. Relationships among these distributions are well known [7,8]. When a count dependent variable's assumed variance is a function of its mean, one source of overdispersion is an inappropriate probability model, for example selecting the Poisson model when the generalised Poisson or negative binomial distribution would better capture the variation [9].
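To make this source of overdispersion concrete, the following Python sketch (purely illustrative and not part of the original analysis; the area count, mean and dispersion values are invented) simulates counts from a Poisson model and from a Poisson-gamma mixture with the same mean, and compares their variance-to-mean ratios; a ratio well above 1 signals extra-Poisson variation.

import numpy as np

rng = np.random.default_rng(0)

# Counts for 152 hypothetical areas under two data-generating processes.
lam = 10.0
poisson_counts = rng.poisson(lam, size=152)

# Poisson-gamma mixture: Y | e ~ Poisson(lam * e), e ~ Gamma(theta, 1/theta),
# which is negative binomial marginally, with Var(Y) = lam + lam**2 / theta.
theta = 2.0
e = rng.gamma(shape=theta, scale=1.0 / theta, size=152)
nb_counts = rng.poisson(lam * e)

for name, y in [("Poisson", poisson_counts), ("NB mixture", nb_counts)]:
    ratio = y.var(ddof=1) / y.mean()
    print(f"{name}: mean={y.mean():.2f} var={y.var(ddof=1):.2f} var/mean={ratio:.2f}")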
Intra-PSU heterogeneity may induce overdispersion as follows: individuals comprising any population subgroup may differ in terms of characteristics that are known to influence the response, and if these characteristics are not included in the set of covariates in a model specification, then population heterogeneity across PSUs can lead to extra-Poisson variation in cancer counts [10,11]. Presence of overdispersion is a particular problem for the analysis of geographically correlated data. In addition to misspecification of the mean function and/or misspecification of the probability model, spatial autocorrelation is a third cause of overdispersion in geographically correlated count data [4]. For example, neighbouring PSUs may tend to have populations that are socially, economically and demographically more alike than non-neighbours, or cancer occurrence may have a tendency to cluster. The purpose of this paper is to consider the problem of modelling cancer counts when overdispersion is likely. We consider spatial regression to estimate the association between relative risk of disease and potential risk factors, and map model-predicted ratios in which counts in PSUs that are geographically close are assumed to have stronger correlation with each other than counts in PSUs that are geographically dispersed. The development of this work was motivated by our previous study of esophageal cancer (EC) incidence in the Caspian region of Iran during 2001-2005 [12,13]. This paper is structured as follows: in Section 2 we describe the Caspian cancer incidence data set from the Mazandaran and Golestan provinces of Iran and define the data structure and outcome probability models under consideration. This is followed by a description of the Bayesian hierarchical models to be employed and an automatic Bayesian covariate selection procedure to evaluate and compare the proposed models. Section 3 presents the results of fitting and comparing the competing models to EC standardised incidence ratios (SIRs) in the Caspian region of Iran using a range of goodness of fit indices. Conclusions and further discussion are presented in Section 4.

Esophageal Cancer Incidence Data in the Caspian Region of Iran

Residents of the Mazandaran and Golestan provinces of Iran constituted the study population. The aims of the analysis were to determine the extent of spatial variability in risk for esophageal cancer in this area, and to assess the degree to which this variability is associated with socioeconomic status (SES) and dietary pattern indices. During the study period, there were 1,693 EC cases in a population of around 4.5 million people. Population and EC counts were available for the 152 agglomerations in the Mazandaran and Golestan provinces. Geographic coordinates for each agglomeration were also obtained that approximately reflected the geographical centroid of each agglomeration. The distances between agglomeration centres were measured in kilometres and ranged from 9 to 507 km. Figure 1a shows the geographic boundaries of wards, cities and rural agglomerations within wards, in the two provinces. Adjustment of incidence rates for differences in the age structure of agglomerations was accomplished by calculating SIRs with a 2003 population reference. Figure 1b shows strong spatial aggregation among the SIRs, with a tendency for higher EC rates in the eastern and central agglomerations and lower rates in the west.
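Indirect age-standardisation of this kind is mechanically simple. The Python sketch below (illustrative only; the populations and reference rates are invented and are not the study's data) computes expected counts from age-stratified populations and per-stratum reference rates, and forms the SIRs.

import numpy as np

rng = np.random.default_rng(1)
J, A = 152, 5                           # areas and age strata (illustrative)
pop = rng.integers(1_000, 50_000, size=(J, A)).astype(float)
ref_rate = np.array([1e-5, 5e-5, 2e-4, 6e-4, 1.2e-3])  # reference rates by stratum

E = pop @ ref_rate                      # expected counts E_j under reference rates
Y = rng.poisson(E)                      # stand-in for the observed counts Y_j
SIR = Y / E                             # standardised incidence ratios
print(SIR[:5].round(2))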
Explanatory variables relating to SES were available for each of the 152 agglomerations, and to diet for each of the 26 wards [14]. Factor analysis was used to summarise the SES and diet variables into a few uncorrelated factors: for SES, "income", "urbanisation" and "literacy", with lower values indicating greater deprivation; and for diet, an "unrestricted food choice diet" characterized by high intake of foods generally thought to be preventive against EC and a "restricted food choice diet" with positive loadings on risky foods. Estimates of the percentage of the population in each ward with diet factor scores in the highest (3rd) tertile were used in the regression models. For the socio-economic components, the factor scores for each agglomeration were used in the regression model as continuous covariates. Further details on how the factors were created and defined for diet and SES can be found elsewhere [14].

Log-linear models are often used to describe the dependence of the mean function on k covariates, X_1, …, X_k. A general form of this type of model for J geographically defined units (areas) is given by:

log(λ_j) = log(E_j) + X_j β + θ_j,   (1)

where Y_j is the count for area j, with mean E(Y_j) = λ_j, E_j denotes an "expected" count in area j that is assumed known, X_j = (1, X_{j1}, …, X_{jk}) is a 1 × (k + 1) vector of area-level risk factors, β = (β_0, β_1, …, β_k) is a 1 × (k + 1) vector of regression parameters, and θ_j represents a residual with no spatial structure (so that θ_i and θ_j are independent for i ≠ j).

Model & Data Structure

The raw data are in the form of disease counts, Y_j, and population counts, N_j, in region j. The expected count adjusting for the age structure of an agglomeration, E_j, was obtained by age-standardisation. Then, using the theoretical relationship SIR_j = Y_j / E_j, Equation (1) is equivalent to a model for agglomeration-level SIRs. Poisson, generalised Poisson and negative binomial distributions are considered for modelling counts at the agglomeration level, and for each of these distributional assumptions, non-spatial, neighbourhood-based and distance-based spatial correlation structures are compared. These analysis approaches are now described in detail.

Distributions for Disease Counts

The Poisson model is given by:

P(Y_j = y_j) = exp(−λ_j) λ_j^{y_j} / y_j!,   y_j = 0, 1, 2, 3, …   (2)

The Poisson distribution has mean and variance E(Y_j) = V(Y_j) = λ_j. The negative binomial, NB, distribution can be constructed by adding a hierarchical element to the Poisson distribution through a random effect ε_j, specifically:

Y_j | ε_j ~ Poisson(λ_j ε_j),   ε_j ~ Gamma(ϑ, ϑ),   (3)

for y_j = 0, 1, 2, 3, …, where ϑ > 0. The resulting probability distribution function marginal to ε_j is:

P(Y_j = y_j) = [Γ(y_j + ϑ) / (Γ(ϑ) y_j!)] (ϑ / (ϑ + λ_j))^ϑ (λ_j / (ϑ + λ_j))^{y_j}.

The negative binomial model has the property that the variance, V(Y_j) = λ_j + λ_j² / ϑ, is always greater than the mean, and ϑ is the parameter of extra-Poisson variation, with large values of ϑ corresponding to variability more like the Poisson distribution. As ϑ → ∞ the distribution of Y_j converges to a Poisson random variable. The generalized Poisson, G-Poisson, model with parameters λ and ω is defined as [9]:

P(Y_j = y_j) = λ_j(1 − ω) [λ_j(1 − ω) + ω y_j]^{y_j − 1} exp{−λ_j(1 − ω) − ω y_j} / y_j!   (4)

for y_j = 0, 1, 2, 3, …, and has E(Y_j) = λ_j and V(Y_j) = λ_j (1 − ω)^{−2}. For ω = 0, the generalized Poisson reduces to the Poisson distribution with mean λ_j.
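The three count distributions can be written down compactly in code. The Python sketch below (an illustration, not the authors' implementation) evaluates log-probability mass functions consistent with the moments stated above; note that the mean-parametrised form used for the generalised Poisson is an assumption chosen so that E(Y) = λ and V(Y) = λ(1 − ω)^{−2}, and the exact parametrisation in reference [9] may differ.

import numpy as np
from scipy.special import gammaln

def poisson_logpmf(y, lam):
    return y * np.log(lam) - lam - gammaln(y + 1)

def negbin_logpmf(y, lam, theta):
    # Poisson-gamma mixture, as in (3): E(Y) = lam, V(Y) = lam + lam**2 / theta.
    return (gammaln(y + theta) - gammaln(theta) - gammaln(y + 1)
            + theta * np.log(theta / (theta + lam))
            + y * np.log(lam / (theta + lam)))

def genpoisson_logpmf(y, lam, omega):
    # Mean-parameterised generalised Poisson: E(Y) = lam, V(Y) = lam / (1 - omega)**2.
    t = lam * (1.0 - omega)
    return (np.log(t) + (y - 1) * np.log(t + omega * y)
            - (t + omega * y) - gammaln(y + 1))

y = np.arange(0, 60)
for logpmf in (poisson_logpmf(y, 8.0),
               negbin_logpmf(y, 8.0, 2.0),
               genpoisson_logpmf(y, 8.0, 0.3)):
    print(np.exp(logpmf).sum())   # each is ~1; tails beyond y = 59 are negligible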
Bayesian inference is based on constructing a model m (which encapsulates the distributional assumptions and covariate relationships with the outcome), its likelihood f(Y | γ_m, m), and the corresponding prior distribution f(γ_m | m), where γ_m is the parameter vector under model m and Y is the outcome variable vector. We use the following hierarchical structure on model parameters:

f(γ_m, m) = f(γ_m | m) f(m),   (5)

where f(m) is the prior probability for entry of covariates in the specification of the linear predictor part of the bigger model m within a class of one of the three probability assumptions above. The maximum total number of candidate models given k covariates (considered additively, i.e., no interactions) is 2^k. The usual choice for the prior on model m is the uniform distribution over the covariate parameter space M = {β_1, …, β_k}. We used this uniform distribution because the prior can be thought of as noninformative in the sense of favouring all candidate models equally within the same probability model class.

Hierarchical Models for Relative Risks

Model (1) is a non-spatial model in the sense that it neither recognizes the distance-based relationships among the J agglomerations, nor allows in area j for any neighbourhood-based effects between adjacent areas that would mean counts in one area might be related to counts in adjacent areas. Suppose the variability in the {Y_j}, j = 1, …, J, follows a spatial model that incorporates assumptions about the spatial relationships between areas. We then extend (1) as:

log(λ_j) = log(E_j) + X_j β + θ_j + ϕ_j,   (6)

where the new parameter ϕ_j represents a residual with spatial structure, with ϕ_i and ϕ_j, i ≠ j, modelled to have positive spatial dependence. Two approaches are used for modelling the J-dimensional random variable ϕ: distance-based and neighbourhood-based spatial correlation structures. In the distance-based approach the multivariate normal distribution MVN(µ, τΣ) is specified for ϕ, where µ is a 1 × J mean vector, τ > 0 controls the overall variability of the ϕ_i and Σ is a J × J positive definite matrix. If d_ij denotes the distance between the centroids of agglomerations i and j, then we specify:

Σ_ij = exp{−(ν d_ij)^κ}.   (7)

In this specification ν > 0 controls the rate of decrease of correlation with distance, with large values representing rapid decay, and τ is a scalar representing the overall precision. The parameter κ ∈ (0, 2] controls the amount by which spatial variations in the data are smoothed. Large values of κ lead to greater smoothing, with κ = 2 corresponding to the Gaussian correlation function [15]. The distance-based parameters are jointly referred to as δ = (τ, ν, κ).

Besag et al. [16] propose modelling the spatial components ϕ via a conditional autoregression (CAR) prior, describing the spatial variation in the heterogeneity component so that geographically close areas tend to present similar risks. One way of expressing this spatial structure is via Markov random field models, where the distribution of each ϕ_i given all the other elements {ϕ_1, …, ϕ_{i−1}, ϕ_{i+1}, …, ϕ_J} depends only on its neighbourhood [17]. A commonly used form for the conditional distribution of each ϕ_i is the Gaussian:

ϕ_i | ϕ_{−i} ~ N( Σ_{j≠i} π_ij ϕ_j / Σ_{j≠i} π_ij ,  1 / (σ_ϕ Σ_{j≠i} π_ij) ),   (8)

where the prior mean of each ϕ_i is defined as a weighted average of the other ϕ_j, j ≠ i, and the weights π_ij define the relationship between area i and its neighbours. The precision parameter σ_ϕ controls the amount of variability of the random effect. Although other possibilities exist, the simplest and most commonly used neighbourhood structure is defined by the existence of a common border of any length between the areas. In this case, the weights π_ij in Equation (8) are constants, specified as π_ij = 1 if i and j are adjacent and π_ij = 0 otherwise. In that case, the conditional prior mean of ϕ_i is the arithmetic average of the spatial effects of its neighbours, and the conditional prior variance is inversely proportional to the number of neighbours.
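Both correlation structures are easy to construct explicitly. The Python sketch below builds a powered-exponential distance-based covariance as in Equation (7) and the conditional CAR moments of Equation (8) with binary adjacency weights; the coordinates, the nearest-neighbour adjacency (standing in for the paper's shared-border definition) and all parameter values are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)
J = 6
coords = rng.uniform(0, 500, size=(J, 2))            # centroid coordinates (km)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# Distance-based structure: Sigma_ij = exp(-(nu * d_ij)**kappa); kappa = 2
# gives the Gaussian correlation function, tau scales the overall variability.
nu, kappa, tau = 0.01, 1.0, 1.0
Sigma = np.exp(-(nu * d) ** kappa)
phi_distance = rng.multivariate_normal(np.zeros(J), tau * Sigma)

# Neighbourhood-based (CAR) structure: binary weights pi_ij, here a
# symmetrised two-nearest-neighbour adjacency instead of shared borders.
nn = np.argsort(d, axis=1)[:, 1:3]
W = np.zeros((J, J))
for i in range(J):
    W[i, nn[i]] = 1.0
W = np.maximum(W, W.T)
n_i = W.sum(axis=1)

def car_conditional(phi, i, sigma_phi=1.0):
    # phi_i | phi_{-i} ~ N(mean of neighbours, 1 / (sigma_phi * n_i)), as in (8).
    return W[i] @ phi / n_i[i], 1.0 / (sigma_phi * n_i[i])

print(car_conditional(rng.normal(size=J), 0))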
Specification of Priors

In order to be consistent across models with the specification of prior belief, the prior distributions imposed on common parameters were the same, and non-informative priors were used. A Gamma(0.001, 0.001) prior distribution was used for ϑ in the negative binomial distribution, and a Beta(0.5, 0.5) prior for ω in the generalized Poisson distribution. The unstructured components θ_j were given independent prior distributions describing the non-spatial heterogeneity. The hyperparameters σ_θ, σ_ϕ and δ are defined below.

Specification of Hyperpriors

At the highest level of the hierarchy, prior distributions were specified for the precisions σ_θ and σ_ϕ and for the hyperparameters δ. The estimation of relative risks can be highly dependent on the choice of prior parameters [3] and, within a class of Gamma priors, the Gamma(0.5, 0.0005) distribution has been suggested as a sensible choice [2] and was adopted here for the parameters σ_θ and σ_ϕ. For the δ parameters, a Gamma(0.001, 0.001) prior was used for τ, and uniform distributions Unif(0.05, 1.95) and Unif(0.05, 20) were used for κ and ν respectively.
Gibbs Variable Selection, GVS

Candidate models can be represented as m = (α, ψ), where ψ is a set of binary indicator variables ψ_g (g = 1, …, k), with ψ_g = 1 or 0 representing respectively the presence or absence of covariate g in the model, and α denotes the other structural properties of the model. For the generalised linear models in this study, α describes the distribution, link function, variance function and (un)structured terms, and the linear predictor may be written as:

log(λ_j) = log(E_j) + β_0 + Σ_{g=1}^{k} ψ_g β_g X_{jg} + θ_j + ϕ_j.   (9)

We assume that α is fixed and we concentrate on the estimation of the posterior distribution of β within the class of probability models defined by α. The prior for (β, ψ) is specified as f(β, ψ) = f(β | ψ) f(ψ). Furthermore, β can be partitioned into two vectors, β_ψ and β_{−ψ}, corresponding to those components of β that are included (ψ_g = 1) or not included (ψ_g = 0) in the model. Then, the prior f(β | ψ) may be partitioned into a "model" prior f(β_ψ | ψ) and a "pseudo" prior f(β_{−ψ} | ψ) [18]. The full posterior distribution for the model parameters is given by:

f(β, ψ | Y) ∝ f(Y | β_ψ, ψ) f(β_ψ | ψ) f(β_{−ψ} | ψ) f(ψ),

and we assume that the actual model parameters β_ψ and the inactive parameters β_{−ψ} are a priori independent given ψ. This assumption implies that f(β_ψ | β_{−ψ}, ψ) = f(β_ψ | ψ) and f(β_{−ψ} | β_ψ, ψ) = f(β_{−ψ} | ψ). The Gibbs sampling procedure is summarized by the following three steps [19]:

(1) Sample the parameters included in the model from the posterior f(β_ψ | β_{−ψ}, ψ, Y).
(2) Sample the parameters excluded from the model from the pseudoprior f(β_{−ψ} | β_ψ, ψ).
(3) Sample each variable indicator ψ_g from a Bernoulli distribution with success probability O_g / (1 + O_g), where O_g is given by:

O_g = [f(Y | β, ψ_g = 1, ψ_{−g}) f(β | ψ_g = 1, ψ_{−g}) f(ψ_g = 1, ψ_{−g})] / [f(Y | β, ψ_g = 0, ψ_{−g}) f(β | ψ_g = 0, ψ_{−g}) f(ψ_g = 0, ψ_{−g})],

where ψ_{−g} denotes all terms of ψ except ψ_g. The algorithm is further simplified by assuming prior conditional independence of all the β_g for each model ψ. Then, each prior for β_g consists of a mixture of the true prior f(β_g | ψ_g = 1) and a pseudoprior f(β_g | ψ_g = 0). As a result:

f(β_g | ψ) = ψ_g f(β_g | ψ_g = 1) + (1 − ψ_g) f(β_g | ψ_g = 0).   (13)

We considered a normal prior and pseudoprior for the β_g, resulting in:

f(β_g | ψ_g = 1) = N(0, Σ_g)

and:

f(β_g | ψ_g = 0) = N(μ̄_g, S_g),

where μ̄_g and S_g are the mean and variance respectively of the corresponding pseudoprior distribution and Σ_g is the prior variance when covariate g is included in the model. The Normal prior assumption and Equation (13) result in a prior that is a mixture of two Normal distributions:

f(β_g | ψ) = ψ_g N(0, Σ_g) + (1 − ψ_g) N(μ̄_g, S_g).   (14)

Using the priors in Equation (14) together with Equation (9), the full conditional posterior (15) of an included coefficient is proportional to the likelihood multiplied by its N(0, Σ_g) prior, indicating that the pseudoprior does not affect the posterior distribution of the model coefficients. When no restrictions on the model space are imposed, a common prior for the indicator variables ψ_g is f(ψ_g) = Bernoulli(0.5) [20]. The Gibbs sampler was begun with all ψ_g = 1, which corresponds to starting with the full model. Consider Σ as the constructed prior covariance matrix for the whole parameter vector β when the multivariate extension of the prior distribution (14) is used for each β_g. Zellner's g-prior framework was used to define the prior variance structure for Σ [21]. The choices μ̄_g = 0 and a scale factor p = 10 were made, as these have been shown to be adequate [18]. The pseudoprior parameters μ̄_g and S_g are only relevant to the behaviour of the MCMC chain and do not affect the posterior distribution [20]. Because α is assumed fixed in our study and we have k covariates, a set of 2^k competing models is considered, and the posterior probability of model m is defined as:

f(m | Y) = f(Y | m) f(m) / Σ_{m′} f(Y | m′) f(m′).

Bayesian model averaging (BMA) obtains the posterior inclusion probability of a candidate regressor, f(ψ_g = 1 | Y), by summing the posterior model probabilities over those models in which the regressor is included. Within the disease mapping context, the aim is usually prediction. In such cases, prediction should be based on the BMA technique, which also accounts for model uncertainty [22]. Whatever the final intention (prediction using BMA or selection of a single model), we need to evaluate the posterior model probabilities.

Fully Bayesian Estimation

The Markov chain Monte Carlo (MCMC) method was employed to obtain a sample from the joint posterior distribution of the model parameters, automatically generating samples from the marginal posteriors of the parameters and hyperparameters. It has been suggested that the Gibbs sampler be run for 100,000 iterations for GVS after discarding the first 10,000 iterations as the burn-in period [23]. In our analyses, a total of 500,000 runs was used, retaining every tenth posterior draw after a burn-in of 50,000 runs. The inference for every parameter was thus based on 45,000 posterior samples. Convergence to the posterior distribution was assessed using time series scatterplots, correlograms and the Gelman-Rubin convergence statistic, as implemented in WinBUGS and CODA/BOA [24,25].
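The GVS scheme outlined above can be prototyped in a few dozen lines. The following Python sketch is a simplified stand-in, not the WinBUGS implementation used in the paper: the included coefficients are updated by random-walk Metropolis rather than exact Gibbs draws, the unstructured and spatial terms of Equation (9) are omitted, all priors and pseudopriors are normal with invented variances, and the data are simulated. With the Bernoulli(0.5) prior on each ψ_g, the prior terms cancel from the odds O_g.

import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(3)

# Toy Poisson log-linear data (invented sizes and coefficients).
J, k = 152, 5
X = rng.normal(size=(J, k))
E = np.full(J, 10.0)
Y = rng.poisson(E * np.exp(X @ np.array([0.4, 0.0, -0.3, 0.0, 0.0])))

Sigma_g, S_g = 1.0, 1.0          # prior variance (included) and pseudoprior variance
psi = np.ones(k, dtype=int)      # start from the full model
beta = np.zeros(k)
inclusion = np.zeros(k)

def loglik(b, ind):
    return poisson.logpmf(Y, E * np.exp(X @ (ind * b))).sum()

n_iter, burn = 2000, 500
for it in range(n_iter):
    for g in range(k):           # steps (1) and (2): update the coefficients
        if psi[g] == 1:          # Metropolis step targeting the posterior
            prop = beta.copy()
            prop[g] += rng.normal(scale=0.05)
            logr = (loglik(prop, psi) - loglik(beta, psi)
                    + norm.logpdf(prop[g], 0.0, Sigma_g ** 0.5)
                    - norm.logpdf(beta[g], 0.0, Sigma_g ** 0.5))
            if np.log(rng.uniform()) < logr:
                beta = prop
        else:                    # excluded: draw from the pseudoprior
            beta[g] = rng.normal(0.0, S_g ** 0.5)
    for g in range(k):           # step (3): Bernoulli update of each psi_g
        on, off = psi.copy(), psi.copy()
        on[g], off[g] = 1, 0
        log_O = (loglik(beta, on) - loglik(beta, off)
                 + norm.logpdf(beta[g], 0.0, Sigma_g ** 0.5)
                 - norm.logpdf(beta[g], 0.0, S_g ** 0.5))
        psi[g] = int(rng.uniform() < 1.0 / (1.0 + np.exp(-np.clip(log_O, -30, 30))))
    if it >= burn:               # tally draws to estimate f(psi_g = 1 | Y)
        inclusion += psi

print("estimated inclusion probabilities:", inclusion / (n_iter - burn))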
Comparison of Model Performance

Mean absolute deviance (MAD), mean-squared predictive error (MSPE), pseudo-R² [26], the deviance statistic [27], the Moran scatter plot [28] and plots of absolute deviance residuals versus fitted values [29] were used for estimating the goodness of fit (GOF) and prediction performance of the competing models. The posterior mean of λ_j was used as the plug-in estimate of λ_j to calculate all the goodness of fit measures discussed in this paper. Pseudo-R² is calculated for model comparison and takes values between zero and one. However, since R² increases as more parameters are added to a model regardless of their contribution, the pseudo-R² is adjusted for degrees of freedom (d.f.), with d.f. equal to J minus the effective number of free parameters [26]. To assess the prediction performance of the models, their mean-squared predictive error and deviance statistic are reported. Mean-squared predictive error is defined as MSPE = J^{−1} Σ_j (Y_j − λ̂_j)² and mean absolute deviance as MAD = J^{−1} Σ_j |Y_j − λ̂_j|, where λ̂_j is the plug-in fitted mean. The deviance statistic,

D = 2 Σ_j {Y_j log(Y_j / λ̂_j) − (Y_j − λ̂_j)},

provides evidence of overdispersion as follows: if the deviance index, D divided by its degrees of freedom, is much greater than 1, this suggests overdispersion. Rules of thumb on the size of the critical threshold vary from 1.2 or 1.3 to as large as 2.0 [30]. The absolute deviance residuals |d_j| were plotted against the corresponding fitted values. For a satisfactory specification of the variance function this plot should show a running mean that is approximately straight and flat. A Moran scatterplot depicts the standardised Pearson residuals on the horizontal axis versus the spatial lag of the standardised Pearson residuals on the vertical axis. The spatial lag averages the effects of the neighbouring spatial agglomerations. By construction, the slope of the line in the scatterplot is equivalent to Moran's I coefficient [31]. If the slope is positive there is positive spatial autocorrelation, while a negative slope indicates a "checkerboard" spatial pattern.
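These fit and prediction summaries are straightforward to compute from the posterior-mean fitted values. The Python helper below assumes the standard forms of the Poisson deviance, MSPE, MAD and Moran's I of Pearson residuals sketched above; it is an illustration, not the authors' code, and W is any binary adjacency matrix.

import numpy as np

def gof_measures(y, lam_hat, n_params, W=None):
    # y: observed counts; lam_hat: plug-in fitted means; n_params: effective
    # number of free parameters; W: optional adjacency matrix for Moran's I.
    J = len(y)
    out = {"MSPE": np.mean((y - lam_hat) ** 2),
           "MAD": np.mean(np.abs(y - lam_hat))}
    # Poisson deviance D = 2 * sum{ y log(y / lam) - (y - lam) }, with 0 log 0 := 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(y > 0, y * np.log(y / lam_hat), 0.0)
    D = 2.0 * np.sum(term - (y - lam_hat))
    out["deviance_index"] = D / (J - n_params)      # >> 1 suggests overdispersion
    # Degrees-of-freedom adjusted pseudo-R^2.
    ss_res = np.sum((y - lam_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    out["pseudo_R2"] = 1.0 - (ss_res / (J - n_params)) / (ss_tot / (J - 1))
    if W is not None:
        r = (y - lam_hat) / np.sqrt(lam_hat)        # Pearson residuals
        z = r - r.mean()
        out["moran_I"] = (J / W.sum()) * (z @ W @ z) / (z @ z)
    return out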
Results

The methodology described in Section 2 was applied to the esophageal cancer data from Mazandaran and Golestan.

Automatic Bayesian Model Averaging

The GVS methodology involved covariate selection conditional on the probability distribution and spatial autocorrelation type. With five SES and dietary factors there were 32 covariate models, and hence variable selection was made over the 32 models for a specified probability model type. Posterior summaries of the parameters of interest for the candidate models containing all five covariates are presented in Table 1. The posterior summaries of regression coefficients for models with spatial structure are broadly similar to those of the nonspatial models. However, the 95% credible intervals for regression coefficients in the models that included spatial structure are wider than the corresponding intervals in nonspatial models, reflecting the inter-agglomeration correlation being taken into account by the spatial model approaches. The estimated marginal posterior probabilities were calculated commencing with GVS for all the covariates. The covariates were then ranked according to the marginal posterior probabilities, and factors with marginal posterior inclusion probabilities lower than 0.2 were eliminated, using a rule of thumb [32]. With this approach the following covariates were omitted: unrestricted food choice for the non-spatial and neighbourhood-based regressions, and literacy and unrestricted food choice for the distance-based regressions. In a second stage, GVS was used again only on the selected covariates from stage one, and the subsets created by combinations of these covariates were ranked according to the model posterior probabilities. BMA of these reduced models was used for prediction purposes. Posterior model probabilities of the top two covariate subsets are presented in Table 2. As Table 2 shows, only income, urbanisation and restricted food choice appeared in the top two covariate subsets. The income and urbanisation factors appeared in all models in at least one of the top two subsets, although the rankings and the subsets' posterior probabilities were slightly different. Urbanisation did not appear in the top two subsets for negative binomial regression with either of the two spatial autocorrelation structures. Table 3 illustrates the marginal posterior inclusion probabilities for the top covariate subset of each candidate model structure.

Table 3. Marginal posterior inclusion probability for the top candidate models (covariate subsets): "IN" stands for independence, "N" stands for neighbourhood-based and "D" stands for distance-based structure.

Table 4 reports the results for the goodness of fit measures used for model comparison based on the reduced models that correspond with the covariate subset 1 models in Table 2, retaining only the variables with marginal posterior inclusion probabilities greater than 0.2.

Prediction Performance

The pseudo-R² suggested that approximately one third of the total variation in esophageal cancer counts was explained by each of the subset 1 models, with a slight improvement for the joint independence and spatial models. Figure 2 shows the scatterplot of the observed counts against the model-predicted counts; consistent with the pseudo-R² values, the scatterplots show better model fit for the spatial models.

Table 4. Goodness of fit measures: "IN" stands for independence, "N" stands for neighbourhood-based and "D" stands for distance-based structure.

For MSPE and MAD the prediction performances of all spatial models are relatively similar, but these spatial models perform better than the corresponding non-spatial models. These criteria also suggest that the negative binomial and G-Poisson models with neighbourhood-based autocorrelation were preferable to the other models. Figure 1c shows the model-adjusted cancer rates from the neighbourhood-based negative binomial regression.

Assessing Overdispersion

The deviance statistic is reported in Table 4 to provide evidence of overdispersion. Poisson models clearly show overdispersion, as do the independence structures in the generalised Poisson and negative binomial models. The deviance divided by the degrees of freedom is less than 2 for the generalised Poisson and negative binomial models with spatial correlation structures. Figure 3 presents the absolute deviance residuals plotted against the corresponding fitted values. An upward trend would indicate that the assumed variance function is not increasing sufficiently fast with the mean. The running mean for trend is overly sensitive to the points at the extremes, so we suggest concentrating on the central part of the graphs. The plots demonstrate that all models do reasonably well, and it is hard to distinguish between the competing models on the basis of this index.

The Moran Scatterplots

Moran scatterplots in Figure 4 suggest that there is positive spatial autocorrelation in the Pearson residuals in the non-spatial models. However, the scatterplots for the regressions with neighbourhood-based and distance-based structures in Figure 4 suggest that residual spatial autocorrelation is no longer a problem.

Discussion

Bayesian techniques are recognised as powerful tools in disease mapping, but little is known about how these methods compare when applied to real data. Reviews and comparisons of Bayesian hierarchical and/or non-hierarchical methods suggested for the analysis of aggregate count data in the context of disease mapping and spatial regression can be found in [2,4,33-35]. Our study aims were to assess the risk factors of EC using an automatic Bayesian covariate selection procedure, and to compare the prediction performance of the competing models using three distributions for modelling count data, to deal with overdispersion, and three spatial correlation structures, to take account of intra- and inter-agglomeration variation.
In conclusion, the use of joint models that include both spatial and nonspatial random effects gave a better picture in terms of model goodness of fit and prediction performance. Generalised Poisson and NB models also performed better than Poisson regression. Overall, generalised Poisson or NB models with a conditional autoregressive (CAR) correlation structure seemed to provide the most satisfactory basis for inference. Two spatial structures were considered in our models: the neighbourhood-based autocorrelation structure, which borrows strength from neighbouring agglomerations, and the distance-based autocorrelation structure, which borrows strength from agglomerations over an effective range. The use of the spatial term resulted in more conservative estimates by explicitly modelling the positive inter-agglomeration correlation of the SIRs, compared with the models that ignored this inter-agglomeration correlation. A nonspatial random effect was included along with the spatial random effects to take agglomeration heterogeneity into account. The nonspatial term is especially important in the CAR structure, because if the majority of the variability is nonspatial, inference for the CAR model might incorrectly suggest that spatial dependence is present. Results from a simulation study have indicated that if the data are truly independent, a model with CAR random effects and no nonspatial random effects leads to very poor efficiency in the estimation of regression coefficients [36]. In model selection the uniform prior distribution on model space is typically used by setting f(m) = 2^{−k} for every model m. When using the variable selection indicators ψ_g, this prior is equivalent to specifying independent Bernoulli prior distributions with inclusion probability equal to 0.5. Although this prior may be considered noninformative in the sense that it gives the same weight to all possible models, it has been shown that it can be considered informative, since it puts more weight on models of size close to k/2, supporting a priori overparameterised and complicated models. This is especially problematic when k is large [37,38]. When meaningful prior information about ψ is unavailable, as is usually the case, perhaps the most reasonable strategy would be a fully Bayes approach that puts weak hyperprior distributions on ψ. The potential drawback of this procedure is the computational limitation of visiting only a very small portion of the posterior when k is large, yielding unreliable estimates of ψ. We defined the inclusion indicators as independent Bernoulli(0.5) variables for three reasons. First, our set of covariates was small (k = 5) and it was very unlikely that this choice of prior would affect the BMA. Second, to minimise any possible tendency towards overparameterised models, we implemented a two-stage modelling strategy and eliminated covariates with small inclusion probability at the first stage. Third, MCMC computations for fully Bayesian models potentially impose high computational costs. By choosing a conventional empirical Bayes method we aimed to retain the useful features of Bayesian variable selection in a pragmatic way. In this paper we have compared the Poisson, generalised Poisson and NB distributions for modelling count data when overdispersion is a problem. The results indicate that the Poisson distribution is not adequate to model the cancer SIRs in our data setting. The negative binomial and the generalized Poisson distributions are more appropriate than the Poisson distribution.
The negative binomial and the generalized Poisson distributions are quite similar over the range of parameters in our study. It must be emphasized that for data with small counts, various discrete distributions can fit the data sufficiently well [39]. When competing models exist, information criteria such as the Akaike information criterion (AIC), the Bayes information criterion (BIC) and the deviance information criterion (DIC) may be useful for selecting a single "best" model for final inference. However, these standard regression techniques and selection methods do not address the uncertainty associated with model specification. In contrast, BMA considers a set of models with all available covariates. It then deals with the uncertainty in model form in the estimated parameters, which enables one to average across all models using the posterior probabilities. Moreover, using the Gibbs sampler to search the model space over all possible models is efficient, due to the limited number of covariates. We considered BMA in order to control the model uncertainty with respect to covariates. The advantages of using the BMA approach to account for model uncertainty have been assessed for several different classes of models [40-42]. Results from those studies showed that BMA improves predictive performance, by factors ranging from modest to substantial. Regarding model uncertainty, we have considered only one component: which independent variables to include in the model. There are other components, such as uncertainty about the functional forms of the independent variables, which can also be addressed by application of Bayesian methods, but there is no evidence from prior work that this has led to improved predictive performance [43-45], and as such it was not attempted here.

Conclusions

The objectives of this study were to evaluate and compare the generalised Poisson and negative binomial models with the Poisson model commonly used for analysing count data. The results indicate that: (i) models with joint independence and spatial random effects were superior to the models with an independence random effect alone; (ii) models with alternative distributions that accommodate overdispersion performed better than Poisson regression. Using a spatial random effect term has the advantage of allocating the overdispersion to spatial and non-spatial components, recognizing the inherently spatial nature of the data. It was found in the case study that generalised Poisson or negative binomial models with a conditional autoregressive correlation structure seemed to provide the most satisfactory basis for inference. The methodology presented is not specific to our example and can be applied in a variety of settings to produce more informative results than simple Poisson regression modelling.
2014-10-01T00:00:00.000Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "817f01d4601243152110c3debccce36ac0c19068", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijerph110100883", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "816307d5ce1e61ca72f8956b8c86c9c3f5645e4e", "s2fieldsofstudy": [ "Environmental Science", "Geography", "Medicine" ], "extfieldsofstudy": [ "Mathematics", "Medicine" ] }
33716407
pes2o/s2orc
v3-fos-license
Navigating conflicting laws in sexual and reproductive health service provision for teenagers

Background: The South African legal and policy framework for sexual and reproductive healthcare provision for teenagers is complex.
Objective: The article outlines the dilemmas emanating from the legal and policy framework, summarises issues with implementation of the legal and policy framework in practice, and summarises recent changes to the law.
Methods: In-depth analysis of the legal and policy framework. Training workshops with a purposive sample of nurses and other healthcare providers in the Western Cape.
Findings: Tensions between consent and confidentiality imposed by the Termination of Pregnancy Act, the Children's Act, the National Health Act and the Criminal Law (Sexual Offences and Related Matters) Amendment Act create conflicting obligations for healthcare providers. Healthcare providers' experiences with service provision in this context show that the conflicting roles they inhabit make their service provision to teenagers more challenging.
Conclusion: Healthcare providers need to learn about their legal obligations surrounding adolescent sexual and reproductive health services.

Introduction

In October 2013, the Constitutional Court delivered judgement in the so-called Teddy Bear Clinic Case, which challenged the constitutionality of provisions of the Sexual Offences Act (Criminal Law [Sexual Offences and Related Matters] Amendment Act 32 of 2007) relating to adolescents. The provisions in question directly implicated sexual and reproductive health (SRH) care providers because they criminalised a very wide range of consensual sexual activity between adolescents aged 12-15 years, including kissing on the mouth, hugging, sexual touching and sexual intercourse. These provisions also created mandatory requirements for 'anyone' with knowledge of consensual sexual activity to report this to the police, who were required to refer the case to the National Prosecuting Authority for a decision on how to proceed. Because the group of mandatory reporters is so widely defined, SRH care providers fall within this ambit. This means that, when faced with a teenager who wants to access contraception or other SRH services, healthcare providers face a tricky choice between providing services and reporting the teen. The intent of these provisions was to protect teens from unwanted or ill-advised sexual activity, but in practice their implementation was much more problematic (illustrated, for example, by the much-publicised Jules High School case, which saw three teenagers prosecuted for consensual sexual activity). The crux of the Teddy Bear Clinic's challenge to these sections of the law was that the provisions harmed the very adolescents they intended to protect. This argument was based on the fact that the sexual activity in question is developmentally age-appropriate and that criminalising such behaviour bars access to information for teenagers, unnecessarily exposes them to the criminal justice system, and potentially damages teenagers' understanding of sexuality, as well as their opportunities to develop a healthy attitude towards their bodies and sexuality. This review article provides an overview of the South African legislative framework that governed the provision of SRH services for adolescents between the ages of 12 and 15 years (until July 2015) and highlights the apparent conflicts amongst these laws and policies.
It analyses the dilemmas for healthcare providers, summarises the implications of the Constitutional Court judgement for providers and teenage patients, and sets out the changes to the law brought about in July 2015 by the Criminal Law (Sexual Offences and Related Matters) Amendment Act 5 of 2015 (hereafter referred to as the SOA Amendment Act). Lastly, it presents strategies to provide healthcare providers with guidance when providing SRH services to adolescents. The article presents data from a qualitative study undertaken by the Gender, Health and Justice Research Unit of the University of Cape Town between 2012 and 2014 to understand healthcare workers' experiences in understanding and implementing the legal framework on adolescent SRH care.

The legal framework: Conflicting laws and policies

The provisions that were challenged in the Teddy Bear Clinic Case are part of the regulatory framework that shapes a particularly tricky aspect of SRH care service provision: services for adolescents. South Africa's 1996 Constitution and Bill of Rights protect the right to make decisions regarding reproduction and the right to access healthcare services for both adults and children. Several laws breathe life into these constitutional rights and make them accessible to children. The Choice on Termination of Pregnancy Act 92 of 1996 (Republic of South Africa 1996) allows women and girls of any age to request a termination of pregnancy (TOP) up to 12 weeks, and the National Health Act 61 of 2003 (Republic of South Africa 2003) mandates that all information concerning a patient (of any age) is confidential. The Children's Act 38 of 2005 (Republic of South Africa 2005) states that children from the age of 12 may not be refused condoms and contraceptives and that such provision must be kept confidential. All these laws, as well as the Sexual Offences Act 32 of 2007 (Republic of South Africa 2007), which says that children may only freely consent to sex at 16 years of age, regulate aspects of teenagers' access to health care and sexual and reproductive rights, and also specify obligations and responsibilities of healthcare workers who provide SRH services to teenagers. We discuss each act in detail below.

The Choice on Termination of Pregnancy Act

One of the first pieces of legislation passed under the postapartheid government, the Choice on Termination of Pregnancy Act 92 of 1996 (the CTPA), allows any pregnant woman or girl to request a TOP up to 12 weeks of gestation (for more information, see note 1 in the clarification of terms at the end of the article), without consultation or approval by a doctor or nurse. The Act explicitly states that it applies to 'any female person of any age', and the courts have only limited this by adding the requirement that a child seeking an abortion also be able to provide informed consent. Minors are therefore not required to consult their parents before having an abortion (although healthcare providers should advise them to do so), and healthcare facilities may not deny children the service should they choose not to do so. Although preabortion (and post-abortion) counselling is available to patients, they are not required to undergo counselling in order to access an abortion. Structuring access in this way is intended to remove barriers to seeking help, for example where pregnant minors may have been sexually abused by their father or guardian, or where they are simply too afraid to speak to their parents about the issue.
The law is very clear that abortions in the first trimester are the autonomous decision of the pregnant woman or girl and are not subject to any conditions or requirements other than the pregnant woman's or girl's informed consent. The CTPA also stipulates that the identity of a woman who has requested or obtained an abortion shall remain confidential at all times unless she chooses to disclose that information herself. Facilities that provide terminations of pregnancies are only required to keep records of the number of abortions they perform and to forward this information to the National Department of Health on a monthly basis.

The National Health Act

The National Health Act 61 of 2003 (the NHA) deals with a wide range of health-related issues and is relevant in the context of SRH service provision because it addresses healthcare providers' duties and patients' rights. The NHA recognises that an informed decision about a medical procedure can only be made if the patient has been given all relevant information on the procedure's benefits and potential risks. The Act therefore upgraded the ethical principle of patient confidentiality to a binding requirement for the provision of healthcare services (for more information, see note 2 in the clarifications). Patient confidentiality and informed consent are key principles of medicine: they underpin a trusting relationship between provider and patient and help ensure that patients feel comfortable in accessing preventative or curative health services and health information. Breaches in confidentiality are problematic because they prevent patients from accessing care, which not only negatively affects the patient's own health but may also put others at risk, especially in a context of communicable and stigmatised health issues such as HIV and/or AIDS or (other) SRH concerns. Confidentiality and trust are critical issues in SRH services for teens, as this group are in need of credible information, but are often nervous about disclosing sexual activity or using available SRH services.

The Children's Act

The Children's Act 38 of 2005 (which was finally promulgated in its entirety, along with the accompanying Regulations, in 2010) introduced a number of critically important principles in respect of children. The Act introduced the so-called 'best interests of the child' principle, which states that 'a child's best interests are of paramount importance in every matter concerning the child', and that this standard must be applied in all matters concerning the care, protection and well-being of a child (see clarification 3). The Act also dropped the age of consent for most health-related decisions to 12 years in an effort to strengthen the autonomy of children in making decisions that affect them (including about their health). In terms of SRH services, the Act has a strong public health framing that views access to contraceptives (see clarification 4) as being in the best interests of children, by allowing sexually active teenagers to protect themselves from unprotected sex and sexually transmitted diseases, including HIV. The right to contraception, however, is intended to go hand in hand with appropriate sexuality education. The Act views healthcare providers as being well placed, and adequately trained, to detect these needs and to provide the requisite care and education. The Children's Act also protects children's rights to confidentiality about the provision of contraceptive services, as well as in terms of their health status more generally.
This confidentiality is limited, however, by requiring certain professionals (amongst them healthcare providers) to report cases where they reasonably believe a child is a victim of abuse (see clarification 5) to either the provincial Department of Social Development, a designated child protection organisation, or a police officer, in order for the matter to be investigated and, where necessary, for the appropriate measures to be taken to protect the child from further harm. Failure to report the abuse of a child carries a penalty of a fine and/or imprisonment up to a maximum of 10 years.

The Criminal Law (Sexual Offences & Related Matters) Amendment Act

The Criminal Law (Sexual Offences & Related Matters) Amendment Act 32 of 2007 (the SOA) redefined sexual offences under South African law: it changed the definition of rape by making it gender neutral, widened the ambit of what constitutes rape, and introduced a range of other statutory offences that address nonconsensual sexual acts with adults and children. The SOA set the age of consent to sex at 16 years; consensual sexual activity with people over 16 years old is therefore not a crime. The Act also mandated that children under the age of 12 are not able to give legally valid consent to sexual acts. In terms of the SOA, any sexual acts with a child under the age of 12 years were therefore regarded as nonconsensual (even where the child apparently consented) and were considered serious offences. (The SOA was amended in July 2015 by the SOA Amendment Act, which is discussed below.) For children over 12 but under 16, however, the law was much more complex. The 2007 SOA created a special category of offences related to consensual sexual activity with children over 12 years, but under the age of 16. Nonconsensual sexual activity with anyone, of any age, is always a crime. The clauses in question, Sections 15 and 16 of the 2007 SOA, listed the offences of statutory rape (for penetrative acts) and statutory sexual assault (for nonpenetrative acts), even where both parties were children in this age group (see clarification 6). For these children, even though they consented to the sexual activity, that consent was not legally valid because of their age. These clauses covered a wide range of behaviours, including direct or indirect contact between the mouth of one person and the genital organs, anus, female breasts or mouth of another person. Many of these behaviours are considered developmentally appropriate for children in this age group, for example kissing, sexual touching and penetration with a finger or sex toy (WHO 2012; Monasterio et al. 2010). These complexities are illustrated in Table 1. In addition, Section 54(1)(a) of the 2007 Sexual Offences Act obligates anyone who has knowledge of the commission of a sexual offence against a child to report it to the South African Police Services immediately (see clarification 7). Section 54(1)(b) further imposes criminal sanctions on anyone who fails to report these offences. In fact, under the law a person who fails to report knowledge of a sexual offence can be liable to a fine or to imprisonment for up to 5 years, or both. Importantly, the reporting obligation under the SOA is different from the reporting obligation under the Children's Act, which gives mandatory reporters (including healthcare providers) a range of places to report sexual abuse of a child, amongst them social workers or other designated child protection organisations.
Under the SOA, knowledge of a sexual offence has to be reported to the police. Whilst the purpose of these clauses in the 2007 SOA was to encourage the protection of children by placing everyone under an obligation to report abuse, the broad application of these provisions was widely viewed as encouraging interference with the privacy of children for reasons other than those that could be construed as being in the child's best interests (McQuoid-Mason 2011).

Providing healthcare services: balancing consent and confidentiality

The various provisions described above create an inherent tension between consent and confidentiality. In practice, these conflicting laws mean that SRH care providers, when seeing a teenager who wants to access contraception or other SRH services, are faced with a tricky choice between providing services, support and counselling for the teenager about their choices, and reporting the teenager to police and social workers in order to enforce certain aspects of the law. These tensions between consent and confidentiality are summarised in Table 2. In the healthcare setting, much of the confusion that ensues from this complex legal framework relates to consent: at which age a teenager is able to consent to either sex and/or SRH services. Although a child only legally becomes an adult at the age of 18 years, the SOA states that children can legally consent to having sex from the age of 16. Children are entitled to SRH services such as contraceptives and HIV testing from the age of 12 (or younger) to encourage access to professional advice and health services. Because it is difficult to prevent teenagers from having sex, the law ensures that they can at least access preventive health services that protect them from sexually transmitted infections and teenage pregnancy. In the Teddy Bear Clinic Case, the applicants challenged Sections 15 and 16 of the SOA, arguing that whilst these provisions intended to protect teenagers from unwanted or ill-advised sexual activity, their implementation has been highly problematic and has not always resulted in the 'best interest of the child' being upheld. The applicants argued that the provisions of the SOA actually harmed adolescents, because consensual sexual activity, as outlined under the two sections, is appropriate for the levels of development of adolescents. The applicants further argued that the provisions were particularly punitive for girls, in that if consensual sex resulted in pregnancy, the healthcare provider who provided the girl with prenatal care or an abortion would be required to report the girl to the police, and charges might result. The applicants argued that Sections 15 and 16 were unconstitutional because they infringed on children's rights to dignity, privacy, and bodily and psychological integrity. Additionally, the applicants argued that the sections infringed on children's right to have their best interests treated as being of paramount importance in all matters concerning them. Therefore, the court had to decide whether it was constitutional that children faced criminal sanctions for developmentally appropriate, consensual sexual behaviour in order to delay sexual activity and reduce the risks associated with it. The Constitutional Court agreed with the applicants' argument and declared the provisions unconstitutional, because they imposed criminal liability on children under the age of 16 and contravened the 'best interest of the child' principle.
The court gave the legislature 18 months to amend the SOA, and criminal prosecutions against children under this law were halted whilst Parliament made changes to the law. The Department of Justice and Correctional Services offered opportunities for public submission and public comment during 2014 and early 2015 on the proposed amendments to the Bill.

Table 2: Tensions between consent and confidentiality.

Confidentiality provisions:
- Children's Act: A child's right to confidentiality in respect of sexual and reproductive health services is limited where a medical practitioner reasonably believes that the child has been abused or neglected.
- National Health Act: A child has the right to confidentiality in respect of information concerning her health status, treatment or stay in a health establishment, except where records need to be disclosed in the best interest of the child, for a legitimate purpose or in the scope of a health practitioner's duties.
- SOA (2007): A child does not have the right to confidentiality in respect of sexual activity, as healthcare professionals are obligated to report knowledge of a sexual offence.

Age of consent provisions:
- General medical treatment (Children's Act): Children can consent to medical treatment without the consent of a parent when they are over the age of 12 and have sufficient maturity and the mental capacity to understand the benefits, risks and social implications.
- HIV test (Children's Act): Children aged 12 years and above can consent to an HIV test without their parents' consent. Children under 12 can consent to an HIV test without their parents' consent if they have sufficient maturity and the mental capacity to understand the benefits, risks and social implications.
- Condoms (Children's Act): Children over the age of 12 years may not be refused condoms by a healthcare provider or condom seller.
- Contraceptives (Children's Act): Contraceptives other than condoms may be provided to a child aged 12 years and above without their parents' consent. A medical examination must be done and proper advice given to the child.
- Sex (SOA, 2007): A person may legally consent to sexual activity (penetrative or nonpenetrative) at 16 years of age. (Note that this has not changed under the 2015 SOA Amendment Act.)
- Termination of pregnancy (Termination of Pregnancy Act): Any pregnant woman or girl can request a termination of pregnancy up to 12 weeks of gestation, without consultation or approval by a doctor or nurse. This means that there is no age restriction for a TOP, and girls can consent without their parents. This is to ensure that any woman or girl who needs this service can access a termination of pregnancy.

SOA, Sexual Offences Act; TOP, termination of pregnancy.

Numerous child-focused nongovernmental organisations advocated that the legislature should place the regulation of teenage sexual activity within the purview of health, social welfare and education rather than criminal justice. These organisations also advocated for the use of the 'best interests of the child' standard in amending the SOA, to clarify children's rights to privacy and confidentiality for service providers and to simplify the legal framework on SRH service provision for teens.

The Criminal Law (Sexual Offences & Related Matters) Amendment Act 5 of 2015

The 2015 SOA Amendment Act made two key changes. Firstly, it decriminalised consensual sexual activity between teenagers who are both between the ages of 12 and 15 years (regardless of the age difference between them).
Secondly, it decriminalised consensual sexual activity where one teenager is between 12 and 15 years old and the other between 16 and 17 years old, as long as the age difference between them is less than 2 years. These changes apply to both statutory rape (Section 15, relating to penetrative consensual sexual acts) and statutory sexual violation (Section 16, relating to nonpenetrative consensual sexual acts). The 2015 SOA Amendment Act has not changed the age of consent to sexual activity, which remains 16 years old, a fact that is pointed out in the preamble to the Amendment Act. The preamble also underlines the importance of 'discouraging adolescents from prematurely engaging in consensual sexual conduct which may harm their development, and from engaging in sexual conduct in a manner that increases the likelihood of the risks associated with sexual conduct materialising'. Clearly, healthcare workers have a critical role to play in this regard.

Bridging policy and practice

In practice, however, the judgement has not substantially changed the complexities of service provision, and will not until the amendments are made to the legal framework and these changes are communicated to frontline service providers. We were therefore interested in better understanding how service provision happens in this context, and in learning about healthcare providers' strategies and experiences in providing SRH services to teenagers. We spoke to 28 healthcare workers across the Western Cape and have hosted a series of workshops with healthcare providers and stakeholders with experience in children's law, public health and SRH rights, as well as representatives of local and national government. The results will be published elsewhere (Muller et al. 2016), and we therefore provide only a short summary here. In general, we found that healthcare providers did not know what their obligations under the different Acts were. Many therefore rather explained to us 'what we do here'. A particularly confusing area was the age of consent to sex, and the ages of consent to contraceptives and termination of pregnancies. Most healthcare providers were not aware that, under the provisions of the 'old' SOA that had not yet been overturned, they were obliged to report consensual sex between 12-16 year olds to the police. Instead of reporting to the police, many healthcare providers referred teenagers to a social worker. Our results also clearly show that nurses play a complex and sometimes contradictory role when providing SRH services to adolescents. Whilst on the one hand they are service providers who offer advice, support, counselling and care, together with information about safe and healthy sexual behaviour, they are also mandated to report knowledge of illegal sexual activity, and sexual abuse and violence, and thus act as law enforcers. As a result of these conflicting duties, nurses struggled with confidentiality for their patients. In cases where the teenage patient was brought to the clinic by a family member, these conflicting roles were aggravated even more. Nurses, therefore, were caught between protecting confidentiality and acting in the best interests of the child. Nurses' own values and attitudes (often as mothers themselves) were an important factor in understanding how they provide services to teens.
What emerged most clearly is that more guidance is needed for healthcare providers holding these competing roles and conflicting responsibilities. To this end, we have summarised three hypothetical scenarios to provide an overview of the legislative framework and obligations, as well as recommendations coming from the service providers we worked with (see Table 4).

Recommendations to improve teenage sexual and reproductive health service provision

Our research clearly shows that healthcare providers need to be better equipped to navigate the different obligations that they have under the existing legal, professional and ethical frameworks. We therefore make the following recommendations to improve adolescent reproductive health service provision.

Better training of healthcare providers on the legal context in which they provide sexual and reproductive health services

One of the outcomes of our project is a set of guidelines for all healthcare workers providing SRH services to adolescents, which we have developed in collaboration with the Department of Health. These will be available through the Department of Health and through our Unit. Because the reality of short-staffed and over-burdened services often makes it impossible to send people to trainings, healthcare providers who want to skill themselves independently in the legal context and changes in the law can do so at their own pace and in their own time.

Ongoing advocacy

Given that Sections 15 and 16 of the 2007 SOA have been amended, we need continued advocacy to ensure that the revised provisions are adequately disseminated to healthcare workers and other stakeholders. In addition, we need ongoing advocacy and tools to ensure that the changes in the law are translated into practice, so that healthcare workers properly understand the reality of teenage sexuality and are able to provide adequate protection from sexual violence whilst at the same time not condemning consensual sexual exploration as part of a healthy developing sexuality. Updated plain-language guidelines for the provision of services to teenagers are an important part of these efforts. We also believe that the voices of healthcare providers are critical to law reform and other policy processes, to ensure that the frameworks that govern service provision take into account the context in which health services are offered and anticipate the potential roles that healthcare providers will have to play in the enforcement of legislation.

Table 4: Hypothetical scenarios, applicable acts and obligations, and recommendations

Scenario: Girls (aged 12-16 years) seeking termination of pregnancy
Acts and obligations:
- TOP Act: No minimum age for a TOP; no parental consent needed.
- SOA: Children can only consent to sex from the age of 16. Sex with children under 12 must be reported to the police. All nonconsensual sexual activity with children (regardless of age) must be reported to the police. Currently no provisions for consensual sex between children aged 12 to 15 years (pending legislative reform).
- Children's Act: Age of consent to medical treatment is 12 years (with sufficient maturity).
Recommendations: Counsel the girl on her options in a nonjudgemental way. If you have doubts about the sexual partner or the voluntariness of the relationship, try to address these in the counselling session or refer to a social worker. Do not make this a condition for obtaining the TOP. If you have grounds to come to a conclusion of abuse, refer to a social worker or report to a designated child protection organisation or the police. Inform the girl of these steps. In accordance with the girl's decision, refer for TOP or for antenatal care.

Scenario: Teenagers (aged 12-16 years) seeking condoms, contraception or HIV testing services
Acts and obligations:
- Constitution, National Health Act: Everybody has the right to access health care, including sexual and reproductive health care. No parental consent is needed.
- Children's Act: Children above 12 years of age may not be refused condoms and contraception. Provision must be kept confidential.
- SOA: Children can only consent to sex from the age of 16. Sex with children under 12 must be reported to the police. All nonconsensual sexual activity with children (regardless of age) must be reported to the police. Currently no provisions for consensual sex between children aged 12 to 15 years (pending legislative reform).
Recommendations: Counsel teenagers on contraceptive options and safer sex in a nonjudgemental way. If you have doubts about the sexual partner or the voluntariness of the relationship, try to address these in the counselling session or refer to a social worker. Do not make this a condition for obtaining contraceptives. If you have grounds to come to a conclusion of abuse, refer to a social worker or report to a designated child protection organisation or the police. Inform the teenager of these steps.

Scenario: Two male teenagers (aged 12-16 years) in a relationship seeking advice on safer sex
Acts and obligations:
- All of the above legislation applies equally to same-sex and opposite-sex relationships.
- Constitution: No discrimination based on sexual orientation (Bill of Rights 9[1]).
Recommendations: Educate yourself and your colleagues about same-sex practices and HIV/STI prevention. Ensure that your clinic stocks condoms for anal sex, lubricant and dental dams. Provide services in a nonjudgemental way. If you lack the knowledge to counsel these teenagers, refer to an LGBTI health service (see resources at the end of the article) and seek training for future consultations.
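To make the interplay of these age thresholds concrete, the sketch below encodes the consent and decriminalisation rules summarised above as simple lookup functions. The function and service names are illustrative assumptions rather than statutory terms, the 'sufficient maturity' flag compresses a clinical assessment into a single boolean, and the reporting obligations in Table 4 are deliberately omitted; this is a reading aid under those assumptions, not legal advice.

```python
# Illustrative sketch of the age thresholds summarised above (Children's Act,
# SOA 2007 and its 2015 Amendment, Termination of Pregnancy Act).
# All names are hypothetical and the logic is deliberately simplified.

def may_consent_independently(service: str, age: int, sufficient_maturity: bool = False) -> bool:
    """True if a child may access `service` without parental consent."""
    if service == "medical_treatment":        # Children's Act: 12+ with sufficient maturity
        return age >= 12 and sufficient_maturity
    if service == "hiv_test":                 # 12+ always; under 12 with sufficient maturity
        return age >= 12 or sufficient_maturity
    if service in ("condoms", "contraceptives"):   # Children's Act: 12+
        return age >= 12
    if service == "termination_of_pregnancy":      # TOP Act: no minimum age
        return True
    if service == "sex":                      # SOA: age of consent is 16
        return age >= 16
    raise ValueError(f"unknown service: {service}")

def decriminalised_by_2015_amendment(age_a: int, age_b: int) -> bool:
    """True if consensual sexual activity between these two ages was
    decriminalised by the 2015 SOA Amendment Act."""
    younger, older = sorted((age_a, age_b))
    both_12_to_15 = 12 <= younger and older <= 15
    close_in_age = (12 <= younger <= 15 and 16 <= older <= 17
                    and older - younger < 2)
    return both_12_to_15 or close_in_age

# A 13-year-old may request a TOP or an HIV test on her own, but cannot
# legally consent to sex; a 15- and a 16-year-old fall under the
# close-in-age exemption, whereas a 14- and a 16-year-old do not.
assert may_consent_independently("termination_of_pregnancy", 13)
assert may_consent_independently("hiv_test", 13)
assert not may_consent_independently("sex", 13)
assert decriminalised_by_2015_amendment(15, 16)
assert not decriminalised_by_2015_amendment(14, 16)
```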
2017-11-07T00:38:25.796Z
2016-02-25T00:00:00.000
{ "year": 2016, "sha1": "bf32f5f64804bbe6c218b30c6689b4d368c58006", "oa_license": "CCBY", "oa_url": "https://curationis.org.za/index.php/curationis/article/download/1565/1917", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bf32f5f64804bbe6c218b30c6689b4d368c58006", "s2fieldsofstudy": [ "Law", "Medicine" ], "extfieldsofstudy": [ "Political Science", "Medicine" ] }
4374452
pes2o/s2orc
v3-fos-license
Inferior turbinectomy: what is the best technique?

Turbinectomy is a surgical procedure with an excellent outcome for many patients with nasal obstruction resistant to clinical treatment. Surgery on either the inferior or the middle turbinate can result in quite satisfactory results. However, the best surgical approach to the inferior turbinate is a matter for discussion and, to date, there is no gold standard technique that can be applied to all cases. Regardless of technique and equipment, creating more space for the passage of air and minimizing complications are the desired goals. Procedures range from simple submucosal cauterization associated with lateral fracture to resection, to a greater or lesser extent. The materials and equipment used include very sharp scissors or more sophisticated equipment such as microdebriders and radiofrequency ablation. There is abundant literature on the subject, but comparative studies among the different approaches are scarce. There are several reasons for this, and the wide variability in nasal anatomy among individuals is undoubtedly one of the factors limiting the choice of a single technique for all cases. But then, how can one choose the best technique? It depends on each case. More specifically, it depends on the anatomy of the inferior turbinate (whether the hypertrophy is more related to bone or mucosa), the extent of the hypertrophy (whether it is more anterior or posterior), the response to previous interventions, available equipment, and the surgeon's skill. Another point to be considered is how to preserve the nasal physiology. It is not uncommon to see patients who have undergone turbinectomies but still complain of nasal obstruction, or who even report that their obstruction worsened after the surgery. The following are some essential clinical and physiological concepts that should be considered by the surgeon:

1 - The symptom of nasal obstruction is poorly correlated both with rhinoscopy and imaging findings, and with specific nasal permeability tests such as rhinomanometry and acoustic rhinometry. A patient can complain of obstruction and have a normal examination, or the opposite can occur, i.e., the examination is abnormal and the patient has no complaint.
2 - Under normal conditions, the head of the inferior turbinate in the nasal valve region represents approximately 50% of intranasal airflow resistance. Decreasing the head of the inferior turbinate results in a significant increase in airflow.
3 - The correlation between the nasal area and airflow is exponential. Small increases in area generate large increases in airflow. 2
4 - Thermoreceptors present in the nasal vestibular skin and nasal mucosa are also responsible for the sensation of adequate breathing (cold thermoreceptor activation through the trigeminal nerve). 3

In brief, the choice of surgical technique should take into consideration, in addition to the local anatomy, nasal physiology concepts. Breathing well through the nose requires adequate air space and nasal sensation. 2,3 The main function of the nose is the modification of inspired air. For that to occur, it is necessary that air enter the nose; however, the presence of viable mucosa and nasal tissues is also vital for this function to occur. For these reasons, we have developed a relatively simple technique, which we currently call 'Five-minute turbinectomy', due to the mean time it takes to be performed. 4,5

The technique is based on the turbinoplasty described by Mabry 6 in 1988, with the difference of using cutting forceps and resecting the inferior turbinate head. The essential physiological principles for the approach are:
- Small area increase = large increase in airflow (exponential correlation between area and airflow);
- Removing more does not mean greater improvement (no correlation between breathing sensation and transnasal airflow area);
- Nasal sensation depends in part on the nasal mucosa of the inferior turbinate.

After local vasoconstriction, the head of the inferior turbinate is resected with cutting forceps. This is the basic difference between this technique and Mabry's turbinoplasty. The next step is the elevation of a medial mucosal flap, of greater or lesser extension according to each clinical situation. Subsequently, the resection of the bony portion and the inferior and lateral mucosa are performed and, finally, the medial flap is repositioned. A comparative study between this technique and the classical partial turbinoplasty with scissors showed similar preliminary outcomes in relation to nasal obstruction relief. 5

In conclusion, the surgical management of the inferior turbinate should be individualized, according to the patient's clinical situation, and the attending physician should know all available techniques and use them as required. Herein, experience-based medicine seems to be more important than evidence from randomized clinical trials. The surgical techniques used should prioritize improvement in air space and nasal function.
2018-01-18T04:01:56.570Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "80762e5dcae7dcf7a68bc4ea9b420d1e02300603", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.bjorl.2017.12.003", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "f539b6a2392d2c4990d358de33064fb6608dd760", "s2fieldsofstudy": [ "Engineering", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
20176327
pes2o/s2orc
v3-fos-license
Consensus about managing gastrointestinal and cardiovascular risks of nonsteroidal anti-inflammatory drugs?

In a recently published article in BMC Medicine, Scarpignato and colleagues present the results of a consensus conference that addressed several aspects of the management of pain in patients with osteoarthritis. The main areas covered include the relative safety, in regard to gastrointestinal and cardiovascular adverse events, of non-selective 'traditional' non-steroidal anti-inflammatory drugs (NSAIDs) versus cyclooxygenase-2 selective NSAIDs. The role of co-therapy with proton pump inhibitors in enhancing gastrointestinal safety is also reviewed. This commentary focuses on two areas that the consensus conference addressed: i) the risk profile of the various NSAIDs along the whole length of the gastrointestinal tract (not just the ulcer risks in the stomach and duodenum); ii) more recent information, but still some uncertainties, about the cardiovascular risks associated with the two classes of NSAID in general, and naproxen in particular. Please see related article: http://dx.doi.org/10.1186/s12916-015-0285-8

Background

As life expectancy in many countries increases into the 80s and beyond, degenerative joint disease is creating an increasing burden for patients and healthcare systems. For osteoarthritis especially, non-steroidal anti-inflammatory drugs (NSAIDs) remain the most effective option for pain relief, short of surgical alternatives such as joint replacement [1]. However, gastrointestinal (GI) ulcers and their complications are well-known NSAID side effects that are more prevalent in the elderly and are, at times, life-threatening [2]. The recognition that NSAIDs damage the stomach and duodenum (at least partly) by blocking the mucosal production of protective prostaglandins catalyzed by cyclooxygenase (COX)-1 [3] led to the development of COX-1-sparing NSAIDs. These selectively inhibit COX-2, which mediates synthesis of pro-inflammatory prostaglandins. The strategy has been successful: highly selective COX-2 inhibitors do reduce (but do not eliminate) the risk of GI ulceration [4,5]. However, an unanticipated risk that surfaced in several randomized studies was an increase in adverse cardiovascular (CV) events in patients taking COX-2 inhibitors for months or years [6-8]. The European Medicines Agency responded promptly, stating, in 2005, that 'COX-2 inhibitors must not be used in patients with established ischaemic heart disease and/or cerebrovascular disease …' [9]. On the other hand, the US Food and Drug Administration (FDA), in the same year, declined to make such a limiting statement, noting that it was unclear whether COX-2 inhibitors carried a greater vascular risk than the older non-selective NSAIDs (nsNSAIDs), that further research was required, and that in the meantime warnings about the possibility of increased CV risk with all NSAIDs should be included in drug labeling [10]. Thus, clinicians and their patients face some dilemmas about how to balance the GI and CV risks, especially in patients known to be at increased risk for both, as occurs in many elderly patients.
Cryer, in a submission on behalf of consumers to a 2014 FDA hearing that was contemplating a labeling change based solely on CV risk, emphasized the need for a comprehensive risk assessment: 'In the process of addressing cardiovascular health in the setting of NSAID use, which we applaud, we […] would not want you to inadvertently increase the risk of other untoward outcomes associated with NSAIDs, such as GI and renal toxicities' [11]. It is timely to review the area; thus, in a recent article published in BMC Medicine, Scarpignato et al. [12] report on a consensus meeting that has updated earlier guidelines using more recent information.

Discussion

Scarpignato et al. [12] used a modified Delphi approach to gauge levels of agreement and opinions on the level of evidence for nine statements about various aspects of NSAID use. These ranged from efficacy for pain relief and a comparison of GI risks with different NSAIDs, to a comparison of the CV risks with different NSAIDs. The panel was an international multidisciplinary group. It is perhaps a pity that their meeting took place more than three years ago, but the authors updated their literature search in the interval for this publication. The result is a helpful distillation of expert opinion on the areas covered. This commentary will focus on two aspects. Firstly, the consensus statement looks more comprehensively than others at the GI risks of NSAIDs, from the top to the bottom of the GI tract. While the life-threatening complications of NSAIDs (including low-dose aspirin) arise mainly from ulcers in the stomach and duodenum, it is increasingly recognized that small intestinal ulceration is also common, as one cause of iron-deficiency anemia in NSAID users and, occasionally, of frank GI hemorrhage. Looking at GI risk in its totality, statement 4 from the consensus conference reads in part: 'NSAID use is associated with increased risk of adverse events throughout the entire GI tract.' The levels of agreement and of supporting evidence were both high. There is good evidence, summarized in the consensus paper, that proton pump inhibitors (PPIs) substantially reduce the risk of upper GI ulceration and complications of both nsNSAIDs and COX-2 inhibitors. However, it is not surprising that current evidence (rated level B by the conference) indicates that PPIs do not protect against ulceration in the near-neutral pH milieu of the small intestine and colon. The second issue worthy of comment is the conclusion the consensus group reached about whether some NSAIDs are safer than others from the standpoint of CV risk. Statement 8 reads: 'The risk of CV events associated with celecoxib use is similar to that associated with the use of most ns-NSAIDs.' Eighty-four percent of the panel agreed strongly or moderately, although only just over half the panel rated the level of evidence as high. They did not endorse earlier strong recommendations from bodies such as the American Heart Association and American College of Gastroenterology that naproxen should be the NSAID of choice for patients with high CV risk [13-16]. Instead, the treatment-guidance algorithm they propose allows either naproxen or low-dose celecoxib as the preferred agents in patients with high CV risk, adding in a PPI to either if patients are judged to also be at high GI risk. As Scarpignato et al. [12] indicate, the evidence about whether naproxen has a lower CV risk has been conflicting; there is some pharmacokinetic basis to suspect it might.
Aspirin exerts its prolonged anti-platelet effect because it irreversibly acetylates platelet cyclooxygenase [17]. However, other nsNSAIDs are reversible inhibitors of the enzyme, so their platelet inhibitory effect disappears as their plasma levels dissipate [18]. Naproxen is one of the longer-acting nsNSAIDs, with a plasma elimination half-life of about 14 hours [19]; a small study of volunteers given a single dose of 1,000 mg found platelet aggregation still reduced after 24 hours in 60% of cases [20]. Thus, it is plausible that twice-daily dosing may offer some protection against thrombotic events. Seemingly in support of this, a recent large meta-analysis (the CNT collaboration) found that a coxib, diclofenac, or ibuprofen increased the rate of major vascular events by about a third (not quite significant for ibuprofen), but naproxen did not [21]. However, a meta-analysis is only as strong as its component parts, and a particular weakness of the CNT meta-analysis was that it had to indirectly compare the effects of the different drugs. That is to say, studies of drug A versus drug B were combined with studies of drug A versus placebo to estimate relative risks for drug B versus placebo (a toy numerical sketch of this kind of back-calculation is given after this commentary). As Scarpignato et al. [12] note, and a recent FDA hearing concluded, the best evidence should come from a large randomized controlled trial in arthritis patients at high CV risk; such a study is now nearing completion [22]. The Prospective Randomized Evaluation of Celecoxib Integrated Safety versus Ibuprofen Or Naproxen (PRECISION) trial, which was mandated by the FDA, has recruited more than 20,000 such patients. The last patient follow-up is scheduled for the end of 2015, so results can be anticipated during 2016 [23]. This study should give useful, real-world information to increase the evidence base for managing the high-CV-risk arthritis patient.

Conclusions

Scarpignato et al. [12] have produced a valuable summary of the current state of knowledge about the GI and CV risks of both nsNSAIDs and COX-2 selective drugs, which will be helpful for clinicians managing patients with osteoarthritis. As they emphasize, there are still uncertainties regarding the CV risk profiles of commonly used NSAIDs, and results of some ongoing research directed at this are anticipated.
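As flagged above, the following is a toy numerical sketch of a Bucher-style indirect comparison of the kind the CNT meta-analysis had to rely on. All numbers are invented for illustration (they are not taken from the CNT analysis or any trial), and the function name is an assumption.

```python
import math

# Toy Bucher-style indirect comparison (illustrative numbers only).
# log RR(B vs placebo) = log RR(A vs placebo) - log RR(A vs B).
# Variances of the two independent estimates add, which is why indirect
# estimates are less precise than head-to-head trials of the same size.

def indirect_rr(rr_a_placebo, se_a_placebo, rr_a_b, se_a_b):
    log_rr = math.log(rr_a_placebo) - math.log(rr_a_b)
    se = math.sqrt(se_a_placebo**2 + se_a_b**2)   # variances add on the log scale
    lo = math.exp(log_rr - 1.96 * se)
    hi = math.exp(log_rr + 1.96 * se)
    return math.exp(log_rr), (lo, hi)

# Hypothetical inputs: drug A raises vascular risk 1.35-fold vs placebo
# (SE 0.10 on the log scale), and A carries 1.30 times the risk of drug B
# in head-to-head trials (SE 0.12).
rr, ci = indirect_rr(1.35, 0.10, 1.30, 0.12)
print(f"indirect RR for B vs placebo: {rr:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
# -> roughly 1.04 (0.76 to 1.41): a point estimate near 1, but with a wide
#    interval, illustrating why such back-calculation is considered weak evidence.
```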
2017-04-20T03:18:10.386Z
2015-03-19T00:00:00.000
{ "year": 2015, "sha1": "1bc580eeede8a8dc02bd037a9ec50bf3bd56559a", "oa_license": "CCBY", "oa_url": "https://bmcmedicine.biomedcentral.com/track/pdf/10.1186/s12916-015-0291-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "06aeaab38f04b09994c352d3100513963fe0a2da", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
199574179
pes2o/s2orc
v3-fos-license
Dermatofibrosarcoma Protuberans Re-excision and Recurrence Rates in the Netherlands Between 1989 and 2016

Of the 1,890 patients diagnosed, 87% were treated with excision, 4% with Mohs micrographic surgery, and 9% otherwise or unknown. Linked pathology data were retrieved for 1,677 patients. Half of all excisions (847/1,644) were incomplete and 29% (192/662) of all re-excisions were incomplete. The cumulative incidence of a recurrence was 7% (95% confidence interval (CI) 6-8) during a median follow-up of 11 years (interquartile range (IQR) 6-17). After Mohs micrographic surgery (n = 34), there were no recurrences during a median follow-up of 4 years (IQR 3-6). Due to the high rate of incomplete excisions and recurrences after excision, this study supports the European guideline, which recommends treating dermatofibrosarcoma protuberans with Mohs micrographic surgery in order to decrease the rate of recurrence.
Dermatofibrosarcoma protuberans (DFSP) is a rare soft tissue tumour that originates from a translocation of chromosomes 17 and 22, resulting in tumour cell proliferation of fibrohistiocytic lineage (1). Unlike most skin cancers, DFSP is a non-UV-related skin cancer (1). The overall standardized incidence rate in the Netherlands and the USA is 4 per 1,000,000 person-years (2-4). Men and women are equally affected, and the peak incidence age is between 20 and 50 years (5-7). Although DFSP occurs mostly in adult patients, it occurs only rarely in children and adolescents under 20 years of age in the USA (1.0 per 1 million) (8). DFSP is commonly located on the trunk (50%), proximal extremities (20-30%) or head and neck (10-15%) (5-7). It presents as an asymptomatic, slowly growing, skin-coloured indurated plaque. Although DFSPs rarely metastasize, they grow in a locally invasive manner into subcutaneous fat, muscles and sometimes bone (5, 6, 9). Clinically, and with imaging tests (e.g. magnetic resonance imaging (MRI) or computed tomography (CT)), DFSPs are difficult to delineate because the tentacle-like invasion into subcutaneous tissue is often greater than suspected. As a result, multiple surgical procedures may be required to ensure complete clearance of DFSP.

Until 2015, DFSP guidelines were lacking and, in the Netherlands, the majority of DFSPs were treated with standard excision. The European consensus-based interdisciplinary guideline, which has been available since 2015, recommends treating DFSPs with Mohs micrographic surgery (MMS) in order to reduce the assumed high recurrence rate after standard excision (10).

To date, outcome data for management of DFSPs are based on small cohorts of patients, with limited information on those lost to follow-up (6, 11). Previous studies report a wide range of rates of re-excision (3-81%) and recurrence (0-46%) of DFSP (6, 7, 9, 12, 13). This nationwide cohort study of DFSP with long-term follow-up aims to determine the rate of re-excision and recurrence, which is needed to inform patients, clinicians, and health policymakers in planning optimal treatment strategies and surveillance schedules.

Patients

This cohort study included all patients with a histologically confirmed DFSP in the Netherlands between January 1989 and December 2016 (Fig. 1). Data were obtained from the Netherlands Cancer Registry (NCR), which has collected data on all newly diagnosed cancer patients in the Netherlands since 1989. Registration is based primarily on notification by the nationwide network and registry of histopathology and cytopathology (PALGA), which contains all pathology reports of all Dutch pathology laboratories. Completeness of NCR incidence data on cutaneous malignancies is 93% (14). All data used for this study from the NCR (i.e. patients' sex and age, DFSP location, type of treatment and physician) were collected from the medical records of hospitals by specially trained NCR employees. Tumour localization and morphology were registered according to the International Classification of Diseases for Oncology (ICD-O-3). Location of the primary tumour was categorized into face/scalp/neck (C44.0-C44.4), trunk (C44.5), arm/shoulder (C44.6), leg/hip (C44.7), genital (C51.0, C51.9, C63.2) or other (C44.8, C44.9). Vital status and date of death or emigration of the included patients were obtained by annual linkage with the Dutch Municipality Registers.
Study outcome

The outcome of interest was the rate of incomplete excisions and recurrences of DFSPs. The NCR registers DFSP only at the time of first primary diagnosis. Therefore, to detect all re-excisions and recurrences during follow-up, the included patients from the NCR registry were linked to PALGA. In order to have at least 2 years of follow-up, PALGA data were retrieved only for patients who were diagnosed with a DFSP before 1 January 2014. Follow-up of the patients started on the day of the first primary DFSP diagnosis and ended on the day of death or emigration, or the last date of NCR-PALGA linkage, which, for this study, was performed on 1 February 2015.

Conclusions from the PALGA pathology reports were reviewed manually (WK, EIVC, LH, CBVL) and scored on the following variables: diagnosis (DFSP, possible DFSP, other), immunohistochemical staining with CD34 (positive, negative, not performed), anatomical location (according to ICD-O-3), type of specimen (biopsy, diagnostic excision, wide local excision, re-excision, MMS, Breuninger surgery, other, unclear), histological clearance (yes, no, unknown, not applicable in the case of diagnostic biopsies), invasion into muscle (yes, no, possibly), fibrosarcomatous changes (yes, no, possibly) and clinical excision margins (in mm) (1). Invasion into muscle, immunohistochemistry for CD34, fibrosarcomatous changes and clinical excision margins were missing for 50-99% of cases and were therefore not included in the final analysis.

All pathology reports with an uncertain DFSP diagnosis (i.e. when the pathologist was in doubt about the diagnosis or if the pathology report was unclear) were excluded from the analyses (n = 297). Incompletely excised DFSP included DFSP that histologically invaded the inked surgical margin. Local DFSP recurrence included histologically proven DFSP that occurred at least 4 months after the previous pathology report, because it was assumed that re-excisions would occur within this period.

Statistical analysis

Annual incidence rates were calculated by sex, age groups and body sites per 1,000,000 person-years from 1989 to 2016, using the annual population size acquired from Statistics Netherlands (https://opendata.cbs.nl/statline/#/CBS/en/). Standardized incidence rates were calculated using the European standard population (2013) (15). Descriptive statistics were used to report the baseline characteristics of patients, DFSP, treatment and study outcome. In order to estimate the number of surgical procedures during follow-up (i.e. including the first surgical treatment of the primary DFSP and all re-excisions and/or recurrences), the mean cumulative count was calculated, which is equal to the sum of the cumulative incidences of all surgical procedures (16). To estimate the probability of the first DFSP recurrence during follow-up, a cumulative incidence curve (CIC) was calculated, which takes the competing risk of death into account (17). Statistical analyses were performed using STATA (version 15), SAS 9.4 statistical software (SAS Institute Inc., Cary, NC, USA) and R statistical software version 3.4.1 (www.r-project.org). p-values < 0.05 (2-sided) were considered statistically significant.
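As an illustration of the competing-risks construction described above, the following is a minimal sketch of a nonparametric cumulative incidence estimator in which death before recurrence is treated as a competing event. It is a generic Aalen-Johansen-style calculation on invented data, written here in Python for exposition; it is not the authors' STATA, SAS or R code, and all variable names and numbers are assumptions.

```python
import numpy as np

# Minimal nonparametric cumulative incidence with a competing risk
# (Aalen-Johansen-style). Event codes: 0 = censored, 1 = recurrence
# (event of interest), 2 = death before recurrence (competing risk).

def cumulative_incidence(times, events, event_of_interest=1):
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk = len(times)
    surv = 1.0   # overall event-free survival just before time t
    cic = 0.0
    out = []
    for t in np.unique(times):
        mask = times == t
        d_int = np.sum(events[mask] == event_of_interest)
        d_all = np.sum(events[mask] != 0)     # any event (interest or competing)
        cic += surv * d_int / at_risk         # hazard of interest, weighted by S(t-)
        surv *= 1.0 - d_all / at_risk         # update overall event-free survival
        at_risk -= np.sum(mask)               # drop events and censorings at t
        out.append((t, cic))
    return out

# Hypothetical follow-up times (years) and outcomes for 8 patients.
times  = [1.0, 2.5, 3.0, 4.0, 5.5, 7.0, 9.0, 11.0]
events = [1,   0,   2,   1,   0,   2,   0,   1]
for t, ci in cumulative_incidence(times, events):
    print(f"t = {t:4.1f} y   cumulative incidence of recurrence = {ci:.3f}")
```

Because the competing event (death) removes patients who can never recur, this estimate is lower than a naive 1 minus Kaplan-Meier calculation that censors deaths, which is the point of using a CIC here.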
Incidence and treatment of the first dermatofibrosarcoma protuberans

A total of 1,890 patients were diagnosed with a DFSP in the Netherlands between 1989 and 2016 (Table I). Both the crude and European standardized incidence rates of DFSP were 4.2 per 1,000,000 person-years (Table II). The incidence rate of DFSP was stable between 1989 and 2016. Incidence rates were comparable for men and women. Half of the 1,890 patients with a DFSP were men (49%) and the overall median age at diagnosis was 41 years (IQR 31-41). DFSPs were most commonly located on the trunk (45%), followed by arm/shoulder (24%), leg/hip (16%), head and neck (13%) and the genital area (1%) (Table I).

The majority of the 1,890 patients with a primary DFSP were treated with excision (87%). Data from the NCR on the first primary DFSP showed that more than half of the 1,890 patients (56%) underwent a single excision, whereas 25% underwent 2 excisions and 6% underwent 3 or more excisions. Only 4% of patients underwent MMS as a primary treatment or as additional treatment after excision, and 1% were not treated at all. Non-surgical treatments included postoperative radiotherapy (6%) and/or other types of treatment, such as tyrosine kinase inhibitors (1%). The majority of first treatments for DFSPs were performed by surgeons (38%), while dermatologists treated only 11% of DFSPs. The other DFSPs were treated by plastic surgeons (6%) or general practitioners (2%), or by physicians who worked in a multidisciplinary team (13%), or the treating physician was unknown (30%).

Re-excisions

For 1,677 patients who were diagnosed between 1989 and 2013, linked pathology data were retrieved from PALGA (Table III). Patient and tumour characteristics were similar to those of patients without linked pathology data (data not shown). Of the 1,677 patients, 35% underwent a single surgical treatment for a primary DFSP during a median follow-up of 11 years (IQR 6-17). Half of all patients (51%: (588+180+78)/1,677) underwent multiple surgical treatments. The number of surgical treatments was unknown for 14% (n = 240) of all patients. Of all 1,644 pathology reports of DFSP excisions, 32% (n = 524) were completely excised, 52% (n = 847) were incompletely excised and histological clearance was unknown for 17% (n = 273) of all reports. Of all 662 pathology reports of DFSP re-excisions, 61% (n = 401) were completely excised, 29% (n = 192) were incompletely excised and histological clearance was unknown for 69 reports (10%). The mean cumulative count of surgical treatments per patient was 1.4 (95% CI 1.3-1.4) after a follow-up of 6 months, and remained stable thereafter (Fig. 2).

DISCUSSION

This large nationwide cohort study of patients with DFSP shows that the efficacy of excision is poor, given the high rate of patients who underwent multiple surgical excisions (51%) to clear all tumour cells. This study also showed that 10% of all patients experienced at least one recurrence during a median follow-up of 11 years (IQR 6-17).
Incidence and treatment of the first dermatofibrosarcoma protuberans

In concordance with other studies, the ratio of incidence rates for men and women was 1:1. The majority of DFSPs occurred among young people (median age 41 years), and the most common location was the trunk (45%) (5, 6). The majority of DFSP excisions were performed by surgeons. This is probably due to the referral pattern of general practitioners in the Netherlands, who tend to refer patients with a sarcoma or a relatively large tumour to surgeons. Ideally, these patients are referred to dermatologists in specialized centres where multidisciplinary experts work together in order to plan optimal treatment strategies.

While the European guideline recommends treating DFSPs with MMS, this study shows that only 4% of all DFSPs were treated with MMS (10). The low percentage of patients treated with MMS is due to the introduction of the Dutch guideline in 2015 (while the cases were included between 1989 and 2016), and to the fact that DFSPs have been treated with MMS in only a single university medical centre since 2008. Only a few cases were treated with postoperative radiotherapy in our study, probably because it is still unclear whether radiotherapy is effective in slowly growing tumours such as DFSP. Also, only a few cases were treated with tyrosine kinase inhibitors (imatinib), probably because systemic treatment for DFSP is indicated only for metastasized tumours or for tumours that could not be treated surgically, which is rarely the case for DFSPs (18, 19). We observed that, in our large population-based sample, 51% of DFSPs were re-excised and 10% recurred. Rates of re-excision and recurrence vary widely between studies: between 3-81% and 0-46%, respectively (6, 7, 9, 12, 13). This variation is most likely due to the small cohort sizes of the studies (range 14-451) (6, 11), and to the heterogeneity of included patients regarding anatomical locations (e.g. head and neck only vs. all body sites), surgical treatments used (e.g. wide local excision vs. MMS), clinical excision margin size (e.g. small vs. wide), physician (e.g. surgeon, plastic surgeon, dermatologist), methodology of collecting follow-up data (e.g. from the patient files, patient consultation by phone or doctor's visit), length of follow-up (a few months up to several years) and numbers of patients lost during follow-up (often not specified).

The observed DFSP re-excision rate of 51% is much higher than the known re-excision rates for basal cell carcinoma (BCC) (7-30%) (20) and squamous cell carcinoma (SCC) (0-25%) (21, 22). Multiple aspects contribute to the high re-excision rate for DFSP compared with BCC and SCC. First, DFSP is a rare tumour and therefore physicians may be less familiar with the clinical recognition and delineation of the extent of a DFSP. Secondly, physicians who are experienced in treating DFSP also find it difficult to delineate the extent of a DFSP preoperatively because of the subcutaneous tentacle-like invasion, which might be invisible to the naked eye both clinically and on imaging tests (e.g. MRI or CT). Thirdly, DFSP does not grow in a symmetrical manner around the clinically visible centre. Therefore, a clinically tumour-free margin even up to several centimetres around the clinically visible tumour centre often results in histologically tumour-positive margins on one side of the tumour, while on the other side healthy tissue is unnecessarily excised.
Recurrences

Although our observed 10% recurrence rate of DFSP during a median follow-up period of 11 years (IQR 6-17) is within the range of known recurrence rates for BCC (12%) (23), SCC (10%) (21, 22) and melanoma (12%) (24), a recurrence rate of 7% is clinically relevant (21-24). It is most likely that histopathologically missed residual tumour continued to grow and presented in time as a recurrent DFSP. DFSP might be absent on the evaluated slides, while still being present in the patient, because, with the standardized bread-loaf technique, only a few vertical slides through the excised specimen are examined, representing only a small portion of the true excision margins.

Although this study presented only 34 patients who were treated with MMS, none of the patients developed a recurrence during a median follow-up of 4 years (IQR 3-6), which is in line with other studies. A possible lack of aggressiveness of DFSPs treated with MMS compared with DFSPs treated with standard excision cannot explain this finding, because only a single university centre performed MMS, for all DFSPs treated in that centre since 2007. Other university centres performed standard excision for DFSPs. There were thus no referral patterns that could explain this finding. Therefore, our results suggest that MMS is an appropriate treatment for DFSP (25-28).

The observation that the majority of DFSP recurrences occurred within the first 5 years of follow-up is in line with the literature (5, 6) and implies that follow-up of at least 5 years is reasonable, especially because of the difficulty of distinguishing a nodular recurrence from scar tissue.

Strengths and limitations

Strengths of this study are the use of nationwide cancer registry data, which resulted in a large number of cases of DFSP, a robust data-set to detect re-excision and recurrence rates using the nationwide pathology database, and the long-term follow-up period (up to 26 years). Limitations include a lack of information concerning high-risk features for most pathology reports, such as invasion into muscle and fibrosarcomatous changes. Another limitation is that 17% of the pathology reports of primary excisions and 10% of the pathology reports of re-excisions did not contain conclusive information on histological clearance. Therefore, the rates of incomplete excision and recurrence of DFSP were probably underestimated.

Conclusion

This study reports a high rate of incomplete excisions of DFSP (51%) and a clinically relevant high recurrence rate (10%) during a median follow-up of 11 years. Multiple surgical procedures can lead to poor functional and cosmetic outcomes for patients, with higher costs to society. This study shows that there is a need to improve the quality of care for DFSP, and the results support the current European guideline, which recommends treating DFSPs with MMS instead of excision (10).

Fig. 2. Mean cumulative count of surgical treatments of dermatofibrosarcoma protuberans (DFSP), diagnosed between 1989 and 2013 and followed up until 2015, using data from the Dutch nationwide pathology database. The majority of surgical treatments occurred within the first 6 months (vertical line).
Fig. 3. Cumulative incidence curve of the first recurrence, with 95% confidence interval, of dermatofibrosarcoma protuberans diagnosed between 1989 and 2013 and followed up until 2015, using data from the Dutch nationwide pathology database. The majority of recurrences occurred within 5 years of follow-up.

Table I. Characteristics of patients diagnosed with a primary dermatofibrosarcoma protuberans (DFSP) in the Netherlands between 1989 and 2016, according to data from the Netherlands Cancer Registry (NCR). a) Others included, e.g. tyrosine kinase inhibitors. Percentages were rounded.

Table II. Incidence rates standardized to other standard populations. ESR: European standardized incidence rate; WSR: World standardized incidence rate.
2019-08-15T13:05:18.131Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "d3cc64e2858350d1709492fc0049b1c870bf12dc", "oa_license": "CCBYNC", "oa_url": "https://www.medicaljournals.se/acta/download/10.2340/00015555-3287/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1b6d5b9c048f74d79740483aa7fa7034b87bfb5c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
155786470
pes2o/s2orc
v3-fos-license
Eclipsed mitral regurgitation successfully treated with a combination of surgical and pharmacological therapies: a case report

Abstract

Background: Eclipsed mitral regurgitation (MR), which is characterized by a transient and reversible massive functional MR, usually causes recurrent episodes of acute pulmonary oedema in patients with a preserved left ventricular ejection fraction. The pathophysiological mechanism and optimal treatment of eclipsed MR are not yet fully understood.

Case summary: A 72-year-old woman was hospitalized with cardiogenic shock and takotsubo cardiomyopathy. After hospitalization, worsening dyspnoea again appeared, and urgent transthoracic echocardiography revealed severe MR, which spontaneously resolved within a few minutes. At this point, eclipsed MR was detected for the first time. Diagnostic examination revealed that the eclipsed MR was caused by an increase in left ventricular afterload. Ultimately, the patient began medical therapy and underwent mitral valve replacement. The subsequent clinical course was favourable.

Discussion: This case illustrates the importance of early intervention for eclipsed MR. A combination of surgical and pharmacological therapies can serve as one treatment option for eclipsed MR.

Introduction

Eclipsed mitral regurgitation (MR) was first reported by Avierinos et al., 1 and since then only a few cases have been reported in the literature. 2-4 Characterized by transient and reversible massive functional MR, the clinical presentation of eclipsed MR is usually recurrent episodes of acute pulmonary oedema in patients with a preserved left ventricular ejection fraction (LVEF). Since it is a transient phenomenon rarely reported, the diagnosis is probably underestimated. In addition, the pathophysiological mechanism and optimal treatment of eclipsed MR are not yet fully understood.

Learning points
• Eclipsed mitral regurgitation (MR) can cause a life-threatening condition, such as cardiogenic shock; therefore, early intervention should be considered.
• A combination of surgical and pharmacological therapies can serve as one treatment option for an eclipsed MR.

Case presentation

A 72-year-old Japanese woman was transferred to our hospital with dyspnoea. Over the past 2 years, she had been admitted four times for heart failure requiring intensive treatment. Her medical history included paroxysmal atrial fibrillation, hypertension, and chronic kidney disease. Seven months prior, a catheter ablation had been performed because paroxysmal atrial fibrillation was suspected as a trigger of the heart failure. However, after the catheter ablation, she had repeated episodes of heart failure. On arrival at the hospital, her blood pressure was 83/34 mmHg and pulse rate 98 b.p.m. The percutaneous oxygen saturation under room air was 70%. She was dyspnoeic, with poor perfusion as evident from cold extremities and poor capillary refill. Respiratory system auscultation revealed coarse crackles and wheezes bilaterally. No heart murmurs were heard. Jugular venous distention and mild peripheral oedema were observed. Arterial blood gas analysis showed metabolic acidosis with a pH of 7.11 (normal range 7.35-7.45). An electrocardiogram showed significant ST elevation in leads V2-V6 (Figure 1A). A chest X-ray demonstrated acute pulmonary oedema (Figure 1B). A transthoracic echocardiogram revealed severe apical and mid-ventricular hypokinesis with an estimated LVEF of 20%, and her MR was mild.
Emergent coronary angiography did not demonstrate any significant coronary artery disease. No left ventriculography was performed because her serum creatinine was 1.50 mg/dL (normal range 0.4-1.2 mg/dL). Right heart catheterization revealed a pulmonary artery pressure of 45/27/36 mmHg (normal range 15-30/4-12/9-19 mmHg), a pulmonary capillary wedge pressure (PCWP) of 33 mmHg (normal range 4-12 mmHg), and a cardiac index of 1.28 L/min/m² (normal range 2.5-4.0 L/min/m²). She was diagnosed with cardiogenic shock and acute heart failure caused by takotsubo cardiomyopathy. Intra-aortic balloon pumping (IABP) was initiated to stabilize her haemodynamic state. On Day 3, her haemodynamic condition improved and the IABP was removed successfully. A transthoracic echocardiogram demonstrated complete recovery of the left ventricular function. However, 8 days later worsening dyspnoea again appeared, and a physical examination revealed a 3/6 systolic murmur at the cardiac apex. Urgent transthoracic echocardiography revealed severe MR with tenting of the mitral leaflets, which had been absent on admission. In addition, severe tricuspid regurgitation was observed, and the estimated right ventricular systolic pressure had increased to 68 mmHg. A few minutes later, the MR spontaneously resolved and was graded as only mild, with normal leaflet coaptation. At this point in time, transient and severe MR, termed 'eclipsed MR' in a few past reports, was suspected for the first time as the cause of the repeated heart failure episodes. We decided to perform a diagnostic examination in the catheterization laboratory to investigate the cause of the eclipsed MR. During the test, the PCWP and central arterial pressure were monitored. At baseline, MR was mild and the PCWP was in the normal range (Figures 2A and 3A). Initially, a coronary spasm provocation test was performed. Coronary angiography showed no significant stenosis after a methylergonovine infusion, and the haemodynamic parameters remained unchanged. Next, the handgrip manoeuvre was performed, but this could not induce a sufficient increase in the central blood pressure. We therefore decided to use norepinephrine, which selectively increases systemic vascular resistance and afterload. After the norepinephrine infusion (0.07 µg/kg/min), the central arterial pressure increased dramatically. A transthoracic echocardiogram in the supine position revealed severe MR, which had been absent before the test. Echocardiographic characteristics changed significantly (Table 1). A transoesophageal echocardiogram was performed immediately and revealed severe MR with extreme apical tenting of both leaflets, resulting in a total lack of coaptation (Figure 2B). In addition, the dynamic MR led to dramatic changes in the cardiovascular haemodynamics. Right heart catheterization revealed large V waves in the pulmonary wedge pressure (Figure 3B). As the central arterial pressure decreased, the MR was again found to be only mild on the echocardiogram. Other investigations were also performed. Renal echography revealed the absence of any renal artery stenosis. Pheochromocytoma was ruled out by a normal plasma catecholamine level. Based on the aforementioned findings, we concluded that the eclipsed MR was caused by an increase in left ventricular afterload. With the diagnosis and aetiology confirmed, the patient began medical therapy, including beta-blockers and angiotensin-converting enzyme inhibitors, aimed at decreasing the cardiac afterload.
However, 6 days after starting the medications, she redeveloped severe dyspnoea and cardiogenic shock with severe acidosis secondary to eclipsed MR. Since previous reports have also described that eclipsed MR often recurs despite optimal medical therapy, 3,5 we considered that pharmacological therapy alone would be insufficient for management of the eclipsed MR. In addition, the eclipsed MR was considered to be more critical than we had expected, because the patient experienced a life-threatening condition repeatedly throughout the clinical course: in the worst case, this could lead to cardiac death. The severity of the clinical course urged us to undertake surgical therapy for our patient. After much discussion between the patient and our heart team, we decided to perform mitral surgery. On Day 43, the patient underwent mitral valve replacement using a bioprosthetic valve. The surgical findings revealed no organic changes in the mitral valve itself or the mitral valve complex. The post-operative clinical course was favourable, and the patient was discharged uneventfully. Five months after the surgery, she was readmitted to our hospital for follow-up. As before, the haemodynamic parameters were monitored. We confirmed that the central blood pressure was more difficult to increase than previously after the same norepinephrine infusion (0.07 µg/kg/min) (Figure 2C). At that time, the MR was not exacerbated (Figure 3C). One year and 3 months after surgery, the patient has been living with no recurrence of symptoms.

Discussion

Here, we have presented a case of eclipsed MR, which is characterized by transient and reversible severe MR. The clinical course of eclipsed MR usually includes recurrent unexplained acute pulmonary oedema. However, our patient repeatedly experienced a life-threatening condition, such as cardiogenic shock. Therefore, this case shows that eclipsed MR can be much more serious than might be expected, so early intervention should be considered when eclipsed MR is diagnosed as the cause of repeated heart failure episodes or cardiogenic shock. The pathophysiological mechanism of eclipsed MR is not yet fully understood. In past reports, some authors have suggested that coronary spasms or microvascular dysfunction represent the mechanism. It has been theorized that eclipsed MR appears when coronary spasms trigger transient left ventricular dysfunction and mitral apparatus ischaemia. 6 Yet in the present case, no coronary spasms were induced and the haemodynamic parameters were unchanged after the administration of methylergonovine. However, we were able to reproduce the eclipsed MR by the administration of a pharmacological afterload challenge with norepinephrine.
2019-05-17T13:33:44.192Z
2019-04-21T00:00:00.000
{ "year": 2019, "sha1": "464666c65790eef7181f9ebbfb3da8e6725a7314", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/ehjcr/article-pdf/3/2/ytz039/28888751/ytz039.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "464666c65790eef7181f9ebbfb3da8e6725a7314", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
23841407
pes2o/s2orc
v3-fos-license
Communication rights from the margins: politicising young refugees' smartphone pocket archives

Politicising the smartphone pocket archives and experiences of 16 young refugees living in the Netherlands, this explorative study re-conceptualises and empirically grounds communication rights. The focus is on the usage of social media among young refugees, who operate from the margins of society, human rights discourse and technology. I focus on digital performativity as a means to address unjust communicative power relations and human rights violations. Methodologically, I draw on empirical data gathered through a mixed-methods, participatory action fieldwork research approach. The empirical section details how digital practices may invoke human rights ideals, including the human right to self-determination, the right to self-expression, the right to information, the right to family life and the right to cultural identity. The digital performativity of communication rights becomes meaningful when fundamentally situated within hierarchical and intersectional power relations of gender, race and nationality, among others, and as inherently related to material conditions and other basic human rights, including access to shelter, food, well-being and education.

The smartphone pocket archives studied here yield empirical data offering a glance at unimaginable personal trajectories. Of particular urgency for the present intervention, they are replete with articulations of social injustices and appeals to human rights. Conceptually, these narrative practices can be conceived as distinct performative practices. Through cross-platform digital storytelling practices, young refugees share, like, post and 'type themselves into being' in various digital databases (Sundén, 2003). They create worlds and livelihoods through medium- and platform-specific actions. These acts may have a certain political transformative potential, and in this article this potential will be conceptualised and empirically charted as the digital enactment of claims for communication rights.

Although the EU is premised on the slogan of 'Unity in diversity', the margins of Europe reveal a different, harsh reality. Migrant groups have bitterly contradictory experiences: although the groups are quite similar in size, with the number of expatriates estimated to reach 56.8 million by the end of 2017 (Finaccord, 2014) and over 65 million people forcibly displaced globally in 2015 (United Nations High Commissioner for Refugees, 2016), those travelling irregularly commonly experience hardship, in particular in contrast with voluntary migrants, especially highly skilled expatriates. While e-passports, iris scans and on-board airplane wireless Internet facilitate the lifestyle of the global (cosmopolitan?) elites zipping in and out of Europe, Europe remains the deadliest migration destination in the world for forced migrants. In 2016, alongside 387,739 arrivals to Europe, 5,098 people 'died/went missing', and as of August 16, alongside 124,863 arrivals, 2,410 have 'died/went missing' (IOM, 2017). Nearly 2,000 official entry ports and 60,000 km of land and sea borders are increasingly managed through digital technologies, and refugees experience 'smart borders' entirely differently from expats. At the Mediterranean Sea, their phone signals may be traced by drones and satellites that are part of the European Border Surveillance System.
Upon arrival, they may be physically coerced to have their fingerprints scanned so that an algorithm can decide upon their futures on the basis of the European Dactyloscopy biometric database. There are also important similarities: both voluntary and forced migrants are increasingly 'connected migrants' (Diminescu, 2008), who live in one place and use mobile devices and social media platforms to conduct their lives across the world. Although digital divides along the axes of race, gender, age and class persist, refugees are increasingly connected refugees: the UNHCR estimates that over two-thirds of refugee households living in urban settings (which is the case for most forced migrants in Europe) have access to an Internet-enabled phone (2016: 14). Young refugees are connected migrants and actively contribute to transnational 'digital diaspora' formation (Gajjala, forthcoming 2018). This article makes an intervention in the media and migration literature by analysing the digital practices and experiences of connected migrants through the analytic lens of communication rights. This normative and critical framework is oriented towards recognising agency, empowerment, dignity, family life and communicative freedom (Hamelink and Hoffman, 2008), which is of great importance to achieve a more nuanced understanding of the situation and experiences of forced migrants (Thomas, 2011). Connected migrants may indeed actively claim the right to engage in cosmopolitan, intercultural exchange, but also to communicate, associate and preserve one's identity and family relations online, across borders and through the devices and social media platforms they prefer. In particular, I operationalise communication rights from the perspective of speech-act theory. From this perspective, young refugees may become 'performative right claiming-subjects' (Isin and Ruppert, 2015: 10) through engaging in digital practices, like posting status updates, photos and videos or having transnational Skype or Viber video conversations, that appeal to or invoke human rights ideals. An illustrative case in point is 8-year-old Bana Alabed. Managing the social media account @AlabedBana with her mother, she famously tweeted and vlogged about her experiences growing up during the civil war in Aleppo, Syria. As a refugee living in Turkey, she currently asserts herself as a peace activist and refugee rights advocate. For example, on World Refugee Day, June 20, she posted a video in which she pleaded with the 'world to be honest with refugees' (AlabedBana, 2017; see also Halasa, Omareen and Mahfoud, 2014). Time Magazine recognised her global impact and visibility and ranked her among 'the 25 most influential people on the internet' (Time Staff, 2017). As Engin Isin and Evelyn Ruppert argue, some digital acts can be seen 'as a kind of speech act and means of social struggle', and they hypothesise that recognising the multiplicity of such acts may reveal rights 'as not static or universal but historical and situated' (2015: 10-11). So far, however, the digital performativity of rights has been theorised as a universally possible process. Although strongly theorised, communication rights and the digital performativity of rights claims are scarcely empirically grounded in the situated, power-ridden experiences of particular individuals and communities. My intervention is aimed at the theoretical, methodological and empirical levels, and the argument is structured according to these three aims.
The first section, focused on 'law-in-books' (Rap, 2016: 147), addresses urgent conceptual gaps in the literature at the intersections of human rights, migration and ICTs. I seek to bring diverging fields into dialogue by rethinking communication rights from the perspective of the digital archives of refugee youth. Research on migration and digital media largely ignores human rights discourse as an analytic frame (notable exceptions are Costanza-Chock, 2014; Thomas, 2011; Witteborn, 2011), while human rights scholarship on migrants typically does not focus on the usage of digital technologies (Nicholls, 2013). Moreover, attention to the specific experiences of young people has only recently emerged in literature addressing either the intersections of migration and the Internet (Alinejad, 2017) or human rights and the Internet (Livingstone and Third, 2017). In the second section, I position my methodological reflections in response to previous research projects that have sought to empower refugee youth through external digital storytelling interventions. Aiming to combine creative, participatory and digital methodological techniques, I discuss how informants shared and co-researched with me their smartphones as personal pocket archives. The third section focuses on 'law-in-action' (Rap, 2016: 147) and presents new and rich empirical data on the experiences of 16 young refugees from various backgrounds living in the Netherlands. It offers an extensive empirical analysis of their digital performativity through the lens of making communication rights claims.

Communication rights

Introducing communication rights

the time will come when the Universal Declaration of Human Rights will have to encompass a more extensive right than man's right to information, first laid down in Article 19. This is the right of man [sic] to communicate. This is the angle from which the future development of communication will have to be considered to be understood (Jean D'Arcy, 1969).

Jean D'Arcy introduced the notion of the right to communicate almost 50 years ago at the level of the United Nations. If attention to interaction and dialogue were to be institutionalised, it would result in a paradigm shift, as it would 'go far beyond what is addressed by the traditional freedom of expression' (Hamelink and Hoffman, 2008: 6). In response to D'Arcy's call, UNESCO formulated and adopted resolutions at general conferences and expert meetings, and the final report of the UNESCO-appointed MacBride Commission advised:

communication needs in a democratic society should be met by the extension of specific rights such as the right to be informed, the right to inform, the right to privacy, the right to participate in public communication -all elements of a new concept, the right to communicate. In developing what might be called a new era of social rights, we suggest all the implications of the right to communicate be further explored (UNESCO, 1981, p. 265).

Scholars and activists have noted the difficulties of defining and delineating communication rights, as a result of the changing communication media landscape, distinctively located communicative practices and the great variety of actors involved, including governments, states, NGOs, corporations, activists and citizens.
For example, the MacBride report recognised the following 'functions of communication' that serve individuals and communities: information, socialisation, motivation, debate and discussion, education, cultural promotion, entertainment and integration (UNESCO, 1981: 14). In sum, there is no such thing as a universal definition; rather, communication rights offer a critical vocabulary for mapping agency, with a focus on dialogue and exchange. As a future-looking orientation, the framework offers a 'scaffolding' to map 'communication deficits' (Thomas, 2011: 5) in order to address and change unjust communicative power relations.

Legal underpinnings

It should be noted that the right to communicate disappeared from the UNESCO agenda in the early 1990s. Despite subsequent NGO rallying for communication rights in the early 2000s, international law thus does not yet provide for communication rights. There are, however, several underpinnings and bases for this project in the International Bill of Human Rights and the Convention on the Rights of the Child (1989). Nonetheless, human rights discourse lags remarkably behind in accounting for changing media and ICT environments. For example, the provision of free speech (UDHR Article 19) centres mostly on the public domain, whereas the contemporary social media environment warrants greater attention to the private sphere. Furthermore, the interactive dynamics of the contemporary communication landscape are not addressed. As Fisher and Harms note, 'the earlier statements of communications freedoms [. . .] implied that freedom of information was a one way right from a higher to a lower plane' (1983: 9). Recent guidelines, for example the EU Human Rights Guidelines on Freedom of Expression Online and Offline, argue that online practices are to be considered on par with offline dynamics: 'All human rights that exist offline must also be protected online' (European Council, 2014). However, as becomes clear from initiatives like WiFi4EU to provide Wi-Fi in public spaces across Europe (EU, 2016), the digital agenda of the EU remains principally concerned with stimulating citizen access to the 'digital single market' (European Commission, 2017). The scarce scholarly attention to rights in ICT-saturated environments is currently dominated by 'digital rights' advocates and scholarship that foreground the provision, participation and protection of children in particular, but pay little attention to social justice, interactivity or cultural specificity (Livingstone and Bulger, 2014).

Furthermore, communication rights are part of a broader set of social and cultural rights that have remained underdeveloped for political reasons. Some feared that recognising the communication rights of groups would lead to greater acknowledgement of the perspectives of minority groups. In the context of Europe, the institutional recognition of diversity goes against the grain of the dominant focus on integration and, increasingly, the assimilation of newcomers. Rather than merely protecting vulnerable groups, this would mean that communities originally hailing from outside Europe could potentially become empowered in gaining voice, agency and subjectivity by gaining a seat at the table of public deliberation. More broadly, among the private sector in the West there was general suspicion of 'new world information and communication orders', while actors in the Global South criticised that these would only reify existing hierarchical flows of information, technology and values (Hamelink, 2004: 144-145).
During fieldwork I observed on the ground the ambiguous relation that government officers and camp staff have towards Internet and social media use among refugees: some refugee camps now provide free Wi-Fi, while detention centres in particular, which house people whose asylum claims have been rejected, severely limit and surveil inhabitants' Internet use. Although not institutionalised, communication rights remain an important critical framework. In the next section, the continued relevance and urgency of communication rights for forced migrants is elaborated. Drawing on the digital practices of forced migrants, who present themselves against the grain of human rights, societal perceptions and technology's intended usage, I turn to postcolonial critique and aim to provide 'alternative enunciations of human rights' (Sen, 2004: 324) by working towards a new interpretation of communication rights from the margins.

Rethinking communication rights from the margins

First I want to make clear what I mean by the margins: young refugees operate from the margins of European society and its human rights agenda, against dominant views of adults as well as against dominant conceptions of technology's intended users. First, forced migrants operate from Europe's societal margins, as non-voluntary migrants are typically 'othered', mistrusted and feared. In Europe, for the last 30 years, and particularly during the recent so-called 'refugee crisis' (2015-2016), governments and mainstream media have been mostly concerned with 'freedom from refugees' (Van Dijk, 1988: 184-185) rather than with the freedoms of refugees, as is evident from mainstream media portrayals across Europe (Zhang and Hellmueller, 2017). Second, the human rights of migrants are defined by how they are 'categorised' in governmental sorting processes, and forced migrants in particular 'have been low -often invisible -on the international human rights agenda' (Grant, 2005). In the collision of the language of rights and the language of securitisation, the latter is gaining the upper hand, signalling a shift from cosmopolitan hospitality towards increased securitisation (Chouliaraki and Georgiou, 2017). Europe has shown it is increasingly concerned with 'protection from refugees' rather than with the rights and 'protection of refugees' (Thomas, 2011: 71). Third, young people globally are dominantly perceived as hedonistic consumers of ICTs; their cultural production is often dismissed as apolitical (e.g., vain selfie-takers and time-wasting vloggers) and their user practices as uncritical and unsafe (e.g., sexting, piracy) (Vickery, 2017). Finally, forced migrants also operate from technological margins, as is evidenced by commonly used frames. Across Europe, there is an upsurge of 'digital islamophobia' (Horsti, 2017). Politicians, news media and extremist social-media users dismissed Syrian forced migrants carrying smartphones or taking selfies upon arrival on Greek shores as bogus asylum seekers. Dominant European expectations of asylum seekers as bedraggled and somehow unable to own or handle advanced technologies demonstrate dichotomous understandings: bodies that are naturalised into technology usage versus non-European, non-white, non-middle-class bodies that remain alienated from it (Leurs, 2016).
Although there is growing attention from the private sector, which aims to tap into the lucrative new market of the 'migration industry of connectivity (MIC)' (Gordano Peile, 2014) -with companies such as Lebara offering mobile telephony and money transfer services directly targeting migrants in Europe -overall migrants, and particularly forced migrants, are not the intended users of ICTs.

Addressing communication rights from the margins requires a reconsideration of the conceptual toolkit itself. The jargon around rights and communication has historically revolved around two principal perspectives: whereas the 'right to communicate' (R2C) signals a top-down, directed approach shaped by institutionalised state and government directives, including UNESCO's, the term 'communication rights' was championed by social movements, activists and civil society; being plural, it is seen as more receptive and open to re-negotiation from below (Hamelink, 2008; Thomas, 2011). The following statement is illustrative of the R2C agenda: 'the right to communicate unequivocally implies that marginalised people -women, refugees, displaced persons, migrant workers, people with disabilities, the poor, the dispossessed -must be empowered to express themselves in their own words' (Lee, 2004: 7). The statement projects marginalised people as necessarily dependent on outside interventions in order to communicate their own concerns. The perceived reliance on outsiders for minority communities to 'close the gap', address deficiencies, speak their voice and 'come up to speed' with the Global North has been rightly critiqued in the field of postcolonial studies. Paradoxical processes and experiences of voice, agency and visibility of subordinate groups have been subject to longstanding critical debate; I draw on these insights to ground and operationalise communication rights.

In an attempt to move away from the 'parochialism of much Western legal theory and human rights discourse' (Twining, 2009: 3), I conclude this section by offering a partial overview of human rights discourse produced from the Global South, to work towards an understanding of communication rights from the margins. In particular, I draw from human rights activists and theorists who share social justice perspectives, such as opposing the unjust power relations of imperialism, colonialism, racism and patriarchy. These insights can serve to complement, nuance and critique the hegemonic Western human rights canon. Most notably, by questioning dominant modes of binary thinking, the ethnocentric universality of much human rights discourse can be problematised. For example, Yash Ghai champions stronger acknowledgement of the Janus-faced character of most Bills of Rights rhetoric and practice: both universalism and relativism, and individual liberalism and collective identities, are to be balanced: 'It is clear that simple polarities, universalism v particularism, secular v religious, tradition v modernity, do not easily work, a large measure of ambiguity is necessary for the accommodations that must be made' (2009: 144). Notwithstanding the paucity of literature -particularly in comparison to Anglo-European knowledge production -a great variety of perspectives has been articulated. For the purpose of this argument, I take Dembour's overview of human rights 'schools of thought' (2010) to give examples of various non-Western human rights schools of thought.
First, publications in the 'natural school of thought' include, for example, the work on the future of Shari'a law by Abdullahi An-Na'im, a Northern-Sudanese Arab Muslim scholar. In this school of thought, rights are universal, given and apply to every human being, but An-Na'im recognises that they are translated and differentially applied in local contexts (2008). Second, 'deliberative scholars' approach human rights as the outcome of deliberation. Exemplary is Francis Deng's 'Talking it out: Stories in negotiating human relations'. In this book, he interprets the abstract values of universal human rights from the situated perspective of the Ngok Dinka of the Sudan. In untangling 'the Dinka way', he highlights the constant interaction between 'tradition' and 'modernisation' (2006). Third, 'protest scholars' like Upendra Baxi are concerned with fighting for the human rights of vulnerable groups and those suffering; this is a perpetual and ultimately universal struggle, as is evidenced by his discussion of the global women's movement, the impact of globalisation and post-modernist critiques of universal human rights (Baxi, 2006). Fourth, illustrative of the 'discourse school' is the work of Makau Mutua. In his recent 'Human Rights Standards: Hegemony, Law and Politics' (2016) he writes from the 'hitherto underutilised perspective of the Global South' to reconstruct how norms were established in the human rights canon. By highlighting which actors are insiders and which remain outsiders to the human rights discussion, he argues for participatory and inclusive 'norm-creating processes'. This is urgent because the current normative human rights 'regime' is grounded in non-negotiable 'abstract individual autonomy', thereby repeating cultural biases and affirming the power asymmetries of the international order (2016: 8-11).

The present article adds in particular to the discourse school, and is inspired by post-colonial scholarship that seeks to distinguish between various dimensions of human rights as shaped and experienced among various hierarchically situated communities. This body of work can be characterised by a shared commitment to moving beyond binaries, drawing connections, attending to relationalities and grounding local/global historical, geographical, geopolitical and contemporary power relations. Post-colonial scholarship tries to write against, redo, reconsider and situate human rights (Dhawan, 2014; Mutua, 2016). As Raka Shome and Radha Hegde argue, the postcolonial discourse approach enables us to 'denaturalise communication' flows in situated contexts by championing recognition of people's 'multiplicity of trajectories' (2002: 265). It is my aim to translate these insights into an understanding of communication rights as digitally performed from the margins. Pradip Thomas, one of the few scholars writing on the communication rights of displaced populations and refugees, recognises potential for social justice and cosmopolitanisation in the multiplicity of people's and communities' trajectories: 'communication rights affirms communication as communion, community, conviviality, the very basis of human dignity' (2011: 82). However, participatory communication divides are reflective of other persistent divides, and communication rights are inseparable from other rights: 'the right to live human lives, the right to enjoy life, to live life' (2011: 71). Thomas suggests the following communication rights are of particular relevance for internally displaced people and refugees:
- The right of displaced people to use their own vernacular, or alternatively a language of their choice, to conduct their internal and external affairs
- The right of displaced people to freedom of expression; this is a fundamental, foundational right
- The right of displaced people to basic education and literacy in a language of their choice
- The right of displaced people to practice a culture of their choice, and to have freedom of religion
- The right of displaced people to hold, impart and receive opinions through all media
- The right of refugee groups to counter the willful misrepresentation of displaced people in the national media through appropriate representations at press and media councils or other regulatory bodies
- The right of displaced people to have access to and control over their own media (Thomas, 2011: 82)

Building on these concerns outlined by Thomas, I operationalise communication rights from the margins by focusing on digital practices as performative practices. Drawing on performative speech act theorists including J.L. Austin, Judith Butler and others, critical human rights theorists have proposed the analytic lens of the 'performance of human rights' (Slyomovics, 2005: 10). Karen Zivi specifies that communication rights claims are distinct performances: they revolve around a set of ritualised communicative actions and practices we can engage in, through which we (seek to) shape our world. We make such appeals for a variety of reasons: 'we make rights claims to criticise practices we find objectionable, to shed light on injustice, to limit the power of government, and to demand accountability and intervention' (Zivi, 2012: 4). Isin and Ruppert account for the specificity of performing rights claims on platforms such as Facebook, Twitter and Instagram, where rights claims have 'to be brought into being repeatedly through acts (repertoires, declarations, proclamations) and conventions (rituals, customs, practices, traditions, laws, institutions, technologies and protocols)' (2015: 25). Key to my operationalisation of communication rights from the perspective of digital performativity is the attention to the active process of making rights claims. Social justice-oriented researchers like the feminist theorist Judith Butler emphasise that performativity is only moral and ethical when it accounts for the social conditions from which it emerges (2005). Thus, the micro-political agency of the digital practices of young refugees may be located through the prism of performing human rights claims. These practices are, however, only meaningful when fundamentally situated within hierarchical and intersectional power relations of gender, race and nationality, among others, and as inherently related to material conditions and other basic human rights, including access to shelter, food, well-being and education.

Methodological considerations

This article contributes to the emerging scholarly research focus of digital migration studies, which seeks to understand the inter-relationships between the proliferation of ICTs and increasing global migration flows by studying migration 'in, through and by means of the internet' (Leurs and Prabhakar, forthcoming 2017). I draw from initial fieldwork findings from the ongoing participatory action research project 'Young connected migrants. Comparing digital practices of young refugees and expatriates in the Netherlands' (2016-2019).
Focusing on the experiences of young refugees in particular, the analysis presented in this article revolves around the narratives and shared content of 16 key informants. This includes 7 young women and 9 young men between the ages of 15 and 34 years (on average 19 years old). Thirteen informants fled from various cities in Syria, including Damascus, Aleppo, Homs, Douma and Al-Sweida. This group included young people who arrived in the Netherlands as unaccompanied minors, those who travelled with their families and those who joined through family re-unification schemes. In addition, 3 young people participated who hailed from Yemen, Kurdistan/Iraq and Guinea. Participants had lived in the Netherlands anywhere between 6 months and 2 years, and the majority had obtained formal refugee status. Self-selecting, snowballing recruitment resulted in this wonderful group of interested young people. I am aware this group of participants is not representative of all young refugees in the Netherlands, let alone Europe, and I am aware digital diasporas are extremely heterogeneous too. I do not intend to make universalising claims; my aim is to take seriously personal accounts and distinctly situated personal experiences in order to better understand how young informants position themselves vis-à-vis dominant discourses of forced migration. I offer here an interim snapshot from my ongoing fieldwork.

This group of informants participated in in-depth interviews, which took place at locations of the informants' own choosing -in their homes, at schools and in cafés -and lasted between 30 minutes and 2 hours. These interviews were transcribed verbatim. Pseudonyms were chosen by the informants themselves, and participants consented to the interviews. In addition, parents or guardians consented for those under 18. Based on the preference of the informants, interviews were held one-on-one or in pairs, and were conducted mostly in English and Dutch. To avoid reminding participants of their potentially traumatic asylum interviews, no formal translators were used. When informants could not express themselves in Dutch or English, translation into or from Arabic was made possible through informal peer translation by friends or classmates, or through online services like Google Translate. Interviews were followed up by a digital ethnography, which involved participant observation and follow-up conversations through platforms including WhatsApp and Facebook Messenger. Furthermore, my analysis is indirectly informed by teaching a 3-month critical media literacy class to a group of 20 young refugees as part of their 2-year preparation to enroll in regular secondary education, and by my role over the last year as a sport, music and dance facilitator at a shelter for unaccompanied minor refugees.

It was my intention to develop a 'non-digital-media-centric' (Pink et al., 2016) fieldwork approach, to achieve a relational and situated understanding of everyday online and offline experiences and practices. Aiming to accommodate adaptability, the interviews were organised to allow informants to direct the course of the conversations. At the beginning of the interviews, participants were invited to draw a paper-and-pencil Internet map: a spider diagram mapping out the various platforms, applications and websites they considered important and frequently used. The conversation was then structured according to the applications included in the Internet maps.
All participants owned or had shared access to smartphones (handheld devices that allow users to initiate/receive voice calls and send/receive messages, but also to take photos and videos and use social media apps) (Gillespie et al., 2016). Second, the informants were invited to co-research their own smartphones as personal pocket archives. This approach was developed in collaboration with the Amsterdam Museum Imagine Identity and Culture (Boussaid and Boom, 2016), where we initiated the focus on the smartphone as a personal pocket archive by organising a meet-and-eat evening. During this evening we organised a dinner at the museum with young refugees; as a participatory focus group, they were invited to show and discuss their social media presence, which was projected on a large wall in the museum. This also allowed the young people to share their common experiences with one another rather than having to discuss them with a white, highly educated, European male researcher like myself. In practice, participants were invited to select and annotate important photos, videos or audio files from before they fled, during their travels and from their period of living in the Netherlands. The selection process was made productive to elicit narratives of identity, affectivity, rights and literacies. This way, informants have a greater say over their representation in comparison with traditional academic output. Research participants also became collaborators, for example as film directors and research interns in making short individual video portraits. In these video portraits, young connected migrants reflect upon and curate their own pocket archives: photos, videos, music playlists and app preferences.

My approach differs from previous digital storytelling initiatives set up to empower young refugees (De Leeuw and Rydin, 2007; López-Bech and Zúñiga, 2017; Sawhney, 2009), in the sense that my ongoing study is oriented towards acknowledging the already existing personal archives of these young people. Previous projects typically mobilise external approaches based on Photovoice and other participatory media production philosophies to instruct vulnerable groups in order to generate specific narratives of agency, awareness and resilience. I took another ethical stance, taking seriously, as an important site of alternative knowledge production, digital self-representations that were not created for research purposes. This decision originates from my wider moral commitment, which is inspired by feminist ethics-of-care ideals. In my work I am reflexive about power hierarchies between researchers and informants, I strive towards non-exploitative research relations, and I take seriously accountability and responsibility for the consequences of research. Informants play vital roles as members of the project team. Abdullah, a 22-year-old self-proclaimed 'computer geek' from Yemen, conducted an internship with this project to support activities at the museum, while Karim collaborates in making short video portraits with participants. These professional experiences might improve their chances of securing new educational or employment opportunities. This is particularly urgent for vulnerable groups, including young refugees who live precarious lives (Leurs and Prabhakar, forthcoming 2017). The empirical section below demonstrates the urgency of taking seriously existing archives of user-generated content in digital storytelling activism and scholarship.
Claiming communication rights

In this final empirical section, a selection of young refugees' digital practices is analysed as a performance of communication rights. Shifting from the conceptual reflection on law-in-books to law-in-action, I offer empirically grounded, bottom-up experiences and perceptions of human rights as narrated by young refugees on social media and smartphones. These insights are used to speak back to existing legal underpinnings related to communication rights and, vice versa, communication rights are taken as a scaffolding to map perceived communication (and other) rights deficits.

Right to self-determination

Young refugees often live precarious lives and long for a sense of normality. Frustrations about not being able to control one's life course, having to make do with harsh external circumstances, lengthy procedures and seemingly arbitrary decisions are another common thread in the personal pocket archives of the informants. Living at the whim of others can mean that one day everything changes, from unexpectedly having to move from one camp to another to getting good news. For example, in early January 2017 Abdullah celebrated the positive decision about his asylum case by posting a statement in Arabic, translated as 'Thank God after long patience. Today, I came to get to live and wish to make things easier for everyone', and in English: 'Thanks God today I got a positive decision by the IND'. He felt that he had finally gained the autonomy needed to build a new life in the Netherlands. Article 1 of the International Covenant on Economic, Social and Cultural Rights (1966) states that 'all peoples have the right of self-determination. By virtue of that right they freely determine their political status and freely pursue their economic, social and cultural development'. Karim powerfully narrates his experiences of living through war, violence and destruction prior to fleeing. His statement can be read as a powerful claim to the right of self-determination:

I got so frustrated. I ended up sitting in a dark room. Alone. Staring at the walls. Doing literally nothing. Trying to get out of my loneliness by opening my Facebook on my phone. To you know see the news and these things. But it wasn't doing me any better. I was making it even worse. You may ask me why or how. Well. Every time I opened Facebook I saw people outside Syria. Not only Syrians, no, other people around my age. Celebrating their youth. Studying. Travelling. Having fun. Doing what normal people do around that age. Of course this world has seen many wars. But in our digital age we don't only experience the war in our country. But we also experience, from a distance, the peace in other countries.

This is a quote taken from a 'TEDx Youth' talk he gave in late January 2017. The talk was filmed with a smartphone and broadcast through Facebook Live, and the stream is stored on his personal profile page. During power cuts, Karim would find himself alone in a room lacking light. Through his smartphone he maintained a transnational 'connected presence' (Diminescu, 2008: 572) with loved ones and friends. It was through seeing photos and videos of his friends that he was continuously confronted with how everyday life had so fundamentally changed for the worse. Karim's engagement with these other possible lives could be read as a longing for cosmopolitanisation. However, through consuming this content and internalising this unattainable difference, he was reminded of his own state of ontological insecurity.
Miyase Christensen and André Jansson have theorised how the mediatisation of our everyday lives, in particular social changes in the means of digital expressivity and transnational connectivity, may foster 'cosmopolitan trajectories'. However, they also add that cosmopolitanisation remains a site of 'symbolic power' (2015: 30-31). Karim's experience signals the vital urgency of situating everyday lived cosmopolitanism in uneven grids of symbolic power. Digital identities offer imaginaries of other possible lives, but they may also serve as harsh reminders of one's precarity, as if life has been put on hold. In fact, sad reminders of other possible lives he could be living strengthened Karim's conviction: in fleeing to Europe, he made the difficult decision to risk his life in an effort to regain control over it.

Right to freedom of expression

Zeinah's question 'why can't I say what I want', posted on her personal Facebook profile page, was included as an epigraph above. During our meet-and-eat focus group discussion, it became clear that censorship and freedom of expression were fundamental concerns for the majority of the participants. For example, several participants shared experiences of photos and videos documenting atrocities in Syria being removed from Facebook, Instagram and other platforms. Platform owners offered standard take-down notices stating that the posted content violated their terms of service. In response, various strategies are used to increase the chances that content can circulate, including setting up multiple personal profile pages, maintaining social media accounts for local war journalists from afar, sharing information about using virtual private network (VPN) services to anonymise postings, or using encrypted personal messaging services like WhatsApp and Wickr to circulate content beyond social media platforms. As Zeinah reflects: 'I use Facebook different from Dutch society. I had two accounts. One political one. And another one.' Although she currently lives in the Netherlands like her fellow participants, she emphasises that transnational engagement with the developments in Syria shapes her everyday life: 'Everything is politics. We are so involved in the war in Syria -everything is -there is no distinction between politics and our everyday life'. Managing multiple accounts is common for many teenagers who negotiate digital identity performances aimed at distinct audiences. Others entirely shy away from political engagement for fear of repercussions. For example, Obbay, a 22-year-old young man from Homs who managed to continue studying piano at a conservatory upon arriving in the Netherlands, focuses on social media 'to promote what I'm doing, who I am', rather than politics:

Actually it is very dangerous to talk about the political things in the social media, because you know we have many sides fighting now. It's like one coin with two faces. If you use it for this face, the other face will be mad of you. Because of that I don't use social media for political things, I only use it for what I'm doing, what is my dream for example.

In sharp contrast, Farhan uses social media to publicise his engagement with the international community of fellow Kurdish people seeking to establish greater autonomy for Iraqi Kurdistan. Farhan is a 16-year-old young man. He was born in the Netherlands to parents who fled, but he grew up in Kurdistan before returning to the Netherlands nearly 2 years ago.
Although he knows it may have repercussions, he feels it is important to engage with this cause online, where, he argues, 'we can say we are Kurds, we are trying to become a proper country, to make our own decisions'. He posts photos on Facebook and Instagram about the cause, and takes selfies with the Kurdish flag: 'Look I have 30 friends in the Netherlands, and maybe 5 in Germany, and 5 in Switzerland, so they see these photos. So they know Iraqi Kurdistan exists'. For young refugee men and women, posting on social media has become a politicised performative practice. Strategies like managing multiple accounts, taking an explicit stance or self-censoring appeal to the right to freedom of expression, as laid down in Article 19 of the Universal Declaration of Human Rights: 'Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers' (UDHR, 1948). The explicit recognition of the right to use 'any media' speaks to the various platforms young refugees use to document and communicate human rights violations. The explicit emphasis that these rights apply 'regardless of frontiers' holds particular relevance for young refugees living in the diaspora.

Right to information

The right to information, also laid down in Article 19 of the UDHR as the right to 'seek, receive and impart information', holds particular urgency in the lives of young refugees (Figure 1). Jo is a 34-year-old self-proclaimed 'hacktivist'; 'information must be free' is his motto. When I asked Jo for permission to audio record our interview, he brought out his own audio recorder to record the conversation for his own records. That was a first-time experience for me, revealing both feelings of distrust towards outsiders and a desire to ensure outsiders can be held accountable. We see some of the material objects dear to him: his portable video camera, an audio recorder, a small spy-cam, a piece of hardware, one of his laptops, old postcards of Old Damascus, cigarettes, coffee cups brought from Syria and coffee bought in the Netherlands but originally from Turkey. As the revolution against President Assad turned into violence, and after being jailed for several months, he fled. Using a marine weather app for sailing to check whether conditions would be manageable, he attempted to travel from Turkey to Greece. During the crossing he lost one of his phones and one of his laptops. Jo might be an atypical example, but several points are illustrative of patterns discernible among the Syrian refugee community in Europe. Important is the page Jo insisted on opening before the photograph was taken: the Wikimedia entry on Cunningham's Law: 'The best way to get the right answer on the Internet is not to ask a question, it's to post the wrong answer' (Wikimedia, 2017). Jo explains that because 'Syrians don't trust anyone anymore -especially TV' he goes by the 'old trick from Wikipedia godfather, to collect info just throw a lie online and start collect the right answers'. This strategy allows him to weigh various opinions and sides.
This tactic is illustrative of negotiating 'information precarity' (Wall et al., 2017): the major media and communication obstacle refugees struggle with is establishing what information can be trusted and weighing the rumours that spread through digital feedback networks (Dekker et al., forthcoming 2017). This pertains to finding smugglers and navigating routes, as well as to locating information about official national asylum procedures, local government contacts and education opportunities, but also about local Dutch everyday customs, habits, expectations and routines.

Right to family life

All informants used social media to maintain forms of family life across geographical distances. Figure 2 shows a photo shared from the personal archive of Patrisia, a 15-year-old girl from Aleppo who is into badminton and piano. She chose this manipulated photo depicting herself and her 1.5-month-old baby sister. It was one of the pictures dearest to her heart. For her it signifies the period of living separated from her mother for 18 months: her mother fled by herself to the Netherlands, and Patrisia later joined her through family reunification. In the Netherlands, a new baby sister was welcomed into the family, and Patrisia used Snapchat to share her love for her sister with her grandparents and best friends still living in Syria. Such transnational Snapchat conversations are illustrative of digitally claiming the right to family life, as laid down, for example, in Article 8.1 of the UN Convention on the Rights of the Child: 'Parties undertake to respect the right of the child to preserve his or her identity, including nationality, name and family relations as recognised by law without unlawful interference' (1989). The appeal to transnational family also reflects her cosmopolitan 'mode of attachment', as part of more 'multiple, uneven and non-exclusive affiliations' (Clifford, 1998: 180), like her new classmates in the Netherlands and local Dutch neighbours. Transnational attachments are increasingly politicised, particularly because they challenge 'conventional notions of locality as well as of belonging' (Clifford, 1998: 180). At the European level of policy and discourse, such transnational family practices have been called upon to delineate the boundaries of proper Europeanness (Kringelbach, 2015), particularly because they question dominant understandings of the European 'normal single nation family' (Beck and Beck-Gernsheim, 2014: 2). The fear of the satellite dish in the 1990s and the policing of Internet cafés in the early 2000s in the Netherlands have been supplanted by a pan-European fear that transnational Skype video chat causes isolation, fragmentation and possibly radicalisation (Parks, 2012).

Right to cultural identity

Struggles over cultural identity and recognition are a final theme emerging from young refugees' reflections on their personal pocket archives. There are multiple ways informants emphasise their diasporic attachments and love for their home country, city, town or region of origin: for example, through nicknames such as 'free Kurdistan', by creating publics on Twitter and Instagram around hashtags such as #I_love_Syria, and by sharing memories (for example, those algorithmically generated and suggested by Facebook after 12 months of posting photos or videos with tagged Facebook friends). Through such practices, young refugees can be said to actively claim their right to cultural identification.
These practices resonate, for example, with UDHR Article 27.1: 'Everyone has the right freely to participate in the cultural life of the community', and with Article 5 of the UNESCO Universal Declaration on Cultural Diversity: 'all persons have the right to participate in the cultural life of their choice and conduct their own cultural practices' (2001). Most strikingly, Moonif, a 23-year-old young man from Lattakia, Syria, whose main hobby is scouting, highlighted asylum procedures that pose an increasing risk to one's historical digital identity archives: 'The IND [Dutch Immigration and Naturalisation Service, KL] ask your name on Facebook during the interview. Like they look at photos you post, like a packet of cigarettes they know where it is from. People delete or change their profiles'. In their fear of digital traces being used by migration officials to reject asylum claims, young refugees are self-censoring their historical social media presence. This self-censorship is a harsh violation of human rights.

Additionally, informants commonly express great dissatisfaction with the way in which refugees are portrayed in mainstream European media outlets. This was also touched upon by Moonif. He spoke about the image he shared on Instagram to critique dominant ahistorical framings of Syria as a place with 'no internet, no phones, no flights, no water' and so on, framings that tend to silence the fact that before the war people were living 21st-century lives in Syria (Figure 3). Wael, a 22-year-old young man from Douma, Syria, labels himself a 'human rights activist' and further elaborates on this theme. He states his 'passion is to integrate the Syrian society in the Dutch society' and he is eager to counter dominant shallow and homogenising news frames using social media:

To show who we are and why we are here. We are not just like any people. We are educated, we had a civilization. I want to show this to the Dutch people. I don't know why Dutch people don't ask themselves that why Syrian people weren't here in Holland before the war? We didn't need Europe. We didn't want to leave our country.

Wael adds he is eager to conduct his own cultural practices and to share news about the ongoing violence and suffering in Syria, especially to inform his fellow Dutch. However, he does not feel his messages are heard: 'when I share anything about the war like some horrible pictures from Syria, the Dutch people they don't like, or they don't see it. But when I put something else from the normal life they like it, see it and share it'. He feels his participation is tolerated and only really acknowledged when he presents himself in a certain expected way. Rather than sensing solidarity, fellow Dutch contacts seem to engage only with a narrow, 'cosmetic cosmopolitanism' (Nakamura, 2002: 14) that is de-politicised and works to conceal structural injustices.

Conclusions

Young refugees are more often than not connected migrants (Diminescu, 2008). Although they are extremely vulnerable, as they are often primarily concerned with gaining access to basic rights such as food, shelter, physical and mental security, education and work, the smartphone use of young refugees shows that communication rights are increasingly fundamental. Communication rights do not only offer the means to articulate other human rights; they are interwoven with those other basic rights (Hamelink, 2004; Witteborn, 2011). This article draws on the experiences of 16 young refugees in the distinctly situated context of the Netherlands.
Combining law-in-books and law-in-action, this intervention seeks to offer an affirmative critique of communication rights from the margins through theoretical, methodological and empirical scrutiny. Conceptually, my focus was not on the top-down infrastructure of human rights frameworks, but rather on bottom-up communication rights claims from the margins. In this way, communication rights can be deployed as a scaffolding to map perceived communication (and other) rights deficits, but also as a lens to show the micro-political potential for agency in everyday digital storytelling practices. Methodologically, in devising a participatory action research setup, the smartphone was considered as a personal pocket archive. Rather than imposing, as an outsider, an external digital storytelling approach to empower a marginalised group, this study seeks to take seriously young refugees' own digital archives as important sites of alternative knowledge production. Focusing on the digital performativity of rights claims, the empirical section details how young refugees constitute themselves as political subjects of communication rights online. The main claims for communication rights that emerged from the analysis revolve around the right to self-determination, the right to self-expression, the right to information, the right to family life and the right to cultural identity. Various social changes young refugees reflect upon result from mediatisation. First, this is discernible in the paradoxical implications of 'cosmopolitanising trajectories' (Christensen and Jansson, 2015), which may exacerbate both feelings of loneliness and frustrations about non-recognition. Second, young refugees strategically navigate between various platforms, mobilising medium-specific affordances for distinct aims and to reach particular audiences (Madianou, 2014). In making such claims to communicate, forced migrants go against the grain of human rights, as they are hierarchically positioned as subalterns along the lines of nationality, geography, religion, race, ethnicity, gender, sexuality, age, generation and technology.
Hydroalcoholic Extract of Crambe on Sitophilus zeamais Insects and Maize Seed Quality

The objective of this work is to evaluate the insecticidal effect and attractiveness of different concentrations of hydroalcoholic extract of crambe grains on Sitophilus zeamais, and their effect on the physiological quality of maize seeds. The experiments were conducted at the Laboratory of Entomology and Seeds of the Assis Gurgacz University Center, in Cascavel, Paraná, Brazil. Attractiveness and insecticidal effect were evaluated in a completely randomized design, with 4 treatments (0, 5, 15 and 25% extract concentration) and 10 or 5 replications, respectively, totaling 40 experimental plots for the insect attractiveness test and 20 experimental plots for the insecticidal effect test. For the experiment on the physiological quality of maize seeds exposed to the extracts, a completely randomized design was set up in a 4 × 4 factorial scheme, with factor 1 being seed storage time (0, 30, 60 and 90 days) and factor 2 the extract concentration (0, 5, 15 and 25%), with 4 replicates, totaling 64 plots. Data were subjected to ANOVA, and means were fit to regression or compared by Tukey's test at 5% probability, using the statistical program ASSISTAT®. The results showed that the hydroalcoholic extract at 25% concentration had the highest insecticidal effect, and that the extract at 15% concentration yielded a higher percentage of germination, more normal seedlings and greater seedling mass than the control. At 25% concentration, the extract did not negatively influence any of the parameters analyzed. Storage times above 60 days stimulated the germination, mass and length of maize seedlings.

Introduction

Maize is produced on many continents and can be used in several ways, from human and animal nourishment to the high-technology industry, in the production of films, biodegradable packaging and other products. According to Paes (2006), about 70% of the world's maize production is destined for animal feed. In developed countries this figure may reach 85%. However, only 15% of all world production is destined for human consumption, directly or indirectly. Travaglia (2011) states that grains often need to be stored for more than a year due to off-seasons and drought periods. The purpose of storage is to preserve the characteristics and quality of the grains over time in order to meet market demands. However, if storage conditions are not adequate, the grains become susceptible to deterioration and exposed to possible contamination.

Seed or grain quality may be affected by several factors, including storage pests such as Sitophilus zeamais, which may be responsible for the physical deterioration of the stored batch (Lorini, Kryzanowski, França-Neto, & Henning, 2010). Improper storage conditions lead to severe attacks of storage pests, which might make these grains unfit for consumption (Michelraj & Sharma, 2006). Currently, storage pests are commonly controlled by applying chemical products; however, Barbosa (2004) states that the residues of these chemical insecticides can be found not only in the grains, but also in the processed products, in different concentrations. Thus, according to Viebrantz, Radunz, and Dionello (2016), due to the need to improve food quality and safety, the use of chemical methods has been replaced by alternative methods.
Crambe (Crambe abyssinica) is an oilseed that belongs to the Brassicaceae family. It presents rapid growth and a short cycle. It is highly tolerant to pests, which only attack the crop during the seedling phase. Some pests that have been reported attacking crambe crops are the cabbage aphid (Brevicoryne brassicae), the cucurbit beetle (Diabrotica speciosa) and Agrotis sp. (Bezerra et al., 2011). The brassica family has been studied for its production of secondary metabolites, such as glucosinolates (Merah, 2015). Pitol, Broch, and Roscoe (2010) explain that crambe presents a low incidence of pests due to the presence of glucosinolates. Pal Vig, Rampal, Thind, and Arora (2009) also mention studies pointing out that these compounds have several biological activities, such as protection against pathogens and weeds. Tsao, Peterson, and Coats (2002) suggest that remnants of plants that contain glucosinolates, when incorporated into the soil, can control soil pests. According to Pal Vig et al. (2009), glucosinolates accelerate insect respiration, which consequently increases the need for ATP while blocking its production. This leads to the exhaustion of energy sources and culminates in the insect's death.

The objective of this research was to evaluate the insecticidal effect and attractiveness of different concentrations of hydroalcoholic extract of crambe on Sitophilus zeamais, as well as its effect on the physiological quality of stored maize seeds throughout the storage period.

Material and Methods

This experiment was conducted at the Laboratory of Entomology of the Assis Gurgacz University Center in Cascavel, Paraná, Brazil, at a temperature of 25±2 °C and relative humidity of 60±5%. The insects used in the tests were obtained from the rearing colony kept in the laboratory, with maize kernels placed in containers measuring 8 cm in diameter × 6 cm in height. Grains of the maize hybrid AM 4003 were obtained from Melhoramento Agropastoril, a grain-producing company. Crambe grains were obtained from the experimental fields at the School Farm of the Assis Gurgacz Foundation University Center, Cascavel, Paraná, Brazil, in 2014, and stored away from light and heat.

The crambe grains were ground in an IKA A11 Basic 2500 1/min IP43 mill to obtain a powder, which was mixed with 100 mL of a hydroalcoholic solution composed of 50% water and 50% alcohol, homogenized in a blender at the previously determined concentrations, and kept for 48 h in a beaker covered with film and foil for protection from light.

Evaluation of Repellency/Attractiveness on Sitophilus zeamais Insects

The experiment was set up in a completely randomized design, consisting of 4 treatments (concentrations 0%, 5%, 15% and 25%) and 10 replications of each, totaling 40 experimental plots. For the evaluation of repellency on Sitophilus zeamais, two MDF feeding arenas measuring 45 × 45 × 3 cm were used, each having a central hole with a diameter of 8 cm and four lateral holes with a diameter of 6 cm. The holes were interconnected symmetrically by four 10-cm paths connecting the central and lateral holes, all with a depth of 2 cm, coated with contact paper and covered with perforated paper to allow aeration.
Ten grams of maize kernels were placed in each container. Container #1 was the control (maize kernels only). Container #2 had maize kernels mixed with 0.5 mL of hydroalcoholic extract of crambe at 5% concentration, Container #3 at 15% concentration and Container #4 at 25% concentration. Ten Sitophilus zeamais insects were released into the central container of each arena. After 1 hour and after 48 hours, the number of insects in each container was counted in order to assess the attractiveness at the first moment of exposure and two days later. Data were subjected to analysis of variance (ANOVA) and means were fit to regression in the statistical program ASSISTAT® version 7.7 (Silva & Azevedo, 2016).

Insecticide Evaluation

The assay was performed in Petri dishes, in a completely randomized design consisting of 4 treatments: Treatment #1, control (distilled water); Treatment #2, hydroalcoholic extract of crambe at 5% concentration; Treatment #3, hydroalcoholic extract of crambe at 15% concentration; and Treatment #4, hydroalcoholic extract of crambe at 25% concentration. There were 5 replications of each, totaling 20 experimental plots. Each Petri dish was lined with two sheets of germination test paper. One mL of distilled water or of the hydroalcoholic extract of crambe at the determined concentration was added to each dish with a syringe. The dishes were then infested with 10 non-sexed adult Sitophilus zeamais insects and sealed with film paper with microapertures to allow air to enter. The 20 Petri dishes were placed in a BOD chamber at a temperature of 25±2 °C, a photoperiod of 14 h of light and relative humidity of 60±5%. The evaluations were carried out 12 h and 24 h after the beginning of the experiment by counting the number of dead insects. Data were subjected to ANOVA and means were fit to regression in the statistical program ASSISTAT® version 7.7 (Silva & Azevedo, 2016).

Germination Test

The germination test on maize kernels was conducted at the Seeds Laboratory of the Assis Gurgacz Foundation University Center, in Cascavel, Paraná, Brazil. The experiment was set up in a completely randomized design consisting of a 4 × 4 factorial scheme. Factor 1 was seed storage time (0, 30, 60 and 90 days), and factor 2 was the concentration of the hydroalcoholic extract of crambe (0, 5, 15 and 25%). There were 4 replications, totaling 64 experimental plots. Each replication included 50 seeds of the maize hybrid AM 4003. The parameters evaluated were percentage of germination, percentage of normal seedlings, and seedling mass and length at day 7 after sowing (Brasil, 2009). The data were subjected to analysis of variance and the means were compared by Tukey's test at 5% probability in the statistical program ASSISTAT® version 7.7 (Silva & Azevedo, 2016).

Results and Discussion

Table 1 shows the analysis of variance and regression of the data on the insecticidal action of the hydroalcoholic extract of crambe on Sitophilus insects after 12 and 24 h of exposure. The number of dead insects after 12 h of exposure fit a linear regression, whereas the number of dead insects after 24 h fit a cubic regression, both determined based on the R².
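The statistical workflow used here (transformation, ANOVA and regression fitting, as discussed below) can be reproduced with standard tooling. The sketch below is a minimal illustration under stated assumptions, not the authors' ASSISTAT® analysis: the mortality counts are hypothetical, and the radicand √(x + 0.5), a common entomological variant for counts that include zeros, stands in because the paper does not spell out the exact transformation.

```python
# Minimal sketch of the analysis pipeline (hypothetical data, not the authors' script).
# Assumes 4 extract concentrations, 5 replicate dishes each, and counts of dead
# insects out of 10 per dish after 12 h of exposure.
import numpy as np
from scipy import stats

concentrations = np.array([0.0, 5.0, 15.0, 25.0])
dead = {  # hypothetical dead-insect counts per replicate dish
    0.0:  [0, 0, 0, 0, 0],
    5.0:  [2, 3, 1, 2, 3],
    15.0: [3, 2, 4, 3, 3],
    25.0: [4, 3, 4, 5, 3],
}

# Square-root transformation to homogenise variances of count data
# (the radicand x + 0.5 is an assumption; the paper only states a sqrt transform).
transformed = [np.sqrt(np.asarray(v, dtype=float) + 0.5) for v in dead.values()]

# One-way ANOVA across the four concentrations on the transformed counts.
f_stat, p_value = stats.f_oneway(*transformed)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Linear regression of mean mortality (%) on concentration, analogous to the
# linear fit reported for the 12 h data; R^2 is what selects the model order.
mean_mortality = [np.mean(v) / 10 * 100 for v in dead.values()]
slope, intercept, r, p, se = stats.linregress(concentrations, mean_mortality)
print(f"mortality(%) = {intercept:.1f} + {slope:.2f}*conc, R^2 = {r**2:.3f}")
```

For the cubic fit reported at 24 h, the same mean mortalities could instead be passed to numpy.polyfit with degree 3, again comparing candidate models by R².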
Data transformation was necessary to obtain an adequate coefficient of variation, which is justified by Haddad and Vendramim (2000), who state that the square-root transformation (√) of data is usual in entomology, since it homogenizes the experimental variance, a statistical requirement for the validation of tests of significance and confidence intervals for the means of the treatments.

Table 1. Regression in the analysis of variance of the insecticidal action of different concentrations of hydroalcoholic extract of crambe on the insect S. zeamais, for 12 h and 24 h, with square-root (√) data transformation

Figure 1 shows the percentage of dead insects as the concentration of hydroalcoholic extract of crambe increased. After 12 hours of exposure, the extracts killed approximately 35% of the insects, whereas in the control all insects remained alive. These results indicate that alternative control of these insects is possible. Some studies have demonstrated the resistance of Sitophilus zeamais to chemical insecticides that have not yet been released for the control of storage pests in Brazil, such as indoxacarb (Haddi, Mendonça, Santos, Guedes, & Oliveira, 2015); thus, having an alternative method for controlling them is necessary. Nascimento, Diniz Filho, Mesquita, Oliveira, and Pereira (2008) found 96 to 100% mortality of Sitophilus insects exposed to Tagetes patula extract in vapor form. Restello, Menegatt, and Mossi (2009) studied the effect of the essential oil of T. patula on Sitophilus mortality and found 100% of dead insects. Silva, Melo, Pessoa, Almeida, and Gomes (2012) observed an increase in Sitophilus mortality as the rates of Momordica charantia (L.) extract increased, reaching 100% mortality when applying 10 mL of the extract.

Figure 2 shows the percentage of dead insects after 24 hours of exposure to different concentrations of hydroalcoholic extracts of crambe. The data did not fit the regression. In the control, 100% of the insects remained alive. In the treatment with the extract at 5% concentration, about 65% of the insects died. This number decreased to 45% when the concentration was 15% and increased to about 60% when the extract concentration was 25%.

According to Glaser (1996), the increase in Sitophilus mortality in the presence of crambe extract occurs because the glucosinolates found in brassicas may act as a control for pests such as nematodes, flies, maggots and mites.

Figure 2. Analysis of the insecticidal action on Sitophilus zeamais after 24 hours of exposure to different concentrations of hydroalcoholic extract of crambe under controlled conditions of temperature and photoperiod

Table 2 shows the insects' behavior in the feeding arena containing maize kernels with the 4 different concentrations of hydroalcoholic extract of crambe, at 1 h and 48 h of exposure. Immediately upon exposure, 44.2% of the insects were attracted to maize kernels with no addition of hydroalcoholic extract, which differed statistically from the other treatments. Concurrently, 23% of the insects were attracted to kernels with extract at 15% concentration, 19.2% to kernels with extract at 5% concentration and 15% to kernels with extract at 25% concentration, with no statistical difference among these treatments.
After 48 h of exposure, Treatment #1 remained the most attractive to insects, with 30% preference; however, it was statistically equal to Treatment #3, with 25.4% preference, and Treatment #4, with 28% preference. Treatment #2 attracted only 16.6% of the insects, which differed statistically from the control (T1). Cruz, Sousa, Medeiros, Silva, and Gomes (2012) studied the action of different essential oils in the control of Sitophilus zeamais insects and concluded that lavender oil had the best effect in repelling weevils from maize. Nonetheless, further research is required in order to establish the ideal concentration. Note. Means followed by the same letter in the columns are not significantly different (Tukey, P < 0.05).

Figure 3 shows that the percentage of insects attracted to maize kernels with hydroalcoholic extract of crambe (45% attracted to the control) after 1 h of exposure decreased with higher concentrations of the extract; only 17% of the insects were attracted to Treatment #4 (25% extract concentration). Fernandes and Favero (2014) studied the attractiveness/repellence of the essential oil of Schinus molle on Sitophilus zeamais at 1 h and 24 h of exposure and observed repellent action at the lethal concentrations of 5 and 10 ppm.

Almeida, Silva Junior, Silva, Lino, and Silva (2012) studied the control of Sitophilus zeamais with a hydroalcoholic extract made from pinecone and black pepper and reported that the infestation percentage of the insect pest on maize seeds decreased with higher concentrations of the extract, which was corroborated in this experiment.

Figure 3. Percentage of Sitophilus zeamais attracted to maize kernels treated with different concentrations of hydroalcoholic extract of crambe, after 1 h of exposure in a free-choice feeding arena

Concerning preference after 48 h of exposure (Figure 4), 30% of the insects were attracted to maize kernels in the control treatment. This percentage was lower in Treatments #2 and #3 and showed a tendency to increase in Treatment #4 (25% extract concentration; 28% of insects attracted). Guimarães et al. (2014) noticed that aqueous extracts from seeds of Capsicum baccatum showed repellent activity on maize weevils, whereas the alcoholic extract from the pulp of Capsicum baccatum showed 34.6% attractiveness on adult Sitophilus zeamais insects. This result is similar to those found in this experiment, which indicates that the substances released in alcohol might differ from those released in water. Table 3 shows the percentage of germination and normal seedlings, mass of 10 seedlings (g) and seedling length (cm) of maize kernels treated with different concentrations of hydroalcoholic extract of crambe (0, 5, 15 and 25%) under different storage times (0, 30, 60 and 90 days).

The coefficients of variation found for every parameter assessed point to the homogeneity of the data, since the percentage of germination, normal maize seedlings and mass of 10 seedlings (g) presented CVs below 10% (3.99, 4.69 and 6.12%, respectively), whereas seedling length had a 14.18% CV. According to Gomes (2002), coefficients of variation up to 10% suggest low heterogeneity, that is, the data are reliable; values from 10 to 20% represent medium homogeneity. Note. *Significant at 5% probability; ns: non-significant; CV (%) = coefficient of variation; MSD = minimum significant difference.
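A minimal sketch of the kind of Tukey comparison reported for the seed-quality data (Tables 3 to 5) is shown below, using statsmodels. The germination values are hypothetical placeholders; the actual analysis was run in ASSISTAT, so this only illustrates the test, not the study's results.

```python
# Hedged sketch of a Tukey HSD comparison of germination by extract
# concentration. The per-replicate values below are illustrative only.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

conc = np.repeat(["0%", "5%", "15%", "25%"], 4)   # 4 concentrations x 4 reps
germ = np.array([88, 90, 87, 89,   91, 92, 90, 93,  # illustrative values
                 94, 95, 93, 96,   89, 88, 90, 91])

result = pairwise_tukeyhsd(endog=germ, groups=conc, alpha=0.05)
print(result)  # pairwise mean differences with reject/accept at 5% probability
```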
Concerning the germination percentage, there was no interaction between the factors storage time and extract concentration, which shows the independence of the factors analyzed. Regarding storage time, at day 0 the germination rate (88%) was statistically inferior to those at the other storage times, all above 90% and statistically equal. In relation to extract concentration, at 15% germination was higher than at 0% and 25%, despite being statistically equal to the treatment with 5% extract concentration, which demonstrates that the extract stimulates maize seed germination. However, Brown and Morra (2005) reported that plants that contain glucosinolates might affect successive crops. Some reports state that the presence of brassicas reduces the establishment of seedlings of different crops. Brassica napus straw, for instance, decreases the emergence of wild oat (Avena sterilis) and black mustard (Brassica nigra) and inhibits the germination of grasses and broccoli.

As for seedling length, there was no interaction among the factors, and the extract concentration did not influence this parameter. This result is in accordance with those of Renosto, Vonz, Paiva, Marostica, and Viecelli (2014), who studied static extracts of crambe at 2.5 and 10% concentration and did not find a significant difference in the length of the aerial part of the maize plants.

Storage time significantly influenced length. The longer the kernels exposed to the extracts were stored, the greater the length of the seedlings. Thus, it is concluded that storage time positively influences growth.

Table 4 shows the relationship between storage time and different concentrations of the hydroalcoholic extract of crambe and its effect on the normality of the maize seedlings. Regardless of the storage time, the percentage of normal seedlings was statistically the same. However, if each storage time is analyzed separately, only at day 0 did the extract concentration not influence the percentage of normal seedlings. At 30 and 60 days, the percentage of normal seedlings was higher than in the control only at 15% concentration (94.5% and 91.5%, respectively). Menegusso and Simonetti (2015) studied aqueous extracts made from the root and aerial part of crambe plants and found that at 10% concentration the number of normal maize seedlings was higher than in the control; however, at higher concentrations (20 and 30%) the number of normal seedlings decreased.

After 90 days of storage, the percentage of normal seedlings was statistically equal to that of the control at every concentration, except at 25%, which was lower (Table 4). Note. Same lowercase letter in the same column and same uppercase letter in the same row are not significantly different (Tukey, P < 0.05).

Table 5 depicts the relationship between storage time and extract concentration as well as its effect on the mass of 10 seedlings seven days after sowing. At 60 days of storage, regardless of the concentration to which the maize kernels were exposed, the mass of 10 seedlings was higher than at any other storage time, except at 0% concentration and 90 days, which was statistically equal to the mass at 60 days of storage.
The extract concentration did not significantly influence the mass of 10 seedlings at 0 or 90 days of storage; however, it did at 30 and 60 days. The best result was achieved at 15% concentration and 30 days of storage. These results differ from what was found in experiments using crambe hay over maize, in which the researchers reported a reduction in the mass of the aerial part of the maize plants (Spiassi, Fortes, Pereira, Senem, & Tomazoni, 2011). Note. Same lowercase letter in the same column and same uppercase letter in the same row do not differ among themselves at 5% significance by Tukey's test.

In summary, besides making maize kernels less attractive to Sitophilus insects, the hydroalcoholic extract of crambe also improved the germination parameters of the stored maize seeds, which indicates the feasibility of developing a product based on hydroalcoholic extract of crambe.

Conclusions

Considering the proposed goals and the results obtained in this experiment, it is concluded that:

 The hydroalcoholic extract of crambe at 25% concentration applied on maize kernels results in an attractiveness rate (28%) statistically equal to that of the control treatment (30%), but in a higher mortality rate (60% vs. 0%).

 Concerning the parameters related to the quality of the maize seeds, the hydroalcoholic extract of crambe at 15% concentration yields a higher percentage of germination, normal seedlings, and mass of 10 seedlings than the control. At 25% concentration, the extract does not negatively influence any of the parameters analyzed.

 Storage time above 60 days stimulates germination, mass and length of maize seedlings.

Figure 1. Evaluation of the insecticidal action after 12 h of exposure of Sitophilus zeamais to different concentrations of hydroalcoholic extracts of crambe under controlled conditions of temperature and photoperiod

Figure 4. Percentage of Sitophilus zeamais attracted to maize kernels treated with different concentrations of hydroalcoholic extract of crambe, after 48 h of exposure in a free-choice feeding arena

Table 2. Percentage of Sitophilus zeamais insects attracted to maize with different concentrations of hydroalcoholic extract of crambe, in a free-choice feeding arena, assessed at 1 h and 48 h of exposure

Table 3. Percentage of germination and normal seedlings, mass of 10 seedlings (g) and seedling length (cm) of maize kernels treated with different concentrations of hydroalcoholic extract of crambe, under different storage times and controlled conditions. Columns: Treatment; Germination at 7 days (%); Normal seedlings (%); Mass of 10 seedlings (g); Seedling length (cm)

Table 4. Relationship between storage time and concentrations of the hydroalcoholic extract of crambe and its effect on the percentage of normal maize seedlings under controlled conditions

Table 5. Relationship between storage time and concentration of the hydroalcoholic extract of crambe and its effect on the mass of 10 maize seedlings under controlled conditions
Falciform Ligament Abscess after Omphalitis: Report of a Case

A falciform ligament abscess is a rare type of intra-abdominal abscess. A 2-yr-old male, who had had omphalitis two months previously, presented with a fever and right upper quadrant abdominal pain. The ultrasound and CT scan showed an abdominal wall abscess located anterior to the liver, which was refractory to conservative management with percutaneous drainage and antibiotics. On the third recurrence, surgical exploration was performed and revealed an abscess arising from the falciform ligament; the falciform ligament was excised. A follow-up ultrasound confirmed complete resolution of the abscess with no further recurrence.

INTRODUCTION

The falciform ligament extends from the umbilicus upward to the diaphragm, and laterally to form the hepatic coronary ligaments. It represents a potential space, and a few cases of falciform ligament abscess secondary to infectious diseases of the liver and gallbladder have been reported in adults. It is often misdiagnosed as a simple abdominal wall abscess due to the location of the abscess and treated non-operatively, which is usually unsuccessful. Here we report a case of falciform ligament abscess after omphalitis in a child.

CASE REPORT

A 2-yr-old male presented with a fever and right upper quadrant abdominal pain. The patient had undergone surgery for type III ileal atresia during the newborn period and had a history of omphalitis two months prior to presentation. On examination, the body temperature was 38.6°C and the umbilicus was normal. There was a firm, tender mass palpable in the right upper quadrant of the abdomen. The laboratory test results showed a leukocytosis of 18,290/μL with a shift to the left. The abdominal ultrasound and computed tomography (CT) revealed a 3.4 × 3.4 cm abscess located at the right paramedian abdominal wall extending to the anterior surface of the liver (Fig. 1). The patient was managed with percutaneous drainage and intravenous antibiotics. The microbiology examination of the abscess identified methicillin-resistant Staphylococcus aureus (MRSA). Shortly after drain removal, the fever and abscess emerged again, and two weeks of intravenous vancomycin and aspiration of the abscess were performed two more times. However, after normalization of the clinical parameters, a follow-up CT revealed a 3.3 × 2.4 cm residual abscess at the same location. At laparotomy, the abscess was found to originate from the falciform ligament; the abdominal wall was clear (Fig. 2). The falciform ligament was completely excised, and the post-operative course was uneventful. Pathological examination revealed fibrosis of the ligament with abscess formation. A follow-up ultrasound 2 weeks after the operation showed complete resolution of the abscess. It is now 4 months after the operation and he is well and healthy.

DISCUSSION

A soft tissue mass beneath the abdominal wall continuous with a thickened round ligament is a diagnostic feature of a falciform ligament abscess on ultrasound or CT scanning (1). However, because of its rarity and obscure location, a definite radiological diagnosis of a falciform ligament abscess is difficult. Infections can extend from the liver, gallbladder (2, 3) and umbilicus (4). An infection of a cystic lesion of the falciform ligament has also been reported as a cause of a falciform ligament abscess (5).
As shown in this case, it is important to suspect a falciform ligament abscess in a patient with a right upper quadrant abscess and a prior history of abdominal infections. Lipinski et al. (4) reported two cases of falciform ligament abscess secondary to omphalitis; contiguous spread of the infection via the round ligament was thought to be the etiology. In the present case, however, the round ligament had been divided during the previous operation for ileal atresia, because the supraumbilical transverse, round ligament-cutting incision was used to provide a wider operative field. The superficial veins of the abdominal wall form a network that radiates out from the umbilicus, and a few small veins named the paraumbilical veins connect the network to the portal vein, forming a portal-systemic venous anastomosis (6). This venous network might explain the mechanism by which the omphalitis extended into a falciform ligament abscess in the absence of a round ligament. Moreover, the paucity of the vascular network inside the ligamentous structure might have impaired the venous outflow from the ligament, and MRSA could easily colonize the falciform ligament to form an abscess. Although the round ligament was manipulated during the operation in the neonatal period, the long time interval between the ileal atresia operation and the falciform ligament abscess would preclude the possibility of the abscess being a post-operative complication.

MRSA has been reported to be the most frequent causative agent of omphalitis in children (7). The identification of MRSA, which is consistent with the microorganisms previously isolated from the omphalitis, also supports the speculation that the abscess originated from the omphalitis. Delivery at home, low birth weight, use of umbilical catheters, and septic delivery are known risk factors for omphalitis (8), but the cause of the omphalitis reported here is uncertain. As the omphalitis was cured before the symptoms of the falciform ligament abscess became apparent, and there had been no abdominal complaints before the onset of the omphalitis, we speculate that the omphalitis must have preceded the falciform ligament abscess.

Many readily accessible abscesses are treated successfully with percutaneous drainage and antibiotics. However, in this patient, drainage and antibiotics did not completely treat the abscess. This might also be explained by the paucity of the vascular network, which hindered exposure to the circulation and therefore to the antibiotics. Previous authors reported successful treatment of falciform ligament abscesses after excision of the ligament (4, 9, 10). Therefore, when a falciform ligament abscess is suspected, surgical excision rather than percutaneous drainage should be considered as the initial treatment.

We treated a patient with a falciform ligament abscess secondary to a prior omphalitis. The patient was successfully treated with falciform ligament excision. A strong index of suspicion is necessary for early diagnosis and treatment of similar cases.
Chiral plasma instability and inverse cascade from nonequilibrium left-handed neutrinos in core-collapse supernovae

We show that the backreaction of left-handed neutrinos out of equilibrium on the matter sector induces an electric current proportional to a magnetic field, even without a chiral imbalance for electrons, in core-collapse supernovae. We derive the transport coefficient of this effect based on the recently formulated chiral radiation transport theory for neutrinos. This chiral electric current generates a strong magnetic field via the so-called chiral plasma instability, which could provide a new mechanism for the strong and stable magnetic field of magnetars. We also numerically study the physical origin of the inverse cascade of the magnetic energy in the magnetohydrodynamics including this current. Our results indicate that incorporating the chiral effects of neutrinos would drastically modify the hydrodynamic evolutions of supernovae, which may also be relevant to the explosion dynamics.

I. INTRODUCTION AND SUMMARY

Following the theoretical suggestion by Lee and Yang [1], Wu and collaborators experimentally discovered that the weak interaction violates parity symmetry, one of the most fundamental symmetries in nature [2]. In this experiment, polarized ⁶⁰Co in an external magnetic field at low temperature decays into ⁶⁰Ni by emitting an electron (and an antineutrino) via the weak interaction. The observation that the electron is preferentially emitted in the direction opposite to the magnetic field shows that the weak interaction violates parity symmetry. Later, it was also confirmed that neutrinos are only left-handed within the Standard Model (SM) of particle physics [3].

On the other hand, this important feature of neutrinos is missed in the conventional radiation transport theory used in numerical simulations of core-collapse supernovae; see, e.g., Refs. [4-9]. Only recently, based on the idea of the relevance of the chirality of neutrinos to the hydrodynamic evolutions of supernovae [10], the radiation transport theory incorporating this effect was constructed systematically from the underlying SM [11]. Using this chiral radiation transport theory, it was also found that a magnetic field induces nonequilibrium corrections to the neutrino energy-momentum tensor and neutrino number current [12]; see also Eqs. (1) and (2) below.

In this paper, we show that the backreaction of these nonequilibrium left-handed neutrinos on the matter sector induces an electric current proportional to the magnetic field in core-collapse supernovae. We derive this transport coefficient based on our previous works [11, 12]. As this chiral electric current generates a strong magnetic field via the so-called chiral plasma instability (CPI) [13, 14], it could provide a new mechanism for the strong and stable magnetic field of magnetars. We also numerically study the physical reason why the subsequent magnetohydrodynamic (MHD) evolutions including this current exhibit the inverse cascade of the magnetic field. Although neutrinos fully out of equilibrium would provide potentially dominant contributions, backreaction from neutrinos near equilibrium could at least qualitatively, and perhaps even quantitatively, lead to non-negligible effects on the evolution of the matter sector, as will be shown here. Whether our new mechanism operates at a more quantitative level should be tested by numerical applications of the chiral radiation transport theory for neutrinos [11] in the future.
This direction would also be important for studying the impacts of the chiral effects of neutrinos on the explosion dynamics.

Throughout this paper, we use the natural units ℏ = c = k_B = 1. The electric charge e is absorbed into the definition of electromagnetic fields.

II. CHIRAL ELECTRIC CURRENT INDUCED BY NONEQUILIBRIUM NEUTRINOS

In Ref. [12], it was shown, based on the chiral transport theory for neutrinos [11], that there are additional contributions from nonequilibrium neutrinos to the neutrino current j^i_ν and neutrino momentum density T^{i0}_ν proportional to the magnetic field B [Eqs. (1) and (2)], where µ_ν is the neutrino chemical potential and v is the fluid velocity. Here and below, ∆O denotes the contribution of chiral effects to a physical quantity O. The coefficient κ can be computed analytically under certain approximations [Eq. (3)] [12], where M is the mass of nucleons, G_F is the Fermi constant, g_{V,A} are the nucleon vector/axial charges, µ is the chemical potential, n is the number density (with the subscripts "p" and "n" denoting protons and neutrons, respectively), and β is the inverse temperature.

Here, we note the relation ∆T^{i0}_ν = µ_ν ∆j^i_ν. This can be naturally understood as the neutrino momentum density (which is equal to the neutrino energy current ∆T^{0i}_ν) being given by the neutrino number current multiplied by the neutrino energy around the Fermi surface. From the momentum conservation law, the matter sector receives a momentum kick from the neutrinos, ∆T^{i0}_e = −∆T^{i0}_ν [Eq. (4)], where T^{i0}_e is the momentum density of electrons. Here, we consistently ignored the nucleon recoil in the neutrino scattering, which we also used to derive Eqs. (1) and (2) when taking the leading-order contribution in the expansion in terms of |p|/M (with p the momentum transfer) [12]. We also assume a relation between the current and momentum density for electrons similar to the one for neutrinos above, ∆T^{i0}_e = µ_e ∆j^i_e, where µ_e is the electron chemical potential and j^i_e is the electron number current. The electric current of electrons is given by J^i_e = −j^i_e. Combining these relations, we obtain an electron current proportional to the magnetic field, ∆J^i_e = ξ_B B^i [Eq. (5)].

Note that this is different from the so-called chiral magnetic effect (CME) [15-17] in that it occurs even without a chirality imbalance of the electrons themselves. Physically, this can be understood similarly to the Wu experiment showing the correlation between the direction of the emitted antineutrinos/electrons and the direction of the magnetic field. Our result (5), which we have derived here for the first time based on the SM, can be seen as a nonequilibrium many-body manifestation of the microscopic parity violation. Note that in this mechanism, the damping of the chirality imbalance of electrons due to the finite electron mass is irrelevant, unlike the scenario in Refs. [18, 19]. 1

Taking n_n − n_p ∼ 0.1 fm⁻³, µ_n − µ_p ∼ 100 MeV, µ_ν ∼ µ_e ∼ 100 MeV, T ∼ 10 MeV, |v| ∼ 0.01, and the typical length scale of the system, ∼10 km, we have ξ_B ∼ 10 MeV. 2 While ξ_B here is not associated with the chirality imbalance of electrons, it has the same quantum numbers. From the comparison of Eq. (5) with the expression of the CME, J_CME = µ_5 B/(2π²) [15-17], we may define an "effective chiral chemical potential" µ_{5,eff} = 2π² ξ_B ∼ 100 MeV. Assuming µ_e ≫ T, one can also relate the coefficient ξ_B to the "effective chiral charge" n_{5,eff} [Eq. (6)], where n_e is the number density of electrons.

In general, the backreaction leads to modifications of the total energy current and charge current (except for other dissipative terms) [Eqs. (7) and (8)]. Here, T^{0i}_mat and J^i denote the energy current of the matter sector and the total electric current, respectively, with ε being the energy density, P the pressure, and u^µ the fluid four-velocity. It is, however, more convenient to work with the hydrodynamic equations in the Landau frame by redefining a fluid velocity ũ^µ (such that T^{0i} ∝ ũ^i) [Eq. (9)], where B^µ = ε^{µναβ} u_ν F_{αβ}/2 is the magnetic field defined in the fluid rest frame, with F_{αβ} the field strength of the electromagnetic field. Consequently, we arrive at Eqs. (10) and (11). When assuming the local charge neutrality n_e = n_p, we reproduce the result (5) also in the Landau frame.

1 One might also suspect that the nonzero neutrino mass m_ν can cause the chirality flipping of neutrinos. However, such flipping is suppressed by an additional factor involving m_ν/µ_ν ≲ 10⁻⁸ compared with the weak process without chirality flipping.

2 One should note that this value is sensitive to the choice of the parameters, and it should rather be regarded as an upper bound for ξ_B, similarly to the discussion in Ref. [12].
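Since the displayed equations of this section were lost in extraction, the block below is a hedged reconstruction of the chain of relations described in the text; the equation forms are inferred from the surrounding definitions rather than copied from the source.

```latex
% Hedged reconstruction of the chain of relations in Sec. II; the forms
% follow the prose (the source's displayed equations are not reproduced).
\begin{align}
\Delta T^{i0}_{\nu} &= \mu_{\nu}\,\Delta j^{i}_{\nu}, &
\Delta T^{i0}_{e} &= -\Delta T^{i0}_{\nu}
  \quad \text{(momentum conservation)}, \\
\Delta T^{i0}_{e} &= \mu_{e}\,\Delta j^{i}_{e}, &
\Delta J^{i}_{e} &= -\Delta j^{i}_{e}
  = \frac{\mu_{\nu}}{\mu_{e}}\,\Delta j^{i}_{\nu} \equiv \xi_{B}\,B^{i}.
\end{align}
```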
In general, the backreaction leads to the modifications of the total energy current and charge current (except for other dissipative terms) as Here, T 0i mat and J i denote the energy current of the matter sector and total electric current, respectively, with being the energy density, P the pressure, and u µ the fluid four velocity. It is however more convenient to work on the hydrodynamic equations in the Landau frame by redefining a fluid velocityũ µ (such that T 0i ∝ũ i ) as where B µ = µναβ u ν F αβ /2 is the magnetic field defined in the fluid rest frame, with F αβ as the field strength of the electromagnetic field. Consequently, we have When assuming the local charge neutrality n e = n p , we reproduce the result (5) also in the Landau frame. 1 One might also suspect that the nonzero neutrino mass mν can cause the chirality flipping of neutrinos. However, such flipping is suppressed by an additional factor involving mν /µν 10 −8 compared with the weak process without chirality flipping. 2 One should note that this value is sensitive to the choice of the parameters, and it should be rather regarded as the upper bound for ξ B , similarly to the discussion in Ref. [12]. To make our paper self-contained, we here summarize several relations regarding the CPI; see, e.g., Ref. [23]. Inserting the perturbation of the form δB ∝ e σt+ik·x (with σ being the growth rate of the CPI) into Maxwell equations with the current ∆J = ξ B B, the linear analysis leads to the dispersion relation where k ≡ |k| is the wave number. This σ is positive as long as 0 < k < k crit with k crit ≡ ξ B , and the corresponding critical length is One also finds that σ becomes maximum when k = ξ B /2 ≡ k CPI and the corresponding time and length scales are respectively. We now provide an estimate for the magnetic field generated by this CPI. Although the origin of ξ B is different, the argument here is similar to Ref. [18]. In the ideal Fermi gas approximation, the additional energy density due to µ 5,eff is ∆ = 1 4π 2 (µ 4 5,eff + 6µ 2 5,eff µ 2 e ) . Assuming this whole energy is converted to that of the magnetic field by the CPI, B 2 CPI /2, one can estimate the maximum magnetic field as We will also verify this estimate by numerical simulations of the MHD below (see Fig. 1). Here, the strong magnetic field is generated from the energy temporarily stored in neutrinos, as can be seen from the relation µ 5,eff ∝ µ ν . This new mechanism could potentially explain the origin of the gigantic magnetic field of magnetars. Note that unlike the conventional mechanism for magnetars, the magnetic field generated by the CPI possesses a nonzero magnetic helicity that characterizes the linking structure of poloidal and toroidal magnetic fields [18]. This ensures the stability of the resulting strong magnetic field. In this estimate of the maximum magnetic field, we adopt several optimistic assumptions. To what extent this mechanism is efficient in core-collapse supernovae should be numerically checked by the chiral radiation transport theory for neutrinos in Ref. [11]. IV. INVERSE CASCADE IN CHIRAL MHD In Ref. [23], numerical simulations of the MHD with the current ∆J = ξ B B (chiral MHD) in the protoneutron star were performed. Consequently, the CPI and the subsequent inverse cascade of the magnetic field in the late nonlinear phase for 8 × 10 −4 ≤ ξ B,ini ≤ 2 × 10 −2 (in the units of 100 MeV = 1) are observed; see also Refs. [24,25] for the inverse cascade of the chiral MHD in the context of the early Universe. 
As we discussed above, in this paper we provide a new mechanism for ξ_B, which leads to a rather larger value, ξ_B ∼ 0.1. Although the chiral MHD with this value has not been tested previously, one expects that it would also lead to the inverse cascade of the magnetic field. More generally, one can ask about the physical origin of the inverse cascade of the magnetic field in the present system, unlike in the usual MHD. To address this question, we extend the work [23] to a somewhat wider range of the initial value of ξ_B, including ξ_B ∼ 0.1 (10⁻⁵ ≤ ξ_{B,ini} ≤ 10⁻¹), and clarify the origin of the inverse cascade.

The governing chiral MHD equations are given in Ref. [23]. For the present numerical simulations, we rewrite these equations into a conservative form using the Maxwell equations [Eqs. (17)-(21)]. Equations (17)-(20) correspond to the mass conservation, momentum conservation, energy conservation, and induction equation, respectively. Here, ρ is the rest-mass density, E is the electric field, η is the resistivity, Γ is the ratio of specific heats (which we assume to take the ideal-gas value, Γ = 5/3), I is the unit matrix, and ν is the viscosity. To describe the evolution of n_{5,eff}, we also postulate Eq. (21), similar to the chiral anomaly relation, which stands for the helicity conservation (see, e.g., Ref. [10]). In Eq. (21), the advection, diffusion, chiral separation effect, and cross helicity are ignored for simplicity, as in Ref. [23]. In fact, the total helicity conservation is derived from the spatial integration of Eq. (21) [Eq. (22)], where N_{5,eff} = ∫ n_{5,eff} dV and H_mag = ∫ A · B dV [Eqs. (23) and (24)] are the global effective chiral charge and the magnetic helicity, respectively. Here, A is the vector potential defined by B = ∇ × A.

One can eliminate the electric field in the governing equations above through the modified Ohm's law including the chiral current, E = −v × B + η(J − ξ_B B) [Eq. (25)], where J = ∇ × B is the total electric current.

As for the approximate Riemann solver, the HLLD scheme [26] is used in our code to solve the chiral MHD equations (17)-(21) in conservative form. We use a MUSCL-type interpolation method to attain second-order accuracy in space, while second-order temporal accuracy is obtained by using Runge-Kutta time integration. In addition, the constrained transport method is implemented in our code to guarantee the condition ∇ · B = 0 [27]. Our numerical setups are almost the same as those in Ref. [23] except for several physical parameters. We adopt ν = 0.01 and η = 1 in all our numerical runs. The initial value ξ_{B,ini} (listed in Table I) varies from 10⁻¹ to 10⁻⁵ among the models. In our numerical runs, we resolve λ_crit in Eq. (13) by 10 grid points and take the grid size ∆ = λ_crit/10. The number of grid points in our simulations is fixed (N³ = 128³). However, the size of the calculation domain, L = N × ∆, changes between models because of the variation of ξ_{B,ini}. The typical timescale of the CPI, τ_CPI, and L in each model are also listed in Table I. 3

3 As global simulations with the macroscopic length are computa-
The magnetic helicity H mag is drastically generated in the nonlinear phase. The temporal evolutions of H mag and N 5,eff for model 1 are plotted by red and blue lines in Fig. 2. The vertical axis is normalized by the initial effective chiral charge, N 5,eff,ini . From the point of view of the conservation of the total helicity, the decrease of N 5,eff compensates for the increase of H mag . As discussed later, the saturation level of N 5,eff , which is related to ξ B by Eq. (6), depends on L. The inverse cascade of the magnetic energy and fluid kinetic energy is also observed in all our models. Figure 3 shows several time snapshots of 3D rendering of B x and magnetic field lines in the whole calculation domain of model 1 (ξ B,ini = 10 −1 ) listed in Table I Let us now discuss the physical reason why the magnetic field exhibits the inverse cascade in this system. In the linear phase of our simulation runs, the CPI with the typical wavelength λ CPI is developed as shown in Fig. 3(b). In this phase, the anomaly equation (21) shows that n 5,eff is independent of time and so is ξ B [see Eq. (6)]. This is confirmed in Fig. 5, which shows the temporal evolution of ξ B . On the other hand, ξ B decreases in the nonlinear phase of the simulation runs. From Eq. (14), this results in the increase of λ CPI . This is the origin of the inverse cascade of the magnetic field. Here, as previously reported in Ref. [23], we expect that the decrease of ξ B ends when λ crit ∼ L. In Fig. 5, (2π/L)/ξ B,ini is also shown by a solid black line. As we expected, all color lines approach asymptotically the black line after the linear phase. In this way, we can explain the inverse cascade of the magnetic field by the process of the CPI. Let us look into this mechanism more closely. The evolution of the magnetic field is governed by the induction equation (20), where the second and third terms on the right-hand side are related to the CPI. Here, we evaluate the contribution of the first term to the evolution of the magnetic field. This nonlinear interaction between the fluid and magnetic field is divided into three terms, which correspond to the compression, stretching, and advection terms, respectively. In Fig. 6, 2D We can see that the first three terms corresponding to the nonlinear interaction are relatively smaller than the last two terms. This indicates that the evolution of the magnetic field in our system is described mostly by the diffusion and CME, which is the process of the CPI. The condition that the process of the CPI is dominant in the evolution of the magnetic field is derived from the induction equation (20) This condition is indeed satisfied for Eq. (5) with the parameter choice above and η = 1. Note that this is only the sufficient condition for the inverse cascade. When this condition is not satisfied, then the nonlinear interaction between the fluid and magnetic field is no longer negligible. It would be interesting to study whether the inverse cascade persists in such a case. Finally, in this paper we performed chiral MHD simulations for several given values of ξ B , but it would also be important to perform self-consistent simulations directly using the form of ξ B in Eq. (5).
Making Use of Data on Social Science in Slovakia - First Steps to a National Data Archive

Quantitative research is a standard instrument for testing hypotheses and, in the social sciences in Slovakia, arguably the most frequently used strategy for collecting empirical evidence. Recent developments in information technology and advances in computer-assisted statistical analysis have also contributed much to a considerable increase in the quantitative analysis of electronic data.

At present, there are many potential machine-readable data sources in Slovakia suitable for secondary analysis. Data are produced mostly by academic and educational institutions, institutions of the state administration, for-profit agencies of public opinion and market research, and different not-for-profit foundations and associations.

At the same time, it is possible to observe a growing demand for older research data, for example in the academic environment, since comparative and longitudinal research designs increasingly predominate. The Institute for Sociology of the SAS is also a member of several research networks focused on large-scale comparative research (such as the ISSP - International Social Survey Programme, the EVS - European Values Study, and so on). Participation in such networks is becoming increasingly important: it makes it possible to evaluate the position of Slovakia within Europe (with respect to different subject matters) in a broader and more complex perspective. Additionally, co-operation with partner institutions leaves space for an exchange of experience and knowledge, which generally helps to raise the professional standards of scientific investigation and integrates Slovak scientists into the international scientific community. An interesting example of this transfer of knowledge is the diffusion of national data archives within Europe during the recent forty years. The main ideas of this practical endeavor give us many inspirations and useful stimuli. This was the starting point for thinking about the shift from the current informal practice of storing research data in Slovakia to more complex and standardized archiving methods and procedures.
Comparing the benefits of the two approaches to archiving research data (the informal and the standardized), there are no doubts about the merits of the latter. The major benefits of a data infrastructure lie particularly along the following dimensions:

The archive as an instrument for easy and independent access to research data

Nowadays, an informal way of storing social data prevails in Slovakia, which makes access to the relevant datasets highly sensitive to the passage of time and to staff changes in a research team. Acquiring a dataset means searching on one's own for the history of the relevant research and for the datasets in the memory of the researchers, then contacting the research team that produced the relevant data and negotiating access to these data. However, this time-consuming activity is still no guarantee of getting the data in adequate format and quality, appropriate for use in a secondary analysis.

The general characteristics of systematic archiving of available empirical research data (such as: access to a catalogue of the primary data containing sufficient documentation of the datasets, access to the data itself, datasets concentrated in one place and in compatible formats, quality checks of the data, data security - no inadequate changes or loss of data, transparent rules for data access, exchange and distribution of the datasets, and so on) sound really promising for mastering the shortcomings mentioned before.

The archive as an instrument for the control and improvement of social research

One of the basic ambitions of the scientist is to support his or her hypotheses with evidence, and to do it in a clear and transparent manner. However, several examples of scientific misconduct in history have brought forward the problem of validity and reliability in the social sciences. Independent access to the primary data will enlarge the options for verifying results and for testing research instruments or methods through the secondary analysis of the research data. Therefore, the issue of archiving is very topical in Slovakia.

The archive as an information and communication channel

A summary of available empirical research data will provide users with an overview of the subject areas of research in the country and the researchers engaged in the research projects. This indirect information enables new contacts to be established within the scientific community and enhances direct co-operation in the field of comparative research.

Newsletters and bulletins, which provide news about current research projects, the supply of available data, or offers for co-operation - all these we see as efficient channels for information about what has been done and what is new in the area of empirical research.

All in all, these are the main characteristics that draw attention to the problem of the usage of research data in Slovakia. The recent practical endeavor in this area has resulted in a new project proposal of the Institute for Sociology of the SAS.
The research project 'The Slovak Archive of Social Data' will focus on creating a primary data catalogue and, furthermore, on the systematic documentation of the existing social science datasets. The catalogue of social data will enable domestic as well as foreign researchers to orient themselves in the existing fund of Slovak primary data, and moreover will simplify access to the required datasets.

As we are only at the very beginning, we cannot foresee all the problems connected with establishing a data archive. In the preparatory phase, our endeavor is focused on increasing public understanding of the data archive's mission. The task seems simple: we need to inform the scientific community (via journals, presentations, etc.) and bring the words "archive" and "archiving of social data" into the domestic social research vocabulary. Researchers who work with quantitative data and are engaged in comparative projects usually understand and appreciate the mission of a social data archive. However, there is a large group of researchers who are not so familiar with secondary analysis, and we need to introduce the mission to them in more detail.

The fact is that the archiving of social data has no tradition in Slovakia and its benefits are not obvious to everybody. These days, the positive examples of already established and functioning archives in Europe greatly help to illustrate the actual benefits. Later, when the work on the proposed project starts, the newly established archive itself should serve as a telling example.

We believe that the integration of the new Slovak data archive within the existing network of social science data archives will open wider opportunities for co-operation and the exchange of information within the scientific community.
Evaluation of an Initial Specimen Diversion Device (ISDD) on Rates of Blood Culture Contamination in the Emergency Department

Introduction

Blood cultures are the gold standard for identifying bloodstream infections. The Clinical and Laboratory Standards Institute recommends a blood culture contamination rate of less than 3%. Contamination can lead to misdiagnosis, increased length of stay and hospital costs, unnecessary testing, and antibiotic use. These reasons led to the development of initial specimen diversion devices (ISDDs). The purpose of this study was to evaluate the impact of an initial specimen diversion device on rates of blood culture contamination in the emergency department.

Methods

This was a retrospective, multi-site study including patients who had blood cultures drawn in an emergency department. February 2018 to April 2018, when an ISDD was not utilized, was compared with June 2019 to August 2019, when an ISDD was being used. The primary outcome was total blood culture contamination. Secondary outcomes were total hospital cost, hospital and intensive care unit length of stay, vancomycin duration of use, vancomycin serum concentrations obtained, and repeat blood cultures obtained.

Results

A statistically significant difference was found in blood culture contamination rates in the pre-ISDD group vs. the ISDD group (7.47% vs. 2.59%, p < 0.001). None of the secondary endpoints showed a statistically significant difference.

Conclusions

Implementation of an ISDD reduced blood culture contamination. When implementing an ISDD in a healthcare system, compliance is important and will affect contamination rates dramatically.

INTRODUCTION

Blood cultures are the gold standard for identifying bloodstream infections. However, blood cultures commonly become contaminated with environmental or skin-residing organisms. 1,2 Contaminated cultures can lead to misdiagnosis, increased length of stay (LOS) and hospital costs, and unnecessary testing and antibiotic use. 1-3 National contamination rate recommendations are set at less than 3% by the Clinical and Laboratory Standards Institute (CLSI). 4 However, institutions across the U.S. have varying and often higher contamination rates, ranging from 2-10%. 4

Ascension Via Christi Hospitals, Inc. (AVC) ministries include three separate hospitals: St. Francis, St. Joseph, and St. Teresa. St. Francis is considerably larger and is a Level 1 trauma center with access to multiple specialties. St. Joseph and St. Teresa are smaller tertiary hospitals with fewer resources and less support, including phlebotomy access. Across all AVC ministries, average contamination rates are about 6% for blood draws occurring in the emergency department (ED).

There are many organisms that can lead to contaminated blood cultures, but among the most common are coagulase-negative staphylococci, Corynebacterium species, Bacillus species other than Bacillus anthracis, Propionibacterium acnes, Micrococcus species, viridans group streptococci, and Clostridium perfringens. 1,2,3,5 All of these pathogens can also represent true bacteremia when found in a blood culture. Obtaining accurate blood cultures prevents potential errors in diagnosis, unwarranted laboratory tests, and unnecessary antibiotic use, and lowers total costs hospital-wide. Blood culture contaminants increase total hospital costs by $4,500-$13,000 per contaminant. 2,4,6,7 For the current study, the median of the existing data, $8,750 per contaminant, was used.
Of note, several of the existing financial outcome studies were older, one of them being from over a decade ago. 6 The median number utilized may not reflect the inflation or increased hospital costs that have occurred during that time span. It also did not reflect the change in price for the actual device.

There are many practices that decrease the number of blood culture contaminants, including drawing from independent venipuncture sites, use of alcohol or chlorhexidine swabs prior to puncture, and use of highly trained phlebotomists. 3 Gander et al. 6 evaluated the advantage of utilizing phlebotomists for all blood culture draws in the ED versus regular nursing draws. The study showed a decrease in contamination from 7.1% to 3.1% using phlebotomists. However, this is above the national benchmark of less than 3% contamination.

AVC ministries use a variety of practices when sepsis is suspected. AVC utilizes rapid diagnostic tests to identify gram-positive organisms, especially Staphylococcus and Streptococcus species, within a matter of hours. If sepsis is suspected, patients go through standard practices regarding fluid requirements and antibiotics within one hour, in accordance with guidelines. Alcohol swabs are used in the emergency departments for venipuncture sites, as opposed to chlorhexidine or povidone-iodine swabs. AVC contamination rates have varied when looking specifically at phlebotomist-drawn labs, but the range varies from 1-4% month to month. Consistently, phlebotomy-drawn blood cultures have contamination rates that are close to or under the national goal of less than 3%. Phlebotomy is not used within any of the AVC ministry emergency departments; thus, this study population did not capture any benefit from phlebotomy use.

There are other factors that affect contamination rates as well, including skill level of the staff and educational interventions. In ED environments, it can be challenging to monitor technique and re-educate when needed. 1 To combat these challenges, two initial specimen diversion devices (ISDDs) were produced. Steripath TM is a device that mechanically diverts and sequesters 1.5-2 mL of blood, which is commonly considered the volume most likely to contain skin-residing organisms. It allows for a separate, sterile blood flow pathway to be collected in a closed vein-to-vial path. This was the device piloted during the study. Kurin TM sequesters the first small amount of blood, but this volume is closer to 0.1-0.2 mL of blood. 8 This device also does not use a separate, sterile pathway, as opposed to the Steripath TM device. The use of ISDDs can lower contamination rates by 80% from the baseline contaminant rate, to a total contamination rate of less than 1%. 1,4,9

AVC chose to implement a three-month pilot use of Steripath TM for blood cultures drawn in the ED. The three-month time period was compared to another three months from the year prior. With a decrease of contaminated blood culture rates by 80%, AVC would be below the national recommended rate of less than 3%. For this study, an initial specimen diversion device was evaluated to determine its effectiveness in lowering the contamination rates of blood cultures drawn in the ED.

METHODS

This study was approved by the institutional review board as a retrospective chart review conducted at Ascension Via Christi Hospitals, Inc., that included patients in the emergency department who had a blood culture collected.
The time frame in which an ISDD was used for collection was June 1, 2019 to August 31, 2019. This time frame was compared to February 1, 2018 to April 30, 2018, in which an ISDD was not used. Historically, there were ongoing issues with higher contamination rates despite implementing different practice changes. The two time frames were chosen specifically because it was confirmed that the same practices were used to gather blood cultures and no other practice changes were ongoing during the separate time periods. There was little nursing turnover within the emergency department during this time period, with the majority of the same nurses included within both time periods.

Patients were identified from the electronic health record (EHR) if a blood culture was taken in the ED during those two time periods. Patients who met inclusion criteria were divided into their respective groups and analyzed for the primary and secondary endpoints (Table 1). The cost for the device used in the analysis was $15 per device.

Outcomes. The primary endpoint was total blood culture contamination rate. The secondary endpoints analyzed were total hospital cost, hospital length of stay, intensive care unit (ICU) length of stay, vancomycin duration of use, vancomycin serum concentrations obtained, and repeated blood cultures obtained. The ISDD group had a primary endpoint analysis in both an intent-to-treat manner and a per protocol manner. The per protocol examination excluded all contaminants from the ISDD group in which the ISDD was confirmed not to be used. For the secondary endpoints, hospital length of stay and ICU length of stay were not included if the stay was greater than seven days. This time frame was established to avoid including patients staying in the hospital for longer periods due to reasons other than a blood culture contaminant. Repeated blood cultures taken greater than seven days from the original blood cultures were also excluded from the study. This time frame was established to avoid capturing repeated blood cultures obtained for reasons other than repeating a blood culture after a contaminant.

Statistical Analysis. To detect an 80% reduction between the two groups with an α < 0.05 and a power of 0.80, 235 subjects were needed in each arm. Blood culture contamination and repeat blood cultures were analyzed with a chi-squared test. Hospital length of stay was analyzed with the t-test. All other secondary outcomes were analyzed with the Wilcoxon rank-sum test to account for non-normal distributions.

RESULTS

The study included a total of 3,331 patients. After review of patients' ages, it was confirmed that no patients needed to be excluded due to age. A total of 1,713 patients were included in the pre-ISDD group and 1,618 patients were included in the ISDD group. Baseline characteristics (except confirming age ≥ 18 years old) were not obtained, as they were not expected to change the results.

Study Outcome. The primary outcome, blood culture contamination rate, was 7.47% in the pre-ISDD group vs. 2.59% in the ISDD group (p < 0.001). The per protocol contamination rate was 0.86%. The total hospital cost was $1,120,000 in the pre-ISDD group vs. $383,690 in the ISDD group, providing a difference of $736,310.
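As a check on the primary endpoint, the sketch below reruns the chi-squared comparison in Python. The contaminant counts are back-calculated from the reported rates and group sizes (about 128 of 1,713 and 42 of 1,618) and rounded to whole cultures, so they are approximations rather than the raw counts.

```python
# Sketch of the primary-endpoint chi-squared test. Counts are back-calculated
# from the reported rates (7.47% of 1,713 and 2.59% of 1,618); approximate.
from scipy.stats import chi2_contingency

pre_isdd  = (128, 1713 - 128)   # (contaminated, clean), ~7.47%
with_isdd = (42,  1618 - 42)    # (contaminated, clean), ~2.59%

chi2, p, dof, expected = chi2_contingency([pre_isdd, with_isdd])
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p << 0.001, matching the report
```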
The per protocol hospital cost analysis was $138,690, which provided a difference of $981,310. The hospital length of stay (p = 0.7), ICU length of stay (p = 0.3), and vancomycin duration of therapy (p = 0.19) did not differ significantly (Figure 1). Vancomycin serum levels obtained were 0.085 vs. 0.075 (p = 0.58). Repeat blood cultures were 33 vs. 31 in the two groups, respectively (p = 0.8).

DISCUSSION

Current recommendations set by the CLSI are for blood culture contamination rates of less than 3%. The baseline AVC ED contamination rate was 6%. Our study demonstrated a statistically significant reduction in blood culture contamination rates by implementing an ISDD. Both post-ISDD analyses met the national goal of less than 3%. There was not an 80% reduction in blood culture contamination in the intent-to-treat analysis (65% reduction); however, there was over an 80% reduction in the per protocol analysis (88% reduction).

The primary endpoint showed that an ISDD, when used appropriately, significantly reduced contamination rates. The difference between the intent-to-treat and per protocol analyses showed the real-life application of using an ISDD compared to the true benefit of the device. When the device was being used properly, contamination rates were less than 1%. Like all novel instruments, compliance can be an issue. Over half of the contaminants in the ISDD group were attributed to nurses not utilizing the ISDD. When it could not be determined whether the device was used, the culture was retained in the per protocol analysis, meaning the true benefit of the device was likely not captured adequately. Educational instructions for proper ISDD use were provided prior to implementing the device, and re-education was done with each nurse when a blood culture contaminant was confirmed throughout the study period. Although re-education was occurring, some nurses simply chose not to use the device. There were no reports of the ISDD being hard to use; however, there were reports that when the ED was busy, nurses went with what they were comfortable with. This reality was known prior to the study, which is why per protocol and intent-to-treat analyses were performed to capture the real-world effect. However, due to the retrospective nature of this study, compliance was a major limitation.

The hospital cost analysis showed money potentially saved by using an ISDD. Looking at the per protocol results, almost $1 million could be saved with consideration of microbiology, antibiotic usage, length of stay, and labor costs. Geisler et al. 2 showed an average cost of more than $6,000 per contaminant, with the most influential factors being LOS, daily hospital costs, and antibiotic usage. Many factors influencing the increased costs were avoidable; the largest factor was extended length of stay. 2 This study's estimated cost analysis was unable to find consistency with previous literature regarding cost savings. This study was unable to capture differences within the secondary endpoints; however, there was a significant difference in contamination rates. Based on previously published literature, that difference should have resulted in cost savings. 3,4,6,7 To date, the use of a studied ISDD is the single most effective intervention for reducing costs related to blood culture contamination. 2
AVC utilizes rapid diagnostic testing on all blood cultures, which allows quicker confirmation of a positive blood culture and less time to confirm a contaminant. In the previous studies, rapid diagnostic testing was not used; therefore, they had longer waiting periods for confirmation. This may explain why this study was unable to detect significant differences in our secondary endpoints, and it could lead to the conclusion that an ISDD might provide greater benefit in hospitals that do not utilize rapid diagnostic technology for their blood culture analysis. One of the major limitations of this study was that the secondary outcomes analysis took place from the intent-to-treat analysis only. This study could not capture whether the ISDD was used with every blood culture obtained. AVC did not require documentation of device use every time a blood culture was drawn. This prevented knowing the true benefit of the device in the secondary outcomes. Future studies should look specifically at secondary outcomes when ISDD use was confirmed vs. a time period when an ISDD was not used. A recommendation, if implementing an ISDD in a facility, would be to require documentation of whether the device was used. Another data point not captured within this study, due to its retrospective nature, was the actual organisms being isolated. This information could provide valuable clinical information when comparing the two study arms, especially if there are profound differences in the organisms being identified. Future studies should include this information, as it could capture trends that clinicians should be aware of going forward. Nursing turnover between the two study periods could have been a limitation. Although there was little turnover, there were some nurses included in only one of the time periods. This could change our results, as the benefit could have come from nursing staff expertise as opposed to the device itself. When implementing an ISDD, this study stressed the importance of compliance with the device. Although we were unable to show any statistically significant effect on our secondary outcomes, the analysis was limited by our inability to confirm compliance with the device. As shown through the primary outcome, contamination rates improved when ISDD usage was confirmed. ISDD implementation will have the biggest benefit when there is near 100% compliance.

CONCLUSIONS
This study showed that utilizing an ISDD significantly reduced blood culture contamination. The study also showed that when implementing an ISDD in a healthcare system, compliance is important and will affect contamination rates dramatically. AVC ministries' final decision was to continue using the ISDD due to the proven benefit in reducing blood culture contamination. Although it was shown that barriers to compliance can reduce the benefit of an ISDD, with continuing re-education and increased compliance, contamination rates were expected to decline.
Large‐Volume and Shallow Magma Intrusions in the Blackfoot Reservoir Volcanic Field (Idaho, USA)
The Blackfoot Reservoir volcanic field (BRVF), Idaho, USA, is a bimodal volcanic field that has hosted silicic eruptions during at least two episodes, as recently as 58 ka. Using newly collected ground and boat-based gravity data, two large negative anomalies (−16 mGal) are modeled as shallow (<1 km) intrusions beneath a NE-trending alignment of BRVF rhyolite domes and tuff rings. Given the trade-off between density contrast and model volume, best-fit gravity inversion models yield a total intrusion volume of 50−120 km³; a density contrast of −400 kg m⁻³ results in two intrusions, each ∼9 km × 4.5 km and about 0.5 km thick, with cumulative volume of 100 km³. A network of 340°−360° trending faults lies directly above and on the margins of the mapped gravity anomalies. Most of these faults have 5−10 m throw; one has throw up to ∼50 m. We suggest that the emplacement of shallow sill-like intrusions produced this fault zone and also created an ENE-trending fault set, indicating widespread ground deformation during intrusion emplacement. The intrusions and silicic domes are located 3−5 km E of a regional, 20 mGal step in gravity. We interpret this step in gravity as thickening of the Upper Precambrian to lowermost Cambrian quartzites in the Meade thrust sheet, part of the Idaho‐Wyoming Thrust Belt. Silicic volcanism in the BRVF is a classic example of volcanotectonic interaction, influenced by regional structure and creating widespread deformation. We suggest volcanic hazard assessments should consider the possibility of large-volume silicic eruptions in the future.
• Large-amplitude gravity anomalies are mapped in a combined ground and boat-based gravity survey in the Blackfoot Reservoir volcanic field, Idaho (BRVF), adjacent to young (1.5 Ma, 58 ka) topaz rhyolite domes and tuff rings within a Quaternary basaltic volcanic field
• Best-fit 3D inversion of the gravity data, constrained by density contrast estimates and excess mass calculations, indicates the presence of two bodies with thick sill-like shapes in the uppermost crust, with cumulative volume of ∼100 km³ and volume uncertainty in the range of 50−120 km³
The Blackfoot Reservoir volcanic field (BRVF), located in the northeast Basin and Range of the western USA (Figure 1), is a bimodal volcanic field (McCurry & Welhan, 2012). We use newly collected ground-based and boat-based gravity data to investigate the potential for shallow intrusions associated with an alignment of five silicic domes and explosion craters, erupted approximately 58 ka, in an area called the Central Dome Field (CDF) located within the BRVF (Figure 2a). The CDF includes a network of N to NNW-trending surface faults that are unique to the region in their variable along-strike displacement and en echelon, corrugated map pattern (McCurry & Welhan, 2012; Polun, 2011). These features suggest that these are young normal faults (Ferrill et al., 1999), similar to those produced by volcanotectonic interaction mapped in other volcanic fields (Bursik & Sieh, 1989; Garibaldi et al., 2020; Gottsmann et al., 2009; Mazzarini et al., 2004; Tuffen & Dingwell, 2005) (Figures 2b and 3), inspiring us to further evaluate the potential for shallow intrusions. We use 3D gravity models to explore the potential subsurface geometries that create the observed gravity anomalies we map in the CDF. The models are calibrated with the density of nearby silicic domes and with an excess mass calculation.
We estimate the volumes of the inferred intrusions and the domes to constrain the intrusive to extrusive volume ratio. The locations and displacements of faults (McCurry & Welhan, 2012; Polun, 2011) are compared with the boundaries of the modeled intrusions (Figures 2a, 2b, and 3). Our results suggest that potential future silicic activity may involve comparatively large volumes of silicic magma and could be accompanied by widespread surface deformation. Results also suggest that regional tectonic structures may influence magma ascent and accumulation in the shallow crust, as found in other volcanic systems (Acocella & Funiciello, 1999; Bacon et al., 1980; Deng et al., 2017; White et al., 2015).

Overview of BRVF Geology
The BRVF lies in the transition between the Intermontane Seismic Belt and a seismically quiescent region that includes the Eastern Snake River Plain (ESRP) (Anders et al., 1989). The BRVF sits roughly 200 km from Yellowstone and has experienced eruptions younger than the most recent lava flows at Yellowstone (Eaton et al., 1975). This distributed volcanic field comprises Quaternary scoria cones, basalt flows, rhyolitic domes, and tuff rings (Figure 3). There are three rhyolitic domes at the southern end of the Blackfoot Reservoir, named China Hat, China Cap, and North Cone. These three domes and nearby tuff rings make up a NE-trending volcano vent alignment that defines the CDF (Figure 2b). The bases of the China Hat and China Cap domes are primarily block and ash flows, with surge deposits exposed in a quarry at the base of China Hat dome. The craters of two tuff rings, Burchett Lake and Gronewell Lake, are filled with water. These tuff rings have low outer slopes typical of surge deposits associated with phreatomagmatic eruptions (Figure 2b). The China Cap dome has been dated using ⁴⁰Ar/³⁹Ar, yielding an age of 58 ka (Heumann, 2004). Sheep Island lies on the western side of the Blackfoot Reservoir and is dated to a prior eruptive episode at roughly 1.5 Ma, providing evidence that silicic volcanism in the BRVF is recurrent rather than a singular event. The basaltic lavas of the BRVF erupted from low scoria cones and fissures. Basalt lava flows reach a thickness of 290 m in the CDF, where they surround the silicic vents and cap the underlying geology as a continuous lava flow field. Basalt eruptions in the BRVF have poor age constraints. Some of the lavas from the BRVF flowed out to the southwest into Gem Valley (Figure 1). These have been dated radiometrically between 100 and 25 ka (McCurry et al., 2011). Basalt vent alignments also occur in Gem Valley. Mapping of the surrounding bedrock geology reveals several generations of faults, including NW-trending, SW-dipping thrust faults of the Idaho-Wyoming Thrust Belt (Figures 2 and 3) formed during the Jura-Cretaceous Sevier Orogeny (Armstrong & Oriel, 1965; Dixon, 1982). NW-trending normal faults, perhaps representing two phases of late Tertiary extension, overprint these older faults. In addition to these older structures, there is a third set of distinctive normal faults (Polun, 2011) (Figures 2 and 3) that are only found within the BRVF. We evaluate the origin of these latter faults and their relationships to silicic volcanic vents in light of the gravity anomalies and models described below. Mabey and Oriel (1970) first identified negative gravity anomalies in the CDF, which they interpreted as shallow sedimentary basins.
Using the same gravity data set, Leeman and Gettings (1977) interpreted the gravity anomalies in the CDF to be related to a large silicic magma body (∼330 km³) in the upper 6 km of the crust. Their model of the gravity anomalies in the CDF is consistent with the spatial association of the anomalies with young silicic domes. This interpretation is also consistent with prominent gravity anomalies associated with shallow intrusions elsewhere (Battaglia et al., 2003; Blakely, 1994; Bott & Smithson, 1967; Finn & Williams, 1982; George et al., 2016; Miller et al., 2017; Paulatto et al., 2019). We provide newly collected ground-based and boat-based gravity data that further constrain these anomalies and their relationship to Quaternary volcanoes and faults.

Gravity Data Collection and Processing
New gravity data were collected broadly throughout the BRVF, with higher density sampling in and around the CDF to identify the shapes of the anomalies. These data were merged with the regional database (Keller et al., 2006), consisting almost entirely of data collected by the USGS, including survey data collected by Mabey and Oriel (1970). In addition to ground-based data, we collected boat-based gravity data over the Blackfoot Reservoir. A total of 460 new ground-based gravity measurements were made with a Burris gravimeter (B-38) with measurement precision of approximately 0.003 mGal. Station location was determined using a Trimble R10 and the CenterPoint RTX service, which has a horizontal precision of 3−5 cm and a vertical precision of 7−10 cm (Glocker et al., 2012). After correcting for an instrument drift of ±0.025 mGal/day, the uncertainty on our gravity measurements is ±0.03 mGal. The gravity base station is in the town of Soda Springs, and the same base station was used throughout all of the campaigns. This allowed us to make multiple base readings each day of the survey to accurately capture the instrument drift, which is quite linear for this instrument. Ground-based gravity data reduction included tidal, latitude, atmospheric mass, free-air, spherical cap Bouguer, and terrain corrections (White et al., 2015). These corrections were applied to the new data and to the drift-corrected regional data from the USGS to achieve consistency among gravity data from different sources. The terrain correction was applied in two parts. An inner correction used a 10 m DEM with 20 km radius about each gravity station, and an outer correction used a 30 m DEM with 167 km radius about each station. The DEM data used for the terrain corrections were obtained from the USGS National Elevation Database (NED), and a density of 2,670 kg m⁻³ was used for the Bouguer and terrain corrections, as it is generally accepted as the average density of crustal rocks (Hinze, 2003). Gravity was remeasured at several USGS gravity station locations to use as tie-in points, similar to the procedure in Deng et al. (2017).
[Figure caption fragment: geologic map after Armstrong & Oriel (1965); the Hubbard 25-1 borehole is represented by the green star; red triangles show basaltic vents.]
The ground-based gravity data reveal large amplitude (∼21 mGal) negative anomalies in the CDF with a gravity gradient under the reservoir (Figure 4). We collected over 14,000 data points with a Dynamic Gravity Systems (DGS) Marine Gravity Sensor (AT1M) on a pontoon boat to define the shape and gradient of the gravity anomaly in the reservoir (Figure 5b). This gravimeter is gimbaled to compensate for the accelerations imposed by the motion of the boat.
The same corrections made to the ground-based data were applied to the boat-based data, with additional corrections accounting for the motion of the gravimeter. The Eötvös correction was applied to account for the velocity of the boat, as it adds to or subtracts from the tangential velocity of the gravimeter relative to the rotational axis of Earth, and the acceleration of the platform the gravimeter rests on was accounted for in the inertial reference frame of the vessel (Telford et al., 1990). A correction was made for the mass of water in the reservoir, although this is found to have trivial impact as the reservoir is <10 m deep and changes depth very gradually (Wood et al., 2011). The velocity and acceleration of the vessel were obtained through the differentiation and double differentiation of the GPS position, respectively.
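To make the reduction steps concrete, here is a minimal sketch of the free-air/Bouguer reduction and the Eötvös correction described above, using standard textbook constants (e.g., Telford et al., 1990). The paper applied a spherical-cap Bouguer and a two-stage terrain correction; this sketch substitutes the simple slab form, and the example inputs (latitude, speed, heading) are illustrative assumptions, not survey values.

```python
# Minimal sketch of the principal gravity reductions; constants are
# standard textbook values and the inputs below are illustrative.
import numpy as np

FREE_AIR = 0.3086     # free-air gradient, mGal per metre of elevation
SLAB = 0.04193        # 2*pi*G Bouguer slab term, mGal per m per (g/cm^3)

def simple_bouguer_anomaly(g_obs, g_theor, elev_m, density=2.670):
    """Free-air correction followed by a simple Bouguer slab (density in
    g/cm^3); the paper used a spherical-cap Bouguer plus terrain terms."""
    free_air_anom = g_obs - g_theor + FREE_AIR * elev_m
    return free_air_anom - SLAB * density * elev_m

def eotvos_mgal(speed_knots, lat_deg, heading_deg):
    """Eotvos correction (mGal) for a platform moving at speed_knots."""
    lat, az = np.radians(lat_deg), np.radians(heading_deg)
    return 7.503 * speed_knots * np.cos(lat) * np.sin(az) \
           + 0.004154 * speed_knots ** 2

# An eastward track at 4 knots near the reservoir latitude (~42.7 N,
# assumed) requires a correction of roughly +22 mGal:
print(f"{eotvos_mgal(4.0, 42.7, 90.0):.1f} mGal")
```

The size of the Eötvös term relative to the ∼21 mGal anomalies is why accurate boat velocity (from differentiated GPS positions) matters so much for the marine data.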
The boat-based data were sampled at a rate of 1 Hz on a continuously moving platform, leading to a higher spatial density of measurements on the reservoir compared to the ground-based measurements.
[Figure caption fragment: geologic map compiled in part from Oriel and Platt (1980), showing that the Quaternary basalts cover the valley floor and flowed toward the town of Soda Springs to the south and Gem Valley to the southwest; the faults in the BRVF show a distinctly different trend/orientation relative to the bedrock faults in the adjacent ranges.]
Including all of the boat-based data in our gravity model would cause the region beneath the reservoir to be over-constrained, leaving the more sparsely sampled ground-based regions comparatively under-constrained and less significant in the gravity model. Consequently, the boat-based data were sampled every 100 m along the survey track lines to mitigate over-constraining the region beneath the Blackfoot Reservoir during the inversion. The combined ground-based and boat-based data were further filtered to include only a 780 km² area (3,126 measurements), centered on the two negative CDF gravity anomalies (Figure 5b). This filtering helps to identify longer wavelength, regional signals that underlie the negative anomalies in the BRVF and to separate these shorter wavelength gravity anomalies from the regional gravity, as described in the next section. Both the entire data set and the grid of sub-sampled data used to model the anomalies are provided in the supplementary material. Overall, the data coverage defines the two gravity anomalies in the CDF, including the continuation of one of the anomalies beneath the reservoir as discovered and verified by the boat survey. Gravity stations are most numerous over these anomalies and in the high-gradient areas adjacent to the anomalies. Therefore, the gravity anomalies are well resolved by the gravity station distribution.
[Figure 5 caption fragment: (a) reference map centered on the BRVF used as the bounded region for the inversion; (b) complete Bouguer gravity with all gravity stations represented by colored circles; (c) regional and (d) local gravity anomalies with the gravity stations used in their respective inversions; all gravity stations have a 1 km radius mask to highlight the best constrained areas; blue grid lines show the prism boundaries used in the respective inversions; prisms for the regional model (c) are 4 × 4 km and extend slightly past the data bounds to minimize edge effects; prisms for the local model (d) are 2 × 2 km.]
One exception is the SE corner of the southern negative anomaly. In this area we must rely on the older USGS gravity data set, as access was not permitted to these lands by the current property owner. Nevertheless, dense sampling up to the edge of the property and the older USGS data reasonably constrain the gravity gradient, just not with the resolution obtained elsewhere on the map.

Isolation of the CDF Gravity Anomalies
Gravity anomalies arise from a combination of broader regional effects of the basement structure and shorter wavelength anomalies produced by local mass variations in the shallower subsurface. Separating the local gravity anomalies from the regional gravity signal is paramount to interpreting and modeling the gravity data. The complete Bouguer gravity map of the CDF (Figure 5b) includes two distinct, negative gravity anomalies with magnitude of approximately −21 mGal. These short wavelength anomalies lie within a regional gravity anomaly, with high amplitude positive values (20 mGal) to the west and low amplitude negative (−5 mGal) values to the east (Figure 4). The regional variation does not correlate with the topography, and the transition between the positive and negative values happens over a relatively short distance (∼8 km). This gradient is not linear, but shows a step in the regional gravity that is located 2−3 km west of the rhyolite domes in the CDF (Figure 5b). The regional gravity trend was isolated by removing data more negative than a threshold value of −6 mGal, which was chosen by graphical separation of the local minima within the regional trend (Figure 5c). The filtered data that were removed are interpreted to be the local gravity anomalies. The threshold value used to separate the regional anomaly from the local is subtracted from the local data, and these data are contoured (Figure 5d). The filtered local gravity anomaly has an amplitude of approximately −15 mGal, with clear separation from other sources of anomalous gravity. Adding the two maps (Figures 5c and 5d) gives the original gravity map (Figure 5b).
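The threshold-based separation just described can be expressed in a few lines; the sketch below is our reading of the procedure (values more negative than the −6 mGal threshold contribute to the local anomaly, with the threshold restored to the regional field), not the authors' code.

```python
# Sketch of the graphical regional/local separation described above.
import numpy as np

def split_regional_local(bouguer, threshold=-6.0):
    """Return (regional, local) fields in mGal; their sum is the input."""
    bouguer = np.asarray(bouguer, dtype=float)
    local = np.where(bouguer < threshold, bouguer - threshold, 0.0)
    regional = bouguer - local
    return regional, local

g = np.array([[2.0, -4.0, -21.0],
              [-1.0, -8.0, -15.0]])
reg, loc = split_regional_local(g)
assert np.allclose(reg + loc, g)      # the two maps add back to the original
print(loc.min())   # -15.0: a -21 mGal raw value maps to a -15 mGal local anomaly
```

Note how a −21 mGal raw value becomes a −15 mGal detrended local anomaly, matching the amplitudes quoted in the text.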
The regional, long-wavelength gravity anomaly (Figure 5c) shows a large amplitude positive anomaly (20 mGal) over the range between Gem Valley and the BRVF. A cross-sectional profile from Dixon (1982) (his number 17) depicts the west-dipping Meade thrust fault cutting and displacing the contact between the Precambrian and Cambrian (1−3 km depth). This displacement shallows and thickens quartzites beneath the range on the western edge of the BRVF. We suggest that the observed regional gravity step correlates to the approximate eastern limit of the quartzites that are displaced in the Meade thrust fault. The local gravity anomalies have elliptical shapes, each striking NW−SE. The two negative anomalies are separated by a saddle of higher gravity values (Figure 5d). The domes and tuff rings lie within and near this saddle. The volcano vent alignment is nearly orthogonal in trend to the long axes of the negative anomalies. The faults in the BRVF appear to wrap around the negative anomalies on the west side of China Hat dome and the western margin of Blackfoot Reservoir (Figure 5d).

Constraints on the Local Gravity Model
The two negative CDF gravity anomalies (Figure 5d) represent a mass deficit. We calculate the mass deficit, ΔM, using Green's function (Parker, 1974):

$$\Delta M = \frac{1}{2\pi G}\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}\Delta g(x_i, y_j)\,\Delta x\,\Delta y$$

where Δg(x, y) is the gravity anomaly, G is the gravitational constant, N_x and N_y are the number of grid points in the x (easting) and y (northing) directions, respectively, and Δx and Δy are the grid spacings (500 m) in the x and y directions. This integration of the detrended gravity data gives a mass deficit of −3.5 × 10¹³ kg. For a reasonable range of density contrasts, the mass deficit calculation shows that the causative body of these anomalies is of order one hundred cubic kilometers of material. Hand samples of rhyolite from the China Cap dome yield unsaturated bulk rock densities of 1600−1800 kg m⁻³. The Nettleton and Parasnis approaches to modeling bulk density from gravity profile data (Agustsdottir et al., 2011; Nettleton, 1939; Parasnis, 1952; Saballos et al., 2013) yield a bulk dome density of about 1,700 kg m⁻³ for China Cap dome, which is consistent with bulk silicic dome densities determined using the same methods elsewhere (Agustsdottir et al., 2011). We assume that the density contrast between intrusive silicic rocks and the crust is not as large as the density contrast between the rhyolite dome and the crust, but it may approach this value. Additionally, density estimates of A-type granophyres and rhyolite intrusions are as high as 2400 kg m⁻³ (Lowenstern et al., 1997). The Hubbard 25-1 borehole (Figure 2b), drilled in 1983, provides constraints on the density and lithology of the country rock within the upper crust of the BRVF (Polun, 2011). The well is located approximately 1.5 km south of China Hat and approximately 1 km west of the edge of the southern negative gravity anomaly (Figure 5b). The compensated neutron lithodensity logs contain data that constrain the bulk density as a function of depth within the borehole. The range of densities within the log spans 2600−2800 kg m⁻³, with an average density over the entire 2 km section of 2,700 kg m⁻³ (Figure 6). The lithology within this well alternates between basalts, siltstones, and shales near the surface to interbedded limestones, sandstones, and shales at depth. The thickness of basalts in the uppermost part of the log is approximately 290 m including scoria layers, constraining the thickness of BRVF basalts. We were unable to determine from the logs if the deeper basalts (750 and 1,100 m) are extrusive or intrusive. Nevertheless, we are confident that igneous rocks are present at these depth intervals. Given a mass deficit of −3.5 × 10¹³ kg, for density contrasts of −800 to −300 kg m⁻³ the causative body has a volume range of 50−120 km³. This range of density contrasts is used in our gravity inversion models, and our model results are compared with this range of volume estimates.
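A direct transcription of the discrete excess-mass sum, together with the volume range implied by dividing the reported mass deficit by a span of density contrasts, might look like the following; the example grid and the exact rounding are assumptions.

```python
# Discrete form of the excess-mass integral (Parker, 1974) used above,
# followed by the volumes implied by a span of density contrasts.
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def mass_deficit(dg_mgal, dx=500.0, dy=500.0):
    """Excess mass (kg) from a gridded anomaly in mGal on dx-by-dy cells."""
    dg_si = np.asarray(dg_mgal) * 1e-5    # 1 mGal = 1e-5 m/s^2
    return dg_si.sum() * dx * dy / (2.0 * np.pi * G)
# e.g. mass_deficit(local_anomaly_grid) for the detrended -15 mGal field.

dM = -3.5e13                              # kg, value reported above
for drho in (-300.0, -400.0, -800.0):     # kg/m^3
    v_km3 = dM / drho / 1e9
    print(f"drho {drho:6.0f} kg/m^3 -> V = {v_km3:6.1f} km^3")
# Roughly the 50-120 km^3 range quoted above (rounding differs slightly).
```

Because only the product of density contrast and volume is fixed by the mass deficit, this one-line division is the simplest expression of the trade-off explored more fully by the inversions below.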
Gravity Modeling of Regional and Local Anomalies
Inverse modeling is used to deduce subsurface structure for both the regional and local anomalies (Figures 5c and 5d). Our modeling approach first discretizes the subsurface into a grid of vertical-sided rectangular prisms (i.e., the blue grids in Figures 5c and 5d). We assume a constant density contrast between all prisms and the surrounding bedrock, but the magnitude of this density contrast is solved for during inverse modeling of the gravity data.

Inversion Procedure
Two inversion procedures are used, one to model the regional signal and one for the local anomalies. Regional inversion modeling assumes a single bottom depth for all prisms, while local inversion modeling uses unique top and bottom depths for each prism. Inputs to the inversion include a range for each adjustable parameter value (depth-to-bottom, depth-to-top, density contrast). Both inversions initialize multiple sets of initial parameter guesses, drawn from input ranges specified in a configuration file. The total number of parameter sets is one more than the total number of modifiable parameters. The local inversion model has 391 independent model parameters, resulting in the initialization of 392 unique sets of randomized parameters; the regional inversion model has 58 independent model parameters, resulting in the initialization of 59 unique sets of randomized parameters. The inversion process adjusts and tests these parameter combinations, using a calculated solution for the gravity due to a prism. The gbox solution for gravity (Blakely, 1996), written in C for speed, is used as the forward model. The gravity anomaly associated with each prism is summed across the map area and then compared with observed gravity values interpolated onto a grid. Interpolated and gridded gravity values are used because of variability in the density of gravity measurements across the region and to speed calculations. The grid size for the inversion process is selected by experimentation to minimize the number of model parameters and to best resolve the subsurface structure. Modeling a large number of small prisms often results in an awkward prism solution that requires additional smoothing, which does not necessarily improve the model (White et al., 2015). Our modeling attempts using a large number of small prisms created unrealistic bumps and rapid changes in prism thickness, resulting in an unrealistic model geometry given the relatively smooth variation in the observed gravity. The downhill-simplex optimization algorithm (Nelder & Mead, 1965; Press et al., 2007) is used to identify a best set of model parameters based on a goodness-of-fit test designed to minimize the residual error between the measured data and the calculated solution. We use the root-mean-squared error (RMSE) for this goodness-of-fit test. Typically, 100,000−200,000 forward solutions are calculated to find a best-fit model. Multiple simulations are completed by varying the random seed and prism boundaries to fully explore the model parameter space and to identify local minima.
[Figure 6 caption fragment: Hubbard 25-1 borehole log (location in Figure 2). The average host rock density through the upper 2.5 km in the BRVF is 2,700 kg m⁻³, a higher than average density that adds to the density contrast causing the negative CDF gravity anomalies. The unit from ∼500−600 m was characterized as anhydrite in the log report, although the low density of this unit conflicts with the normally high density of anhydrite (Robertson et al., 1958). Other units are high density sedimentary rocks consistent with the Paleozoic section, and some basalt sequences that may be intrusions within this sequence, capped by Quaternary basalt.]
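The sketch below illustrates the forward-model-plus-downhill-simplex loop in miniature. The study used the closed-form prism solution ("gbox"; Blakely, 1996) written in C; to keep the example short we substitute a point-mass approximation for each prism (valid only when prisms are small relative to the observation distance) and fit just two parameters of a two-prism body with scipy's Nelder-Mead implementation. All geometry and values here are synthetic.

```python
# Toy version of the inversion loop: point-mass prisms stand in for the
# closed-form gbox solution, and Nelder-Mead minimizes the RMSE misfit.
import numpy as np
from scipy.optimize import minimize

G = 6.674e-11   # m^3 kg^-1 s^-2

def forward_gz(xs, ys, centers, side, thickness, ztop, drho):
    """Vertical gravity (mGal) at surface points from point-mass prisms."""
    gz = np.zeros_like(xs, dtype=float)
    zc = ztop + thickness / 2.0                 # prism centre depth, m
    mass = drho * side ** 2 * thickness         # kg per prism
    for cx, cy in centers:
        r2 = (xs - cx) ** 2 + (ys - cy) ** 2 + zc ** 2
        gz += G * mass * zc / r2 ** 1.5
    return gz * 1e5                             # m/s^2 -> mGal

# Synthetic "observed" data from a known two-prism body.
x, y = np.meshgrid(np.linspace(-6e3, 6e3, 25), np.linspace(-6e3, 6e3, 25))
centers = [(-1e3, 0.0), (1e3, 0.0)]
obs = forward_gz(x, y, centers, side=2e3, thickness=500.0,
                 ztop=400.0, drho=-400.0)

def rmse(p):
    thickness, drho = p
    pred = forward_gz(x, y, centers, side=2e3, thickness=thickness,
                      ztop=400.0, drho=drho)
    return np.sqrt(np.mean((pred - obs) ** 2))

best = minimize(rmse, x0=[800.0, -600.0], method="Nelder-Mead")
print(best.x)   # lands on/near the thickness-contrast trade-off valley
```

Even in this toy, thickness and density contrast compensate strongly (their product fixes the mass), which is exactly the trade-off the full study maps with 17 separate inversions (cf. Figure 9).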
Regional Model
The model of the regional gravity field (Figure 5c) is based on the interpretation that a thickening of Precambrian quartzites in the Meade thrust fault exists near the western edge of the BRVF (Dixon, 1982). The prism size used for the regional model is 4 × 4 km, due to the more widely spaced gravity data to the west of the BRVF. We model the regional data with a flat-bottomed geometry to more closely emulate the thickening of quartzites on the west side of the BRVF. The modeled density contrast ranges from 0 to 150 kg m⁻³ and the modeled depth range for the quartzite contact is 0.5−12 km. The model prisms extend slightly beyond the data boundaries to resolve edge effects and better constrain the gravity anomalies at the edges of the model area (Figure 5c). Figure 7 shows the geometry of the best-fit inversion model for the regional gravity data. The depth-to-bottom is 8.1 km; all models solved for a density contrast around 150 kg m⁻³. The average depth-to-top on the western margin of the region is ∼2 km, which is in agreement with the range from Dixon (1982) for the depth to the Precambrian-Cambrian contact (between 1.5 and 3 km).
[Figure 7 caption fragment: inversion results for the regional gravity anomaly. The top perspective image depicts the CDF over the extent of the prisms; prism centers are shown as circles colored and contoured by depth-to-top. The bottom depth of this model is uniform at 8.1 km and the model density contrast is 150 kg m⁻³. The bottom plot is a 3D perspective mesh of the prism tops, colored by depth-to-top. The model shows that a thickening of high density quartzites associated with thrust faulting is a possible cause of the regional anomaly.]
The regional model shows that the quartzites are thickened by 6 km, on average, near the range on the western edge of the BRVF, and that the Precambrian-Cambrian contact sits at roughly ∼8 km depth in the area of the local anomalies of the CDF. The shallowest prisms in the model are in the southwestern region of the model, where they reach a depth of ∼650 m and where the highest gravity values are located (∼20 mGal). The regional model is not able to reproduce the highest gravity values (>18 mGal) without increasing the density contrast, but a higher density contrast does not agree with known densities of quartzite. The model suggests that the regional step in the gravity field is related to the approximate eastern limit of the thickening quartzites in the Meade thrust sheet, but the story is likely more complex.

Local Model of the Igneous Intrusions
Inversion models of the local CDF gravity anomalies (Figure 5d) are constructed using a wide range of potential density contrasts (−100 kg m⁻³ to −900 kg m⁻³). The minimum value for the depth-to-top parameter is 250 m, based on the approximate thickness of the basalt section (McCurry & Welhan, 2012). This lithologic contrast is assumed to introduce a mechanical and compositional boundary that would limit the depth to the top of the intrusions (Kavanagh et al., 2006; Richardson et al., 2015; Wetmore et al., 2009). The maximum value for the depth-to-bottom parameter is constrained to 3 km. Maximum prism depths deeper than 3 km tend to produce anomalies of longer wavelength than the observed anomaly. All best-fit models show two compact bodies in the shallow (<1 km) subsurface that thin toward their margins, giving them a sill-like geometry (Roman-Berdiel et al., 1995); the two bodies have thin or absent prisms between them. Density contrasts between −800 and −500 kg m⁻³ tend to produce geometries with more variation in depth to top of the bodies, a laccolith shape, while density contrasts between −300 and −500 kg m⁻³ tend to produce geometries with more variation in depth to bottom and bodies with flatter tops, a lopolith shape. As in all gravity models, there is parameter compensation in the tradeoff between density contrast and volume.
For example, increasing the density contrast can result in thinner prisms on average, and conversely, decreasing the density contrast can result in thicker prisms. We tested and compiled best-fit models by imposing limits on the density contrast to evaluate the tradeoff between volume and density contrast across the model space. Some of these model results did not have low RMSE. A larger density contrast results in a deeper average depth of the body, but all bodies are relatively shallow (average depth ≤ 1 km). Figure 9 shows the solutions for 17 simulations, each testing 100,000−200,000 parameter combinations. This plot illustrates the tradeoff between density contrast and volume (Blakely, 1994). Solutions have density contrasts between −800 and −400 kg m⁻³ and agree with: (a) the lithology observed in the Hubbard 25-1 borehole, (b) the dome density determined from China Cap hand samples and the Parasnis/Nettleton density analyses (Nettleton, 1939; Parasnis, 1952), and (c) the volume estimates from the mass deficit. A range of reasonable solutions with nearly identical RMSE occurs between density contrasts of −600 to −400 kg m⁻³. These solutions give a range of volume estimates from ∼60 to ∼100 km³. The minimum volume of the anomalous mass is ∼50 km³ with a maximum density contrast of approximately −800 kg m⁻³. The maximum volume of ∼120 km³ is obtained with a density contrast of approximately −300 kg m⁻³, acknowledging that the RMSE is higher for this low density contrast model. In all models the northern body is larger than the southern body. For example, at −400 kg m⁻³ density contrast the volume of the northern anomaly is approximately 60 km³ and the volume of the southern anomaly is approximately 40 km³.

Modeling the Gravity Anomalies as Shallow Intrusions
The new gravity data, combined with previous surveys, identify two large negative anomalies. The addition of boat-based gravity data constrains the western margin of the northern gravity anomaly, which resides largely under the Blackfoot Reservoir. Based on these data and models, we suggest that the large negative gravity anomalies within the CDF are due to high-level silicic intrusions rather than to a sedimentary basin, as inferred by Mabey and Oriel (1970). If the anomalies were produced by sediments, the basin would be thickest toward the center and the anomaly would have a low gravity gradient near its center (Gimenez et al., 2009). Instead, the anomalies show short-wavelength variation where they have the largest negative values. These short-wavelength anomalies indicate that the causative body is actually closer to the surface near the centers of the gravity anomalies. We tested the sedimentary basin model and found poor fits (high RMSE) to the observed gravity data, especially in the center regions of the isolated negative gravity anomalies where the amplitude of the anomalies is high. It is particularly difficult to model basin geometries that create a narrow divide between the two isolated depocenters. Geologic data support the interpretation that the gravity anomalies are related to igneous intrusions rather than to sedimentary basins. One key observation is from the Hubbard 25-1 exploration log (Polun, 2011). The presence of anhydrites in the upper 700 m suggests that the area of the CDF was submerged and gradually infilled by sediments eroded from the adjacent ranges.
However, this section is thin (∼400 m) and has a small density contrast, indicating that it is unlikely the negative gravity anomalies are related to this stratigraphic sequence. Additionally, we note the anhydrite unit in the well log (Figure 6) is logged as a lower density unit, which is inconsistent with the high density of anhydrite (Robertson et al., 1958). It is possible the anhydrite unit is actually misidentified in the log. The rest of the section is dominated by a passive margin sequence characteristic of the Paleozoic section.
[Figure 8 caption fragment: example inversion results for the local gravity anomalies. The modeled density contrast is −400 kg m⁻³; the deepest prism extends to a depth of 2.9 km. Thickness contours of the modeled prism geometry (a) are plotted over a 10-m hillshade DEM with faults, vents, and domes superimposed; model prisms with thickness 100 m are outlined with blue squares that underlie the thickness contours. A 3D perspective of the prism geometry with 2.5 times vertical exaggeration (b) illustrates the separation between the two distinct bodies modeled by the inversion. Basaltic vents and rhyolitic domes are represented by red and black triangles, respectively; faults are marked by black lines with fault throws; the location of the Hubbard 25-1 borehole, detailed in Figure 6, is depicted by a green star (a) and green cylinder (b). Animations of the 3D rendering can be found in the supplementary material.]
There is an absence of clear basin-bounding normal faults on the eastern and western margins of the BRVF, whereas sedimentary basins in the region have clear basin-bounding faults. The west margin of the modeled intrusion coincides with a west-dipping fault with the largest vertical offset (50 m) observed in the BRVF. This sense of offset is consistent with deformation during the emplacement of shallow intrusions (Acocella, 2000; Acocella et al., 2002; Castro et al., 2016). We note that the sense of offset is opposite of that which would be expected if the fault bounded a sedimentary basin. There are plenty of basins in the region, Gem Valley for example, but all are elongate parallel to basin-bounding faults and none of them exhibit this pattern of faulting. The density contrast between the interpreted intrusions and the country rock is a source of uncertainty. The density of the country rock is constrained to be approximately 2,700 kg m⁻³. The densities of dome rocks, 1,700 kg m⁻³, are likely too low and produce too high a density contrast compared to the high-level intrusive equivalents of these dome rocks. Rhyolite melt densities are typically 2,350−2,400 kg m⁻³ (Bachmann & Bergantz, 2004), which would produce a density contrast of approximately −300 to −350 kg m⁻³. Granites are created from rhyolite magmas in the midcrust through crystallization of dense mineral phases, filter pressing, and compaction, all of which leave a lower density residual melt that can ascend to high crustal levels or erupt (Bachmann & Bergantz, 2004). These bodies, interpreted to be high-level intrusions, are also well below saturation pressures for volatiles in silicic magmas, and so may be porous and may be fractured and altered during cooling.
[Figure 9 caption fragment: the trade-off between density contrast and volume of bodies associated with the local gravity anomalies (Figure 8) is illustrated using 17 different inversions. Each circle represents an inversion result; the size/color of the circle corresponds to the goodness-of-fit (RMSE) of the inversion. Inversion results give a minimum intrusion volume of 50 km³ with a maximum density contrast of −800 kg m⁻³. A range of reasonable solutions between −600 and −400 kg m⁻³, with respective volumes between approximately 60 and 120 km³, is identified by the blue box.]
Both of these processes result in lower bulk rock density. For example, 10% saturated bulk porosity in a rock of nonporous density of 2,350 kg m⁻³ yields a bulk density of 2,260 kg m⁻³, or a density contrast of −440 kg m⁻³. Density contrasts of around −600 kg m⁻³ are used to model gravity anomalies associated with other high-level intrusions (Acocella, 2000; Miller et al., 2017). We suggest a reasonable range of density contrasts between the intrusions and the country rock is −600 to −400 kg m⁻³. All models in this range of density contrasts produce two elliptically shaped bodies, each with approximate map dimensions of ∼9 km × 4.5 km. Altering the density contrast in this range results in a thickening or thinning of the intrusions while the horizontal dimensions remain relatively constant (Figure 10). This range of density contrasts corresponds to cumulative intrusion volumes of 60−100 km³. A density contrast of −400 kg m⁻³ yields an intrusion volume of approximately 100 km³. Both gravity anomalies, and by inference the intrusions, are slightly elongate NW, perpendicular to the NE (approximately 35°) alignment of silicic domes (Figure 5d). This geometry is consistent with the high-level intrusion model proposed by Vigneresse et al. (1999). In the absence of a substantial volume of intrusion, the unperturbed stress state in the region is extensional, with σ1 vertical and equal to lithostatic pressure in magnitude. A fracture or dike will propagate vertically and perpendicular to the least principal compressive stress, σ3. From the vent alignment we infer that σ3 is oriented approximately 125°. As the intrusion shallows, the magma pressure exceeds the lithostatic pressure, causing a stress rotation, with σ3 becoming vertical and resulting in horizontal intrusion. σ2 becomes oriented approximately 125° and σ1 approximately 35°, allowing the intrusion to grow faster in a NW-SE direction, perpendicular to the trend of the vent alignment.

Emplacement Related Deformation
The coincidence of the edges of the negative gravity anomaly with dramatic, if relatively small displacement, faults points to volcanotectonic interaction during intrusion and silicic dome eruptions (Bursik & Sieh, 1989; Bursik et al., 2003). The faults in the BRVF extend from just north of the town of Soda Springs through the Blackfoot Reservoir, only cutting through bedrock at the surface near the southern end of Pelican Ridge (Figure 2a). While Polun (2011) placed the eastern limit of the rift zone at the discontinuous Hole in the Rock-China Hat fault, we believe, based on topographic data available through the Idaho LiDAR Consortium (Figure 10), that the eastern margin of the rift is an unnamed fault located along the western slopes of the Fox Hills, extending north to the east of the Blackfoot Reservoir (Figure 2). The maximum E-W width of the faulting in the BRVF, at the latitude of China Hat, is ∼10.7 km. The faults in the BRVF are primarily NNW to NNE-trending and exhibit both east and west dips. The western portion of the fault system in the BRVF includes a prominent nested graben trending N to NNW, with the most topographically well-defined portion located just west of the rhyolite domes (Figure 10).
The graben is bounded on the west by the east-dipping Government Road Fault, which has a prominent scarp as much as 50 m high. The Government Road Fault is flanked on its west in its central portion by two additional east-dipping faults with scarps as large as 15 m (Figures 2 and 10). The eastern side of the graben is defined by the west-dipping Hole in the Rock and China Hat faults, which appear to be separated by a small left step just north of the China Hat dome (Figures 2 and 10). The graben appears to be floored by a loess-covered surface that is composed of the lavas from several basaltic vents including Red Mountain. The surface steps down 100 m from west to east across a series of east- and west-dipping faults, creating narrow (∼50−150 m) full and half grabens separated by relatively broad (∼250−750 m) horsts. Throughout the broader graben the surface is typically flat or dipping slightly (<3°) east, a slope that appears to have been, at least in part, present before the youngest phase of faulting, based on profiles outside the graben to the north and south. Polun (2011) estimated horizontal extension across the graben from fault displacement and dip. These estimates suggest that the portion of the horst and graben system most proximal to the CDF has the largest magnitude of horizontal extension, ranging between 75 and 200 m depending on the fault dips. The total extension is taken to be a minimum because the estimates did not include all of the faults on the eastern extent of the fault system. The estimates based on minimum extension (i.e., fault dip of 70°) indicate increases from single digits to >50 m over a distance of 4−5 km on either side of the CDF. Based on these data, it appears that extension in the BRVF is greatest adjacent to the gravity anomalies and silicic domes, consistent with faulting during emplacement and/or draining of the intrusions. A set of ENE-trending faults is found only directly overlying the intrusions, especially SW of China Hat dome. These faults appear to be unrelated to the normal tectonic setting of the BRVF. Instead, these faults may have formed during uplift and possibly deflation associated with the intrusions, perhaps associated with the extrusion of magma at the nearby domes (Figure 5d). This ENE-trending fault set is far less pronounced than the other faults in the BRVF (Figure 2b). The average throw across faults in this set is 1−2 m with a maximum of ∼10 m. Most of the faults are north-dipping, with the exception of one in the northern third of the set and the three southernmost faults. Acocella and Funiciello (1999) show that roof lifting associated with the emplacement of a laccolith is viable in producing significant uplift over the intrusion as well as faulting at the margins of the intrusion. We suggest that the pattern of diffuse faulting at the surface is associated with the emplacement of the modeled intrusions and draining of the shallow magmatic system during eruption of the CDF rhyolite domes. The highly faulted graben on the west end of the CDF has the greatest extension and lies on the margin of the modeled intrusion geometry. This shows a spatial correlation between the margins of the intrusion and the greatest structurally accommodated extension (Spinks et al., 2005). The amount of horizontal extension that is accommodated is at minimum 75−200 m in the CDF. Castro et al.
(2016) have shown that shallow (20−200 m), rapid intrusion of laccoliths can produce large uplift (>200 m) and deformation at the margins of the intrusion. In the BRVF, we observe the highest magnitude of faulting near the CDF and gravity anomalies, with waning surface deformation north and south of the gravity anomalies.
[Figure 11 caption fragment: synthesis of data and the interpretation of our model. (a) The map shows a perspective of the CDF obliquely perpendicular to the strike of the gravity anomalies, parallel to the basaltic vent distribution, overlain by the complete Bouguer gravity anomaly as well as the regional and local faults, vents, and domes; the cross section line AA′ runs SW−NE from Gem Valley to the Meade Thrust. (b) The complete Bouguer gravity anomaly and elevation along the profile line show the gravity high on the western side of the BRVF and the two gravity lows in the CDF, separated by a saddle, or relative gravity high bounded by two gravity lows. The cross section (c) illustrates the schematic interpretation of the gravity anomalies in the BRVF and the structural geology of Dixon (1982): the gravity high on the west side of the BRVF correlates to the relative shallowing of Precambrian/Proterozoic quartzites via the Meade Thrust (Figure 7), and the gravity lows in the CDF are interpreted as shallow rhyolitic intrusions associated with the domes at the surface and faulted, uplifted topography (Figure 8). The source of the shallow intrusions is inferred to be much deeper (∼14 km) and likely related to the source of the rhyolitic magmas (dashed lines) (McCurry et al., 2015).]
Our model suggests that shallow silicic intrusions were emplaced, uplifted the BRVF, and generated ancillary networks of faults, similar to Cordón Caulle (Castro et al., 2016). Overall, the CDF and its twin gravity anomalies are closely associated with faulting on several scales. The area of the CDF is marked by two negative gravity anomalies, interpreted to be sill-like intrusions, and by faulted topography (Figures 11a-11c). These faults wrap around the two gravity anomalies, especially on the west side of the reservoir, a fault pattern that is consistent with deformation associated with intrusions. In a more regional context, the BRVF is situated in a complex tectonic setting that may influence the locations of these intrusions. The regional gravity anomaly and model are explained by thickening of a dense quartzite by thrust faulting (Figure 11c). Such regional density contrasts in the crust are interpreted to influence magma ascent elsewhere (Deng et al., 2017), possibly explained by changes in stress trajectories associated with the differential loads caused by these broad lithologic variations (Connor et al., 2000; Rivalta et al., 2019).

Implications for Volcanic Hazards and Geothermal Exploration
The two anomalies may indicate silicic intrusions emplaced at two different times, as indicated by the differing ages of BRVF silicic domes. The CDF alignment erupted approximately 58 ka, and the Sheep Island dome, forming an island on the west side of the reservoir, erupted approximately 1.5 Ma (McCurry & Welhan, 2012). This difference in dome ages is consistent with at least two episodes of intrusion. Observations of recent high-level silicic intrusions and eruptions indicate that activity frequently involves a complex series of events (Castro et al., 2016; Jay et al., 2014; Miller et al., 2017; Shaffer et al., 2010).
If the intrusions in the BRVF formed coevally with the effusion of the domes, similar to the high-level intrusion at Cordón Caulle (Castro et al., 2016), then it is likely that the northern intrusion was emplaced, in a separate event, prior to the southern intrusion. The multiple vents of varying ages, the two gravity anomalies, and the spatial association with the basaltic volcanic field all indicate that the possibility of future intrusions and dome eruptions should be assessed. The potential for future silicic eruptions in dominantly basaltic volcanic fields changes the way volcanic hazards need to be estimated (Duffield et al., 1980; Ewert et al., 2005; Jónasson, 2007; Kósik et al., 2020; Riggs et al., 2019). The CDF events preserve evidence of explosive volcanism, but are comparable to or smaller in volume than nearby and more abundant basaltic eruptions. The interpretation of the two gravity anomalies as being caused by large-volume and shallow silicic intrusions changes the hazard, since it indicates these eruptive episodes could have evolved into much larger magnitude and more intense eruptions with widespread effects. Even as intrusions, deformation appears to be associated with the emplacement of these shallow bodies, and is of much larger amplitude than identified in most basaltic volcanic fields. Such intrusions and their associated silicic eruptive vents are widespread. Other examples include the large-volume exogenous and endogenous silicic domes erupted on the Eastern Snake River Plain: the Buckskin Dome and Ferry Butte south of the town of Blackfoot, and Yandell Mountain southeast of Blackfoot (Figure 1). The CDF domes and tuff rings are small-volume compared to these features (0.46 km³), but the approximately 100 km³ of the BRVF intrusions is large compared to these other features. For this volume, the intrusive to extrusive ratio for silicic volcanism is 217:1, but recognizing the range of reasonable volumes from the tradeoff curve (Figure 9) gives an intrusive to extrusive ratio between 109:1 and 261:1. While the modeled intrusions are high-volume compared with the mapped eruptive products, we note they are less than one-tenth the volume of the largest caldera eruptions and their intrusive magmas (Gregg et al., 2012; Takarada & Hoshizumi, 2020). Eruption magnitudes are classified using order of magnitude changes in volume. In this context, although uncertainty in the volumes of the intrusions is high because of uncertainty in the density contrast, the volume range is consistent with moderately large volume explosive eruptions. The ages of the eruptions within the BRVF (∼60 ka and 1−1.5 Ma) suggest a return period of approximately 1 million years. This is a low hazard rate, but it also has high uncertainty with only two constraining events. For comparison, the domes of the ESRP span from 1.4 to 0.309 Ma and yield a return period of 270 ka (Kuntz et al., 2003). It is possible that the BRVF could have a return period for eruptions similar to the ESRP, considering that the volcanism is chemically congruent (McCurry & Welhan, 2012), but has yet to experience enough volcanism to reflect that similarity.
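The ratio and recurrence figures quoted above follow from simple arithmetic on the modeled and mapped volumes; the short sketch below reproduces them, using the event ages and volumes given in the text.

```python
# Back-of-envelope checks on the numbers quoted above.
extrusive_km3 = 0.46                       # mapped CDF domes and tuff rings
for intrusive_km3 in (50.0, 100.0, 120.0): # modeled intrusion volume range
    ratio = intrusive_km3 / extrusive_km3
    print(f"{intrusive_km3:5.0f} km^3 -> intrusive:extrusive ~ {ratio:3.0f}:1")
# -> 109:1, 217:1, 261:1, matching the text.

# Interval between the two dated silicic episodes brackets the return period:
for older_episode_ma in (1.0, 1.5):        # "1-1.5 Ma" episode
    print(f"interval ~ {older_episode_ma - 0.058:.2f} Myr")
# -> 0.94-1.44 Myr, i.e. "approximately 1 million years".
```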
The current geochemical model for the BRVF includes a deeper magma storage system at ∼14 km depth (McCurry et al., 2015). Our model is consistent with the conceptual model of a deep magma source. Figure 11c depicts our model of the upper 9 km of the crust spanning from Gem Valley to the northeastern extent of the BRVF. As discussed in previous sections, the gravity high on the west side of the BRVF correlates with the shallowing of Precambrian/Proterozoic quartzites, and the two gravity anomalies in the CDF separated by a saddle are related to shallow silicic intrusions. The wavelengths of the gravity anomalies are short compared to anomalies that would be produced by magma at 14 km depth, and therefore are not related to a deep or midcrustal source. Welhan et al. (2014) investigated heat flow anomalies from the surrounding region to help assess potential geothermal resources and showed that the heat flow directly above the modeled intrusions is low, while the heat flow to the northwest near the trace of the Meade Thrust is much higher. Given that the modeled intrusions are shallow and thin, and were likely emplaced during or before the time of emplacement of China Hat (∼58 ka), they may have completely cooled.
1. A new gravity survey of the BRVF reveals two negative gravity anomalies underlying and adjacent to late Pleistocene silicic domes and tuff rings. These anomalies, after detrending, have amplitudes up to −16 mGal and ellipsoidal shapes, elongated NW.
2. The anomalies are modeled as two shallow silicic intrusions. In map dimensions, each is approximately 9 × 4.5 km. Given the uncertainty in the density of the intrusions, their combined volume is estimated to be in the range of 50−120 km³. Calculated using a density contrast of −400 kg m⁻³, the northern intrusion has a volume of approximately 60 km³ and the southern intrusion has a volume of approximately 40 km³.
3. Significant deformation appears to have accompanied the emplacement of these intrusions. NNW-trending fault sets bound the intrusions, with the largest displacement (50 m) observed on any fault in the BRVF immediately adjacent to the southern intrusion. The gravity anomalies are overlain by ENE-trending faults, which may have formed during emplacement and possibly deflation. It is possible that the ascending magma exploited faults in the BRVF and that its ascent was influenced by crustal-scale structures associated with thrust faults.
4. At least one and likely two episodes of large-volume and shallow intrusion have occurred in the bimodal BRVF. Had these magmas not stalled in the shallowest crust, they would have produced moderately large magnitude eruptions that would have affected broad areas. We suggest identification and quantification of shallow intrusions may help better quantify volcanic hazards in bimodal volcanic fields. Given the tradeoff between density contrast and volume, the intrusive to extrusive volume ratio for silicic volcanism in the CDF is between 109:1 and 261:1.
Data Availability Statement
Datasets for this research are available at Hastings et al. (2021).
Role of CCCH-Type Zinc Finger Proteins in Human Adenovirus Infections
The zinc finger proteins make up a significant part of the proteome and perform a huge variety of functions in the cell. The CCCH-type zinc finger proteins have gained attention due to their unusual ability to interact with RNA and thereby control different steps of RNA metabolism. Since virus infections interfere with RNA metabolism, dynamic changes in the CCCH-type zinc finger proteins and virus replication are expected to happen. In the present review, we will discuss how three CCCH-type zinc finger proteins, ZC3H11A, MKRN1, and U2AF1, interfere with human adenovirus replication. We will summarize the functions of these three cellular proteins and focus on their potential pro- or anti-viral activities during a lytic human adenovirus infection.
Zinc Finger Proteins
Zinc finger proteins are a big family of proteins with characteristic zinc finger (ZnF) domains present in the protein sequence. The ZnF domains consist of various ZnF motifs, which are short 30-100 amino acid sequences coordinating zinc ions (Zn2+). Other metals, such as cobalt, copper, nickel, and cadmium ions, can also associate with the ZnF motifs and compete with zinc ions when binding to the ZnF motifs [1]. The Xenopus laevis transcription factor TFIIIA has played a key role in understanding ZnF structures. This essential protein contains nine consecutive, 30 amino acid long sequence motifs, which fold around zinc ions and form a structure that visually resembles a "finger", hence the name zinc finger [2,3]. The cysteine (C) and histidine (H) residues are important for the ZnF motifs: they coordinate the zinc ions and thereby stabilize the ZnF structure. A classic example here is the TFIIIA ZnF motif, where the zinc ion is maintained by two cysteine and two histidine residues (C2H2). Substitutions of the zinc-coordinating histidine residues within the ZnF motifs inhibit TFIIIA function as a transcription factor [4,5]. Not all zinc-coordinating histidine substitutions within individual ZnF motifs affect TFIIIA function, indicating that ZnF motifs are not functionally equivalent [4,5]. The last four decades have revealed that ZnF-containing proteins are very abundant in eukaryotic cells. It has been estimated that at least 3% of all human genes encode ZnF proteins, with abundant C2H2 and C4 motifs [6]. Even though many of the ZnF proteins contain the classical C2H2 motif, there are still numerous proteins with atypical ZnF motifs [7,8]. These non-classical ZnFs have slightly different cysteine and histidine combinations, which can be used to discriminate between different ZnF types. For example, in the CCCH-type ZnF proteins, a zinc ion is coordinated by three cysteines and a single histidine (C-x-C-x-C-x-H), whereas in the CCHC-type ZnF, the two cysteines are followed by the single histidine and cysteine (C-x-C-x-H-x-C) residues to coordinate the zinc ion.
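As an illustration of how such motif definitions are used in practice, the sketch below scans a protein sequence for CCCH-like patterns with a regular expression. The spacer-length ranges are a common choice for CCCH fingers and are our assumption, not a definitive motif definition; the input sequence is synthetic.

```python
# Illustrative scan for CCCH-type ZnF motifs (C-x-C-x-C-x-H) using
# assumed spacer ranges; real CCCH fingers vary in spacing.
import re

CCCH = re.compile(r"C.{6,10}C.{4,6}C.{2,4}H")

def find_ccch(seq):
    """Return (start, matched_motif) pairs for CCCH-like motifs."""
    return [(m.start(), m.group()) for m in CCCH.finditer(seq)]

# Synthetic sequence built to contain one C-x8-C-x5-C-x3-H motif.
toy = "MSTD" + "C" + "A" * 8 + "C" + "A" * 5 + "C" + "A" * 3 + "H" + "GGKL"
print(find_ccch(toy))   # one hit spanning the constructed motif
```

A CCHC pattern would be scanned analogously by reordering the cysteine and histidine positions in the expression.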
To accomplish its replication, HAdV encodes multiple proteins and non-coding RNAs that interfere with the normal function of cellular proteins, thereby reprogramming the host cell transcriptome and proteome to more effectively produce new virus progeny [43][44][45]. Analogously to other DNA viruses, the HAdV gene expression pattern is subdivided into an early and a late phase based on the expression timing of the respective genes. Generally, HAdV early genes (e.g., E1A, E1B, E2, E3) encode viral proteins involved in suppression of the host cell response, deregulation of the cell cycle, and initiation of virus DNA replication. The majority of the late phase-specific genes are encoded from a single transcription unit, the so-called major late transcription unit (MLTU), which becomes highly active after initiation of virus DNA replication. Most of the HAdV late genes encode structural proteins of the virion, such as hexon, penton, fiber, and protein VII, which are needed to assemble infectious virus particles [43]. Typically, HAdV infections cause cell lysis (i.e., lytic infection), even though some HAdV types can also establish long-term persistent infections in T and B cells [46,47]. HAdVs seem to encode only one known ZnF protein, the E1A-289R protein, which contains a C4-type ZnF motif in the conserved region 3 (CR3) part of the protein [48]. The E1A-289R protein is an intrinsically disordered protein that functions as a hub, mediating primary interactions with more than 50 cellular proteins. The E1A ZnF is believed not to make direct RNA contacts, but instead mediates protein-protein interactions with different cellular transcription factors [49].

ZC3H11A as a Pro-Viral Factor Promoting HAdV-5 mRNA Export

ZC3H11A is a CCCH-type ZnF protein with three ZnF motifs present at the beginning of the N-terminus of the protein (Figure 1). Structural prediction programs suggest that most of the protein, excluding the ZnF and coiled-coil domains, is to a large extent intrinsically disordered. For a long time, ZC3H11A functions remained elusive. The first indication of a potential function came from two large-scale proteomics studies demonstrating that ZC3H11A is one of the components of the Transcription-Export (TREX) complex [53,54]. TREX is a multiprotein complex that is conserved from yeast to humans and serves a key function in nuclear export of mRNAs [55]. The TREX complex interacts with the mRNA capping complex and the exon junction complex (EJC), thereby integrating TREX into the nuclear export pathway of capped and spliced mRNAs [53]. This also explains why the ZC3H11A protein is found to interact with various mRNA capping and splicing factors [56,57]. Despite the fact that the ZC3H11A protein co-purifies with the individual members of the TREX complex, its exact role in the complex still remains, to a large extent, enigmatic. Interestingly, elimination of the ZC3H11A protein with an siRNA approach increased accumulation of polyadenylated mRNA in the cell nucleus, suggesting a direct involvement of ZC3H11A in mRNA export [53].
It has recently been shown that CRISPR/Cas9 knock-out of ZC3H11A in HeLa cells (hereafter referred to as ZC3H11A KO cells) does not impair HeLa cell growth under normal conditions [30]. However, ZC3H11A KO cells showed retarded growth after heat shock, suggesting that ZC3H11A is a stress-induced protein protecting cells against the harmful effects of stress. Similarly, a virus infection can also be regarded as a stress inducer. In fact, in ZC3H11A KO cells, multiple nuclear-replicating viruses (HIV-1, HAdV-5, influenza virus (IAV), and herpes simplex virus (HSV-1)) were stalled in their growth, whereas cytoplasmic replicating viruses (vaccinia virus (VACV) and Semliki Forest virus (SFV)) were not [30]. These data indicate that nuclear-replicating viruses have evolved to take advantage of the stress-induced ZC3H11A protein to facilitate virus growth, something that cytoplasmic replicating viruses cannot exploit since they do not, to the same extent, depend on the nuclear TREX export machinery for their growth. Furthermore, it was shown that ZC3H11A, via its three ZnF motifs (Figure 1), binds to short purine-rich sequences in cellular and HAdV-5 RNAs [30]. However, the binding specificity of ZC3H11A changed, with a more complex binding motif identified in HAdV-5-infected cells. This change in specificity was also reflected in a significant change in the cellular mRNAs targeted by ZC3H11A in HAdV-5-infected cells compared to uninfected cells. In general, the cellular mRNAs targeted by the ZC3H11A protein are involved in mRNA metabolic processes, pre-mRNA splicing, and the cellular response to stress.
In ZC3H11A KO cells, the viral fiber mRNA was retained in the nucleus, lending support to the hypothesis that the ZC3H11A protein facilitates viral mRNA export. In line with the defect in virus mRNA export, the expression of the viral late proteins was also affected in ZC3H11A KO cells (Figure 2). However, it is noteworthy that accumulation of all late structural proteins except the hexon protein was drastically reduced in ZC3H11A KO cells, suggesting the interesting possibility that the hexon mRNA may use an alternative export pathway [30].

Figure 2. Increased accumulation of the ZC3H11A protein promotes selective virus late mRNA (e.g., fiber mRNA) nuclear export during the late phase of HAdV-5 infection (wild-type). Lack of the ZC3H11A protein (ZC3H11A KO) reduces HAdV-5 late mRNA nuclear export, virus late protein synthesis, and formation of infectious virus progeny. ZC3H11A interference with the NF-κB signaling pathway prevents expression of pro-inflammatory genes, a block that is relieved in ZC3H11A KO cells. Figure created with Biorender.com.

The nuclear localization of ZC3H11A changed during a HAdV-5 infection [30]. In uninfected cells, ZC3H11A accumulates in nuclear speckles, which are storage sites for RNA processing factors [58]. In HAdV-5-infected cells, the localization of ZC3H11A, along with the cellular splicing factor SRSF2, changed dramatically to form foci at the so-called viral replication centers, where viral DNA replication, transcription, and RNA processing take place [59]. Interestingly, the ZC3H11A protein levels have been shown to increase in both HAdV-2 and HAdV-5 infected cells, specifically during the late phase of infection [30,60].
This is an unusual phenomenon since a HAdV infection in general hinders translation of most cellular mRNAs during the late phase of infection [60]. The increase in ZC3H11A protein accumulation was not accompanied by a similar increase in ZC3H11A mRNA expression, suggesting that the increase in protein expression is regulated at the level of translation or via post-translational mechanisms [30]. A high-throughput RNA sequencing experiment in ZC3H11A KO cells revealed a significant up-regulation of innate immune-related mRNAs, especially those downstream of the NF-κB and interferon type 1 signaling pathways [61]. Hence, ZC3H11A appears to be a factor involved in negative regulation of the NF-κB signaling pathway (Figure 2). The ZC3H11A protein has been found to colocalize with the splicing factor SRSF2 and the m6A (N6-adenosine methylation) reader protein YTHDC1 in nuclear speckles [62]. Further, m6A is the most abundant RNA modification in eukaryotes and is regulated by "writer" proteins, which deposit the methyl group onto mRNAs; "reader" proteins, which define the fate of the m6A-modified mRNAs; and "eraser" proteins, which remove the m6A signal from mRNA [63]. Interestingly, a recent study found that the m6A reader protein YTHDC1 was redistributed into the viral replication centers in HAdV-5-infected cells [64]. The same study also showed that elimination of the YTHDC1 protein significantly reduced HAdV-5 late fiber mRNA splicing. Taken together, a lack of the ZC3H11A or YTHDC1 proteins seems to specifically affect fiber mRNA biogenesis. Since mRNA splicing is coupled to export, it is possible that ZC3H11A may export m6A-modified and YTHDC1-bound viral mRNAs, such as fiber mRNA, to the cytoplasm. Abnormal ZC3H11A expression or protein interactions have been found in several human diseases and cancers. For example, the expression level of ZC3H11A is significantly higher in breast cancer tissues than in normal tissue [65,66]. It has also been shown that there is significant overexpression of ZC3H11A in mutant KRAS lung adenocarcinomas [67]. KRAS is a proto-oncogene, and its mutations are the most common molecular alteration found in non-small cell lung cancers, hence representing one of the predictors of poor prognosis in this cancer [68]. Further, it has been reported that ZC3H11A associates with a mutant version of the nuclear matrix protein Matrin-3, which has been found in patients with amyotrophic lateral sclerosis (ALS) [69]. This study showed that ALS-linked mutations increase Matrin-3 co-localization with the TREX complex components, which may explain the nuclear mRNA export defects in ALS patients [69].

MKRN1 as a Potential Anti-Viral Factor in HAdV-5 Infection

Makorin ring finger protein 1 (MKRN1) is another CCCH-type ZnF protein shown to be engaged in the HAdV-5 lifecycle [50]. The MKRN1 protein consists of four C3H1-type motifs and one C3HC4-type RING finger domain (Figure 1). There are three MKRN proteins (MKRN1, MKRN2, and MKRN3) identified within the human MKRN protein family, with MKRN1 as an ancestral gene of the family [70]. In contrast to ZC3H11A, the MKRN1 protein has been relatively well characterized. The MKRN1 protein functions as an E3 ubiquitin ligase since it has a functional RING finger domain, one of the fundamental features of the E3 ubiquitin ligases [71].
MKRN1 mediates ubiquitination of several substrate proteins, including Fas-associated protein with death domain (FADD), human telomerase reverse transcriptase (hTERT), p14ARF, p21, p53, peroxisome-proliferator-activated receptor γ (PPARγ), and AMP-activated protein kinase (AMPK) [29,[71][72][73][74][75]. Since MKRN1 induces p53, p21, and p14ARF degradation, it is thought to be an important regulator of the cell cycle and apoptosis [76]. In addition to its E3 ubiquitin ligase activity, the MKRN1 protein is an RNA-binding protein. Original findings by Cassar and co-workers showed that MKRN1 is a stress-granule-resident protein that is associated with mRNAs encoding proteins that function during cellular stress [77]. A more recent study has shown that MKRN1 is involved in ribosome-associated quality control of prematurely polyadenylated mRNAs [78]. Based on this study, MKRN1 positioning upstream of mRNA poly(A) tails and MKRN1-mediated degradation of cytoplasmic poly(A)-binding protein (PABPC1) ensure ribosome stalling upstream of the poly(A) sequences. As proposed by the authors, this mechanism blocks translation of erroneous proteins from prematurely polyadenylated mRNAs [78]. Regarding a HAdV infection, the MKRN1 protein was identified as a binding partner of the ubiquitous HAdV histone-like protein VII (pVII) [50]. Surprisingly, this interaction was shown to cause self-ubiquitination of the MKRN1 protein. Further, the MKRN1 protein is efficiently degraded by the host cell proteasome, which overlaps with de novo accumulation of the viral pVII protein during the late phase of virus infection. Additional experiments have shown that transient overexpression of the MKRN1 protein reduces accumulation of the viral capsid proteins, which coincides with a decreased formation of infectious virus particles (R. Inturi, personal communication). In the same study, it was also shown that the amount of MKRN1 protein was remarkably reduced in measles virus (MV) and vesicular stomatitis virus (VSV) infected cells, although the exact molecular mechanisms behind these observations were not revealed [50]. Taken together, this study [50] suggests that MKRN1 may function as a potential anti-viral factor in HAdV-5 infection and that the viral pVII protein induces MKRN1 self-ubiquitination and proteasomal degradation. MKRN1 could be considered a potentially widespread anti-viral protein since it interferes with additional virus infections. For example, MKRN1 can specifically induce ubiquitination and proteasomal degradation of the West Nile virus (WNV) capsid protein. As a consequence, MKRN1 inhibits WNV replication and protects cells against WNV-induced cell death [29]. Similarly, porcine MKRN1 (pMKRN1) has been shown to modulate porcine circovirus type 2 (PCV2) replication. The pMKRN1 protein can induce ubiquitination and proteasomal degradation of the PCV2 capsid protein and thereby reduce virus progeny production [79].

U2AF1 as a Potentially Dispensable Factor in HAdV Late Alternative RNA Splicing

U2AF (U2 small nuclear ribonucleoprotein (snRNP) auxiliary factor) is a splicing factor required for the stable recruitment of U2 snRNP to the 3′ splice site in a pre-mRNA during the early stages of spliceosome assembly [80]. U2AF is a heterodimer consisting of a 65-kDa (U2AF2) and a 35-kDa (U2AF1) subunit. Both U2AF subunits are RNA-binding proteins. U2AF1 contains a putative RNA recognition motif (RRM) flanked by two CCCH-type ZnF motifs (Figure 1).
This putative RRM does not mediate RNA contacts and instead functions as the interaction surface when U2AF1 associates with U2AF2 [81]. The CCCH-type ZnFs appear to mediate the RNA contacts in U2AF1. U2AF2 contains three RRM motifs and an N-terminal RS domain, which is the signature structure of RNA splicing factors. Similarly to the RRM-like motif in U2AF1, the first RRM in U2AF2 does not bind RNA and instead mediates protein-protein contact with splicing factor SF1, which is involved in 3′ splice site recognition. In spliceosome assembly, U2AF2 specifically binds to the pyrimidine tract close to the 3′ splice site, whereas U2AF1 makes contact with the conserved 3′ splice site AG dinucleotide [82,83]. Introns with weak pyrimidine tracts that bind U2AF2 inefficiently require the U2AF1 subunit interaction with the 3′ splice site AG to be functional, the so-called AG-dependent introns. In contrast, 3′ splice sites with strong pyrimidine tracts that bind U2AF2 efficiently can splice without the contribution of the U2AF1 interaction with the 3′ splice site AG, the so-called AG-independent introns [84]. The significance of U2AF in HAdV splicing has been studied in some detail. U2AF, which is an essential splicing factor, is normally localized to nuclear speckles in the G1 phase of the cell cycle. These sites are believed to be storage sites for inactive RNA processing factors. After a HAdV infection, U2AF becomes redistributed from the nuclear speckles to the so-called viral replication centers, which are sites of active viral transcription and RNA processing. The U2AF2 RS domain is essential for the recruitment of U2AF to the viral replication centers [85]. The requirement of U2AF for splicing in the HAdV system has been most extensively studied in vitro, using the major late transcription unit (MLTU) L1 family of mRNAs as a model substrate. In this unit, a common 5′ splice site is spliced to two alternative 3′ splice sites, resulting in the formation of the so-called 52,55K or IIIa mRNAs [86]. The 52,55K 3′ splice site has a consensus-type long polypyrimidine tract that binds U2AF2 efficiently, whereas the IIIa 3′ splice site has a weak sequence context with a short pyrimidine tract. During virus infection, a temporal shift in 3′ splice site choice occurs, resulting in the activation of the IIIa splice site in late virus-infected cells [86]. Previous work has demonstrated that activation of IIIa mRNA splicing is controlled by a 28-nucleotide-long sequence element coinciding with the IIIa 3′ splice site, the so-called virus infection-dependent splicing enhancer (the 3′VDE) [87]. U2AF is an essential splicing factor for 52,55K splicing in nuclear extracts prepared from both uninfected and HAdV-infected cells. The major late first intron, which like the 52,55K 3′ splice site has an extended polypyrimidine tract, is, as expected, U2AF1-independent, since U2AF2 binds efficiently to the extended polypyrimidine tract. The weak IIIa 3′ splice site requires U2AF1 for activity in nuclear extracts (NE) prepared from uninfected cells [51]. This result was anticipated since U2AF1 binding to the 3′ splice site AG is needed for splicing of weak introns. In contrast, U2AF1 appears to be completely dispensable for IIIa splicing in nuclear extracts prepared from HAdV late-infected cells (Ad-NE) [51]. Thus, the IIIa 3′ splice site is transformed from a U2AF1-dependent to a U2AF1-independent intron in Ad-NE.
In fact, the experimental data suggest that the 3′VDE operates through a novel mechanism that appears to be completely U2AF-independent in HAdV-5-infected cells [51]. Collectively, the available data suggest that the cellular spliceosomal machinery undergoes a drastic change in specificity during a HAdV infection. Clearly, the change in the U2AF1 requirement at the late stage of IIIa pre-mRNA splicing is a first signature of this reformation.

Conclusions and Future Perspectives

The CCCH-type ZnF proteins have emerged as essential regulators of RNA metabolism. In particular, their regulatory roles in different virus infections have put them in the spotlight as possible targets for anti-viral therapies. Even if most HAdV infections are self-limiting, fatal infections can occur in immunocompromised hosts and occasionally in healthy children and adults infected with particular HAdV types (e.g., HAdV-7) [88,89]. The observations that the enigmatic ZC3H11A protein promotes HAdV-5 late mRNA export and that a lack of this protein severely inhibits HAdV-5, HIV-1, IAV, and HSV-1 growth point towards a specific role of this protein in different virus lifecycles [30]. Hence, interference with ZC3H11A protein functions may be considered a potential therapeutic intervention point. One possibility here is to use metal-based compounds, which can compete with zinc ions for binding to the ZnF motifs. Indeed, zinc ion replacement with gold, platinum, cobalt, and selenium complexes can disrupt ZnF protein binding to nucleic acids [90]. Further, cisplatin, a well-known platinum-based anti-cancer drug, can selectively bind to and cause structural perturbation of some ZnF motifs [91]. It remains to be tested whether any of the metal-based compounds can alter the biological functions of the ZC3H11A protein. There are still several basic questions that need to be answered about the ZC3H11A protein. For example, how does the ZC3H11A ZnF domain interact with RNA, and what is the contribution of the individual ZnF motifs in this process? This question applies also to other CCCH-type ZnF proteins, such as MKRN1, as only a few of them (e.g., ZAP) have established crystal structures to explain their RNA-binding specificities [38,39]. Further, it will be of interest to understand how different post-translational modifications control ZC3H11A functions in infected cells. In line with that, the ZC3H11A protein is heavily sumoylated in heat shock-treated cells [92]. However, the contribution of this modification in different virus infections has not yet been investigated. In contrast to MKRN1, which establishes a firm complex with the HAdV-5 pVII protein [50], nothing is known about ZC3H11A interference with viral proteins. During the late phase of infection, viral late mRNAs are efficiently exported, whereas the cellular mRNAs tend to accumulate in the nucleus [93]. This process is controlled by two viral proteins, E1B-55K and E4orf6 [93,94]. Since ZC3H11A is involved in mRNA export, it would be of interest to study whether the ZC3H11A protein interacts with the E1B-55K/E4orf6 complex to achieve selective viral mRNA export during the late phase of infection. ZC3H11A appears to be a multifunctional protein (Figure 2). With this in mind, can we assign other biological functions to the ZC3H11A protein, in addition to its role in mRNA export and NF-κB signaling? Given its predominantly nuclear localization [30], it is likely that the protein is also directly involved in different co- and post-transcriptional processes.
The finding that L1-IIIa pre-mRNA splicing becomes U2AF1-independent at late times of infection opens up a possible model for how HAdV remodels the host RNA splicing machinery to selectively process the late viral pre-mRNAs. Thus, it is possible that much of MLTU alternative splicing, like the L1-IIIa pre-mRNA splicing, would work in the absence of U2AF1, creating an environment where host cell splicing, which to a large extent is U2AF1-dependent, would be shut off. Such a mechanism would be equivalent to how HAdV selectively shuts off host cell translation late during infection [60]. Clearly, previous work needs to be expanded to also include an analysis of the global effect of U2AF1 depletion on early and late HAdV pre-mRNA splicing. Taken together, we are confident that the coming years will present us with several exciting studies revealing both the structural and functional characteristics of the CCCH-type ZnF proteins and their interplay with different virus infections.
An Epidemiological Investigation of the Diphtheria Outbreaks Reported in a District of Gujarat

Introduction: Mortality and morbidity due to infectious diseases have declined over the last couple of decades. Diphtheria is one of the infectious diseases that can be prevented by complete immunization. Objective: To understand trends and identify factors affecting the outbreak of diphtheria in Banaskantha district of Gujarat. Method: A retrospective study based on the available case records for the years 2019, 2020 and 2021 (till June). The study was conducted after diphtheria cases were reported in the district. The study was a public health response and intended to provide geographically specific recommendations to the district. The data were recorded from the reported case records and immunization registers and analyzed for the defined variables. Results: Of the 366 cases identified during 2019-2021, almost 74% occurred during 2019, with a 7.7% mortality rate. In total, 48% of cases were in the age group of 5-10 years, with an increasing number of cases during August-December in a specific geographical distribution. Among all the cases, 164 (44.5%) had never taken any vaccine in their lifetime or were unaware of their vaccination status, and 87.9% had not taken the third dose of DPT or pentavalent vaccine, which was statistically associated with mortality. Conclusion: The prevalence of diphtheria cases was high in children who had not taken all three doses of DPT or pentavalent vaccine. These findings show the essential role of immunization, the need to focus on delivering all vaccine doses, and the need to create a customized awareness communication plan.

Introduction: Diphtheria is a severe bacterial infection caused by Corynebacterium diphtheriae, which may involve many body organs if left untreated. The Greek meaning of the name suggests "leather," which points towards the pseudomembrane, the critical feature of the disease [1]. It manifests within two to five days of exposure via contaminated surfaces or air droplets. Symptoms range from mild sore throat and fever with grey or white discoloration of the throat to grave multisystem failure upon release of toxin in the body. Diphtheria was one of the primary causes of mortality among children before the introduction of vaccines [1]. Despite the strengthening of the universal immunization program, diphtheria remains endemic in India. A total of 22,986 diphtheria cases were reported globally during the year 2019. India had the highest proportion, 41.9% (n = 9,622), of cases reported worldwide during 2019 [2,3]. Developed countries have successfully controlled the spread of the disease. However, it remains a concern for developing countries, as most cases were reported from developing nations. Vaccination has proved an effective tool in reducing and preventing diphtheria cases among children aged <15 years. Children with at least three doses of the diphtheria, pertussis (whooping cough), and tetanus (DPT) vaccine have shown approximately 50% efficacy against diphtheria, and those with more than two booster doses have shown 91% efficacy [4]. India had overall 87% coverage for the third dose of the DPT vaccine, with 445 of 704 districts (63.2%) reporting coverage at or above 80% [3,5]. Only 61% of children in India aged 12-23 months are fully vaccinated, even after more than 40 years of vaccination campaigns [6].
Gujarat shows 86.1% coverage for the third dose of pentavalent vaccine, and Banaskantha, a district of Gujarat, has 64.1% coverage [7,8]. Multiple factors play an essential role in diphtheria cases in the community, i.e., vaccination coverage, poverty, education, hygiene and cleanliness, migration, and social stigma and beliefs about vaccination. Border areas of states are commonly affected by diphtheria cases due to frequent to-and-fro movement and loss to follow-up during vaccine campaigns [1]. The present study was conducted to understand the trends of diphtheria cases in Banaskantha district of Gujarat. Factors affecting the outbreak of diphtheria were also identified. District-specific recommendations are provided based on the available information, which will be helpful in preparing a prevention plan in the future.

Method: The present record-based retrospective study was conducted on the 366 confirmed diphtheria patients reported from a district of Gujarat. The records were taken from the line listing of cases, with selected variables screened and identified by health staff during surveillance of vaccine-preventable diseases. In June 2021, a team was formed at the state health department with a group of experts, and the investigation was carried out with the sole objective of finding epidemiological linkages and recommending necessary actions to control the outbreaks. The investigation team coordinated with the health department of Banaskantha district and requested all available data on diphtheria cases of previous years. The team was provided with the line list of patients identified during the diphtheria outbreaks in Banaskantha district in 2019, 2020 and 2021 (till June). Variables like the number of cases, mortality rate, age group-wise bifurcation, month-wise number of cases reported, vaccination status, and correlation with mortality were identified and analyzed. A geographical year-wise analysis was also carried out to identify the distribution of cases.

Case Definition: Clinical identification: symptoms characterized by fever, sore throat, headache and the particular greyish or whitish discoloration in the throat [2]. Case Classification: As per the available district records, all field-level staff were trained for primary screening and identification of diphtheria cases. All private medical practitioners (including AYUSH) were sensitized to strengthen the liaison between private and public health facilities for early referral and management. Cases with severe illness were referred to Sub District Hospital (SDH) Tharad, and those in need of a ventilator were referred to a tertiary-level medical college. The training and sensitization activities were carried out in February 2019 during the roll-out of the surveillance system for diphtheria, pertussis and neonatal tetanus cases by the district health officers. Data Collection and Analysis: The yearly reports of the disease outbreak were the source of data, and due efforts were made to conceal the identity of patients. The data collection was carried out from secondary data in the case record registers, which contain demographic and programmatic information and the clinical parameters of patients filled in by health staff. Inclusion Criteria: All cases registered by the district health staff. Exclusion Criteria: Cases missed by the health staff to identify and register, and cases with inadequate information.
Ethical Permission: This study was conducted as an emergency response to the outbreak and was designed to inform the public health response. The investigation was aimed at achieving public good (beneficence) and collective welfare (solidarity); no harm was done to any individual (non-maleficence); it was fair, honest, and transparent (accountability and transparency); and participants' data were de-identified before analysis (confidentiality). The trend of identification of diphtheria cases in Banaskantha district was described based on the data collected from the district health team. Statistical analysis was carried out for proportions, and the chi-square test was used to identify the relationship between cases, immunization status and associated variables, using Statistical Package for the Social Sciences (SPSS) version 16.

Results: Data were available for the 366 diphtheria cases identified during the last three years in Banaskantha (BK) district. The distribution showed that the maximum number of cases was identified during 2019 (n = 270, 73.77%), followed by 2020 (n = 78, 21.31%) and 2021 (n = 18, 4.92%). In total, 33 (9.01%) deaths took place: 20 (60.6%) in 2019, 10 (30.3%) in 2020 and 3 (9.09%) in 2021. Analyzing the age-wise distribution of identified cases, around half of the children (47.81%) were in the age group of 5-10 years, followed by 10-16 years (27.87%), 2-5 years (13.66%), <2 years (6.56%) and >16 years (4.10%) (Table 1). Most of the cases were reported during the September-December period; the last cases in the series were observed in March 2021, with three cases reported. Table 2 shows the variables: gender; high-risk establishments such as brick kilns, nomad settlements, farm labor sites and construction sites; recent history of travel in the last month; and vaccination status, which was statistically associated with the reported diphtheria deaths. Out of the 366 reported cases, 70 (19.1%) had not received any vaccine during their lifetime, and 94 (25.7%) were not aware of their complete immunization status. Of the 70 cases who had not taken any vaccine, 15 (21.4%) died, and this was associated with mortality due to diphtheria. The district immunization records showed that the third dose of diphtheria toxoid-containing vaccine was associated with mortality: a total of 12 (3.3%) cases had taken the third dose of the vaccine, and 29 (87.9%) of the reported deaths occurred in cases who had not taken it. The association was statistically significant, with an adjusted odds ratio of 5.6 (95% CI 1.59-19.73). There was no history of booster doses at 10 and 16 years of age among the reported cases.

Discussion: The current study suggests that most of the cases were reported during 2019, with a subsequent decrease over time. Mortality was also high during the same year. Almost half of the cases were reported in the age group of 5-10 years, which shows a shift in the prevalence of diphtheria cases from under 5 years to 5-10 years. Similarly, studies conducted in central India and Indonesia reported 55.32% of cases in the 5-12 years age group and 40.22% in the 5-9 years age group [1,10]. The majority of cases (76%) in the study were geographically reported from the blocks at the district boundary, with to-and-fro migration to other states. A study in Europe has also highlighted that migration from epidemic areas by unvaccinated people leads to the spread of diphtheria across different geographical regions [9].
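As a check on the odds ratio reported above, the sketch below recomputes it, together with a Wald 95% confidence interval, from a 2x2 table. The cell counts are reconstructed from the percentages quoted in the results (12 cases with a third dose, of whom 4 died since 29 of the 33 deaths occurred without a third dose; 354 cases without a third dose, of whom 29 died); treat this table as an inference rather than the authors' published data.

```python
from math import log, exp, sqrt

# Reconstructed 2x2 table (assumption inferred from the reported percentages):
# 366 cases, 12 with a third DPT/pentavalent dose; 33 deaths, 29 of them
# among cases without a third dose, leaving 4 among those with one.
died_dpt3, survived_dpt3 = 4, 8        # third dose taken
died_no3, survived_no3 = 29, 325       # third dose not taken

odds_ratio = (died_dpt3 * survived_no3) / (survived_dpt3 * died_no3)

# Wald 95% confidence interval on the log-odds scale
se = sqrt(1/died_dpt3 + 1/survived_dpt3 + 1/died_no3 + 1/survived_no3)
lo = exp(log(odds_ratio) - 1.96 * se)
hi = exp(log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# -> OR = 5.60, 95% CI (1.59, 19.73)
```

That this reconstructed table reproduces both the odds ratio (5.6) and the confidence interval (1.59-19.73) quoted in the text to two decimals suggests the inferred counts are consistent with the authors' calculation, although the direction of the association deserves careful reading against Table 2.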
During a particular period, a rise in diphtheria cases may form a basis for assuming several causes related to migration, epidemiological dynamics, etc. The present study has shown an increase in the number of cases during the August to December months. Studies in Rajkot and Indonesia have also reported a rise in diphtheria cases from August to October and from September to December, respectively, due to several environmental factors and seasonal changes making people more susceptible to infectious disease [10,12]. The talukas that reported more cases over the period bordered Rajasthan State. The socio-cultural aspects and care-seeking behavior of these reported pockets should be the next level of assessment, which can guide the health team in a locally customized awareness campaign. Immunization plays an influential role in preventing diphtheria in different countries, especially in developing countries like India. The study has shown that only about half of the identified cases (55%) were aware of their immunization status or of any vaccine taken during their lifetime. Three-fourths (77%) of cases had never taken the DPT 3 or Penta 3 vaccine. A study conducted in Hyderabad reported vaccine efficacy of 49% (95% CI 0-80) among the population with three DPT doses and 91% (95% CI 68-98) with up to five DPT doses, as compared with two DPT doses (0%, 95% CI 0-63) [4]. A systematic review has suggested a 60% reduction in transmission of diphtheria among people vaccinated with DPT 3 and a 28% interruption of transmission through vaccination in outbreak settings [11]. In the study at Rajkot, 65% of identified cases had not received a single dose of DPT, revealing the importance of immunization status for reducing diphtheria cases [12]. The study had a few limitations, as it depended on the records available with the district health team. A more intensive surveillance activity with a proper data collection format would yield better results for further analysis. However, this study showed that the characteristics of the identified cases are similar to the findings of other studies conducted in different parts of the world.

Conclusion: The study concludes that most of the diphtheria cases were identified in 2019 in Banaskantha district. Diphtheria was more commonly identified in the age group of 5-10 years, during the August to September months. The deaths were reported among cases with no history of completed DPT or pentavalent vaccination, in blocks located at the borders of Banaskantha district with frequent migration. These findings may be significant in designing a strategy to cover the maximum number of children with the diphtheria vaccine. Also, a structured database must be maintained for each individual child regarding their vaccination status across their lifecycle, to monitor dropouts and left-outs.

Recommendations: The findings suggest that active screening and case finding of diphtheria in the community should be ensured through meticulous follow-up of the national guideline on surveillance of diphtheria, pertussis and neonatal tetanus. Immunization is essential for reducing diphtheria cases, so efforts are to be made to cover the maximum number of children, especially in remote areas and among those who are frequently missed due to system-side or beneficiary-side reasons. A catch-up campaign should be scheduled for dropout and left-out children to ensure at least three doses and the booster doses of DPT or pentavalent vaccine.
The staff should be provided refresher training on the universal immunization program (UIP) guideline, and the district has to ensure that, as per the guideline norms, a line list of all eligible children up to 16 years of age for vaccination under the UIP is maintained with their immunization status for effective vaccination coverage. Concentrated efforts are to be made to cover migratory population sites and high-risk groups (settlements/hamlets/hard-to-reach areas) for diphtheria and other vaccine-preventable diseases. A locally customized awareness campaign should be driven for those who do not have a clear vaccination history, and social mobilization should be pushed to ensure complete immunization as per the national immunization schedule norms.

Declaration: Funding: Nil. Conflict of Interest: Nil.
Spatially incoherent illumination interferometry: a PSF almost insensitive to aberrations

We show that with spatially incoherent illumination, the point spread function width of an imaging interferometer like that used in full-field optical coherence tomography (FFOCT) is almost insensitive to aberrations, which mostly induce a reduction of the signal level without broadening. This is demonstrated by comparison with traditional scanning OCT and wide-field OCT with spatially coherent illumination. Theoretical analysis, numerical calculation as well as experimental results are provided to show this specific merit of incoherent illumination in full-field OCT. To the best of our knowledge, this is the first time that such a result has been demonstrated.

Aberrations can degrade the performance of optical imaging systems. This issue is particularly crucial when imaging biological samples, since scattering media or multi-scale aberrating structures usually hinder the objects of interest. Aberrations are known to blur optical images by perturbing the wavefronts; more precisely, the distorted optical images are obtained by amplitude or intensity convolution of the diffraction-limited images with the aberrated point spread function (PSF). Depending on the nature of the illumination, spatially coherent or incoherent, intensity or amplitude has to be considered [1,2]. In order to reduce or avoid blurring, adaptive optics (AO), which was originally proposed and developed for astronomical imaging [3,4], is usually used to correct the perturbed wavefront, thus achieving a diffraction-limited PSF during imaging. Optical interferometry techniques have been widely used for imaging. Among these techniques, the use of optical coherence tomography (OCT) has increased dramatically in various research and clinical studies since its development. Traditional scanning OCT selects ballistic (more precisely, singly backscattered) photons through scattering media based on a broadband light source and coherent cross-correlation detection [5]. Both longitudinal [6,7] and en face scanning [8,9] OCTs use spatially coherent illumination and rely on point-by-point scanning to acquire three-dimensional reflectivity (backscattering) images. Parallel OCT systems that take images in planes perpendicular to the optical axis have also been developed with specific detectors and methods, using either spatially coherent illumination like wide-field OCT [10][11][12] or spatially incoherent illumination like full-field OCT [13]. Higher resolutions are achieved in these systems as en face acquisition allows using larger numerical aperture optics. Wide-field OCT systems with powerful laser sources or superluminescent diodes give high sensitivity, but the image can be significantly degraded by coherent cross-talk [14].
Full-field OCTs use thermal lamps or light-emitting diodes for high-resolution, highly parallel image acquisition but can suffer from low power per spatial mode [15]. In this paper, we show that with spatially incoherent illumination, the resolution of full-field OCT is almost insensitive to aberrations. Instead of considering the PSF of a classical imaging system such as a microscope, we will pay attention to the system PSF of interferometric imaging systems, for which an undistorted wavefront from a reference beam interferes with the distorted wavefront of the object beam. More precisely, we will consider the cases of scanning OCT with spatially coherent illumination, wide-field OCT with spatially coherent illumination, and full-field OCT with spatially incoherent illumination; surprisingly, we found that in full-field OCT with incoherent illumination the system PSF width is almost independent of the aberrations and that only its amplitude varies. In order to stick to the PSF definition, we will consider a point scatterer as our object and analyze the response of the system to such an object. Suppose the single point scatterer is at position $(x', y') = (x_0, y_0)$, the sample arm PSF of the interferometer is $h_s$ and the reference arm PSF of the interferometer is $h_r$. For simplification, we ignore all constant factors in the following expressions. In all three cases, the sample field at the detection plane would be

$E_s = h_s(x' - x_0, y' - y_0)$.  (1)

In the case of traditional scanning OCT, the reference field at each scanning position at the detection plane would be $h_r(x - x', y - y')$. Since coherent illumination is used, interference happens at each scanning position, and the final interference signal is a sum of the interference term across the scanned field, resulting in

$\langle E_s E_r \rangle = \iint h_s(x' - x_0, y' - y_0)\, h_r(x - x', y - y')\, dx'\, dy'$.  (2)

Thus, the system PSF of the scanning OCT system is a convolution of the sample arm PSF and the reference arm PSF, as shown in Fig. 1(a-c). When aberrations exist, the convolution of the aberrated sample arm PSF with the diffraction-limited reference arm PSF results in an aberrated system PSF for the scanning OCT systems (Fig. 1(d-f)). In the case of wide-field OCT, as coherent sources are used, the optical beams are typically broadened by lenses to form parallel illumination in both arms of the interferometer [12]. Thus plane waves impinge on both the object and the reference mirror. In the sample arm, the point scatterer sends back a spherical wave that is focused on the camera plane and can be described by expression (1). For the reference arm, considered as homogeneous illumination, a plane wave is reflected back by the reference mirror and forms a uniform field at the camera plane. Thus the interference between the two arms would be

$\langle E_s E_r \rangle = h_s(x' - x_0, y' - y_0)$,  (3)

as constant factors are ignored. So the system PSF is actually defined by the sample PSF alone, as illustrated in Fig. 1(g-i). When aberrations distort the backscattered wavefront of the sample arm, the aberrated sample arm PSF interferes with a uniform reference field, resulting in an aberrated system PSF for the wide-field OCT systems (Fig. 1(j-l)). When we deal with the case of full-field OCT with spatially incoherent illumination, we have to go back to the basic definition of the spatial coherence of the beams that impinge on the reference arm as well as the sample arm of the interferometer.
Let's consider a circular uniform incoherent source located in the image focal plane of a microscope objective with a focal length of $f_0$, which could be obtained with a standard Koehler illumination. The source illuminates the field of view of the microscope objective. A first step is to determine the spatial coherence length in the field of view. The Van Cittert-Zernike theorem states that the coherence angle is given by the Fourier transform of the source luminance [16]. If the pupil diameter is $D$, the coherence angle is defined by $\sin\theta = \lambda/D$. At the level of the focal plane, this corresponds to a zone of radius $\rho = \lambda f_0/D$, or $\rho = \lambda/(2\,\mathrm{NA})$. We can say that, in the absence of aberrations, the focal plane is "paved" by small coherent areas (CAs) of radius $\rho$. This radius is also the radius of the diffraction spot that limits the resolution of the microscope objective in the absence of aberrations. When going from one diffraction spot to the next adjacent diffraction spot, the incoherent plane waves impinging on the objective are separated by $\pm\lambda$ at the edges of the pupil. In the absence of aberrations, for an interferometer like full-field OCT, the single point scatterer at the object plane of the sample arm lies in a single CA (Fig. 2(a)), and the backscattered signal will only interfere with the signal reflected from the corresponding CA in the reference arm (Fig. 2(c)). Note that since the size of a CA is the same as that of the diffraction spot, the signal from one CA at the camera plane can be expressed as the reference PSF. Thus the interference would be

$\langle E_s E_r \rangle = h_s(x' - x_0, y' - y_0)\, h_r(x' - x_0, y' - y_0)$.  (4)

The system PSF is actually the dot product of the sample PSF and the reference PSF, as shown in Fig. 1(m-o). The overall signal reflected from the reference mirror at the camera is still homogeneous, but we display it by combining multiple reference PSFs reflected from different CAs that carry different spatial modes. When aberrations exist in the sample arm, the various CAs in the object plane will have larger sizes and will overlap each other (Fig. 2(b)). As a result, the backscattered signal of the single point scatterer in the sample arm contains not only the spatial mode of the targeted focus CA but also the modes of the overlapping adjacent CAs. Thus, with aberrations that create a broadened sample PSF, interference will happen not only with the reference beam corresponding to the targeted CA, but also with the beams corresponding to the adjacent CAs. What we want to demonstrate, and to illustrate by an experiment, is that the interference signal with the targeted focus CA is much stronger than that with the adjacent CAs, resulting in an "interference" PSF that is much thinner than the classical broadened sample PSF. At the level of the image plane, the interference between the aberrated sample beam and the non-aberrated reference beam is only constructive in a zone limited by the spatial coherence of the reference beam. In order to be more quantitative, we compare the two contributions using the Strehl ratio approach. The "best focus" signal intensity damping compared to the diffraction-limited PSF is given (for small aberrations) by the Strehl ratio, proportional to the peak aberrated image intensity,

$S = e^{-\sigma^2}$,

where $\sigma$ is the root mean square deviation of the wavefront phase $\phi(x)$ over the aperture, i.e., $\sigma^2$ is the variance of $\phi(x)$.
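Before applying the Strehl formula, the following sketch illustrates numerically the contrast between the convolution of Eq. (2) (scanning OCT) and the product of Eq. (4) (full-field OCT) when a schematic sample PSF is broadened by aberrations. The Gaussian shapes and the 1/broadening peak scaling are illustrative assumptions, not the actual pupil-diffraction model of the paper.

```python
import numpy as np

x = np.linspace(-20, 20, 4001)  # detector coordinate, arbitrary units

def gaussian_psf(width, peak=1.0):
    return peak * np.exp(-x**2 / (2.0 * width**2))

def fwhm(profile):
    above = x[profile >= 0.5 * profile.max()]
    return above[-1] - above[0]

h_r = gaussian_psf(1.0)                        # diffraction-limited reference PSF
for b in (1, 3, 5):                            # aberration broadening factor
    h_s = gaussian_psf(b, peak=1.0 / b)        # broadened, peak-damped sample PSF
    conv = np.convolve(h_s, h_r, mode="same")  # scanning OCT system PSF, Eq. (2)
    prod = h_s * h_r                           # full-field OCT system PSF, Eq. (4)
    print(f"broadening x{b}: scanning FWHM = {fwhm(conv):5.2f}, "
          f"FFOCT FWHM = {fwhm(prod):4.2f}, FFOCT peak = {prod.max():.2f}")
```

For a 5x broadened sample PSF, the convolution width grows roughly fivefold, while the product width stays close to the reference-PSF width with only its peak reduced, which is the behavior sketched in Fig. 1(p-r).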
In our case, suppose $\phi$ is the phase of the interference wavefront between the sample signal and the reference signal corresponding to the targeted focus CA; then the phase of the interference wavefront with the reference signal corresponding to an adjacent CA is $\phi + \phi_1$, where $\phi_1$ is a phase that varies linearly from one edge of the pupil to the other in the range of $\pm 2\pi$. A comparison of the interference signal with the targeted CA and with an adjacent CA,

$S = e^{-(\sigma(\phi))^2} \gg S_1 = e^{-(\sigma(\phi + \phi_1))^2}$,  (5)

shows that the influence of off-axis CAs is negligible. Let's consider various aberrations leading to a significant Strehl ratio of 0.03; numerical calculation results are shown in Fig. 3. For defocus, the intensity of the interference with adjacent CAs is damped by about 740 times compared with the interference with the targeted focus CA, corresponding to a signal or amplitude damping of 27.1 times. The amplitude damping ratio is calculated as

$R = \sqrt{S/S_1}$,  (6)

since amplitude, instead of intensity, is obtained in the full-field OCT signal. It is easy to prove that this value is fixed for all the axisymmetric aberrations like defocus, astigmatism, spherical aberration, etc. For coma with a Strehl ratio of 0.03, however, the simulated amplitude damping ratio is 8.2-86.1 times, depending on the spatial position of the adjacent CA. In other words, the interference signal is severely damped going from the targeted CA to the adjacent CAs. Thus, in the camera plane, as shown in Fig. 1(p-r), the interference signal results in a dot product of the aberrated sample PSF with the reference PSF corresponding to the targeted focus CA, since the interference with the reference PSFs corresponding to the adjacent CAs is significantly degraded. This matches equation (4): as in the non-aberrated situation, the system PSF is given by the dot product of the sample PSF and the reference PSF. For a distorted (mostly broadened) sample PSF, its interference with the reference channel conserves the main feature of an unperturbed PSF, with only a reduction in the FFOCT signal level. We say "almost" for the resolution conservation because there are situations in which the product of the reference arm PSF with an off-center aberrated sample arm PSF may result in a loss of some sharpness due to the high side lobes of the Bessel PSF function. With the commercial LLtech full-field OCT system Light-CT scanner [17], we also conducted experiments with gold nanoparticles to check how the system PSF is affected by inducing different levels of defocus. A solution of gold nanoparticles of 40 nm radius was diluted and dried on a coverslip so that single particles could be imaged. By moving the sample stage, 10 µm, 20 µm and 30 µm of defocus were induced on the targeted particle. The length of the reference arm was shifted by the same value in order to match the coherence planes of the two arms for imaging. Theoretically, the system resolution was 1.5 µm, corresponding to about 2.5 pixels on the camera. By adding 10 µm, 20 µm and 30 µm of defocus, the sample PSF would be broadened by 2.3, 4.6 and 6.9 times, respectively. Experimental results are shown in Fig. 4. Full-field OCT images (Fig. 4(a-d)) and the corresponding signal profiles (Fig. 4(e-h)) of the same nanoparticle are displayed. It is obvious that with more defocus added, the signal level of the gold nanoparticle is reduced, but the normalized signal profile graph (Fig.
4(i)) shows clearly that the size of the particle, which corresponds to the system PSF width, remains the same in all situations. In conclusion, we have shown, for the first time to the best of our knowledge, that in spatially incoherent illumination interferometry like full-field OCT, the system PSF width is almost insensitive to aberrations, with only a reduction of the signal amplitude. This is demonstrated by a simple theoretical analysis as well as numerical simulations for different aberrations, and confirmed by experiments with a full-field OCT system. More precisely, the aberration-induced reduction in signal is roughly proportional to the square root of the Strehl ratio. Let us consider the realistic case of a diffraction-limited imaging system with a PSF width of 2 µm, which allows, for instance, resolving the cones in retinal imaging. With a Strehl ratio of 0.1, which is considered to give a low-quality image, the PSF would be broadened to about 6 µm, which would mask the cell structures. But in a full-field OCT system, the same Strehl ratio would only reduce the signal by a factor of 3.1 while keeping the image sharpness. As we intend to apply a full-field OCT system with adaptive optics to eye examination, this specific merit of spatially incoherent illumination could simplify the in vivo observation of the eye. We think that we could restrict the aberration corrections to the main aberrations (e.g. focus and astigmatism), which will improve the signal-to-noise ratio, and skip the high-order aberrations. This would also increase the correction speed, thus reducing the imaging time. A large number of experiments on USAF resolution targets and biological samples with induced or natural aberrations have confirmed that the resolution is maintained and only the signal-to-noise ratio is degraded. These results will be submitted soon.
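As a numeric cross-check of the figures quoted in the conclusion, here is a minimal sketch assuming the peak-intensity Strehl definition and a rough two-dimensional energy-spreading model for the classical PSF (width scaling as $1/\sqrt{S}$, FFOCT amplitude scaling as $\sqrt{S}$). Both scalings are back-of-the-envelope assumptions, chosen because they reproduce the 2 µm to ~6 µm broadening and the 3.1x signal reduction mentioned above, not the paper's exact simulation.

```python
import math

def classical_psf_width(width_dl_um, strehl):
    # 2-D energy spreading: peak intensity ~ 1/width^2, so width ~ w0 / sqrt(S)
    return width_dl_um / math.sqrt(strehl)

def ffoct_signal_drop(strehl):
    # FFOCT measures amplitude, which scales like sqrt(S)
    return 1.0 / math.sqrt(strehl)

for S in (0.3, 0.1, 0.03):
    print(f"S = {S:5.2f}: classical PSF ~ {classical_psf_width(2.0, S):4.1f} um, "
          f"FFOCT signal reduced ~{ffoct_signal_drop(S):4.1f}x, width preserved")
```

For S = 0.1 this gives a classical PSF of about 6.3 µm and an FFOCT signal reduction of about 3.2x, consistent with the values quoted in the conclusion.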
Analysis of abrasive wear behavior of PTFE composite using Taguchi’s technique Polymeric composites are widely used in structural, aerospace, and automobile sectors due to their good combination of high specific strength and specific modulus. These two main characteristics make these materials attractive compared to conventional materials like metals or alloys. Some of their typical benefits include easy processing, corrosion resistance, low friction, and damping of noise and vibrations. The wear behavior of polytetrafluoroethylene (PTFE) and its composites, including glass-filled and carbon-filled composites, is investigated using a pin-on-disc configuration. A plan of experiments based on the Taguchi technique is carried out to acquire data in a controlled way. An orthogonal array (L9) and the analysis of variance are employed to investigate the influence of process parameters on the wear of these composites. Volume loss increased with abrasive size, load, and distance. Furthermore, the specific wear rate decreased with increasing grit size, load, and sliding distance, and slightly with compressive strength. The optimal process parameters, which minimize the volume loss, were the factor combination of L1, G3, D1, and C3. Confirmation experiments were conducted to verify the optimal testing parameters. It was found that in terms of volume loss, there was a good agreement between the estimated and the experimental values. PUBLIC INTEREST STATEMENT Due to their good combination of high specific strength and specific modulus, polymer composites (PMCs) are widely used in structural, aerospace, and automobile sectors, for gears, cams, wheels, brakes, clutches, bearings, etc., and also in other engineering applications like conveyor aids. Polymer composites are also subjected to abrasive wear in many applications; most abrasive wear problems arise in chute liners in power plants, mining, and earth-moving equipment. The wear behavior of PTFE and its composites, including glass-filled and carbon-filled composites, is investigated against SiC abrasives. The Taguchi technique is used to acquire data in a controlled way because this method eliminates the need for repeated experiments and thus saves time, material, and cost. An orthogonal array (L9) and analysis of variance are employed to investigate the influence of process parameters on the wear of the composites, and the optimal process parameters, which minimize the volume loss, are determined using this technique. Earlier work examined polytetrafluoroethylene (PTFE) and its various composites in abrasive wear under dry and multipass conditions against SiC paper; the polymers without fillers had better abrasive wear resistance than their composites (Suresha, Chandramohan, Siddaramaiah, & Samapthkumaran, 2007). Suresha and Kumar (2009) studied the three-body abrasive wear behavior of particulate-filled PA66/PP composites at different conditions. It was indicated that the addition of nanoclay/short carbon fiber in PA66/PP had a significant influence on wear under varied abrading distances/loads. Further, it was found that nanoclay-filled PA66/PP composites exhibited a lower wear rate compared to short carbon fiber-filled PA66/PP composites.
Liu, Ren, Arnell, and Tong (1999) concluded that the applied load was the main parameter and that the abrasive wear resistance improvement of filler-reinforced UHMWPE polymer was attributed to the combination of hard particles, which prevent the formation of deep, wide, and continuous furrows. Ravi Kumar, Suresha, and Venkataramareddy (2009) revealed that the wear volume loss increased with increase in abrading distance/abrasive particle size for the two-body abrasive wear behavior of glass/carbon fabric reinforced vinyl ester composites. However, the specific wear rate decreased with increase in abrading distance and decrease in abrasive particle size. The results showed that the highest specific wear rate was for glass fabric reinforced vinyl ester composite, with a value of 10.89 × 10⁻¹¹ m³/N m, and the lowest wear rate was for carbon fabric reinforced vinyl ester composite, with a value of 4.02 × 10⁻¹¹ m³/N m. Yousif, Nirmal, and Wong (2010) found higher values of frictional coefficient for the treated betel nut fiber reinforced epoxy (T-BFRE) composite when it was subjected to coarse sand. Besides, higher weight loss was noticed at high sliding velocities. Recently, some attempts have been made to study the wear anisotropy of natural fibers like cotton (Eleiche & Amin, 1986), bamboo (Chand & Dwivedi, 2007b; Chand, Dwivedi, & Acharya, 2007), sisal (Chand, Naik, & Neogi, 2000), and jute (Tong, Ren, Li, & Chen, 1995). Raju, Suresha, and Swamy (2012) investigated the abrasive wear behavior of SiO₂-filled glass fabric reinforced epoxy (G-E) composites containing 5, 7.5, and 10 wt.%. The results showed that as the filler loading increased, the wear volume loss decreased, while it increased with increasing abrading distance (Patnaik, Satapathy, & Biswas, 2010). The wear behavior of polymer composites indicated that tribofilms were formed on the counterface surface (Bahadur & Sunkara, 2005; Bahadur, Zhang, & Anderegg, 1997; Vande Voort & Bahadur, 1995). Apart from experimental studies, a number of models that attempt to relate the abrasive wear resistance of polymer composites have been proposed. The three-body abrasive wear behavior of carbon fabric reinforced epoxy composite filled with graphite filler was investigated using Taguchi analysis by Sudarshan, Varadarajan, and Rajendra (2013). They reported that the applied load had the major impact on abrasive wear, followed by abrading distance and filler content. A similar result on glass-epoxy polymer composites with SiC and graphite particles as secondary fillers was obtained under dry conditions (Basavarajappa, Arun, & Davim, 2009). Later on, the effect of filler material on the three-body abrasive wear behavior of glass-epoxy composites was investigated by Basavarajappa, Joshi, Arun, Kumar, and Kuma (2010) using an L9 orthogonal array and analysis of variance (ANOVA). The results show that the abrading distance has more effect on the wear compared to the other parameters, and the filler material (SiC) contributes significantly to the wear resistance of the G-E composites. Sahin (2005) developed a weight loss model of aluminum alloy composites with 10 wt.% SiC particles using the Taguchi method, reporting that the abrasive grain size was the major parameter affecting the abrasive wear, followed by the reinforcement size.
Chauhan, Kumara, Singh, and Kumar (2010) concluded that the sliding wear of glass fiber reinforced vinyl ester composites filled with fly ash particulates was affected by the (pv) factor and filler content, whereas the effect of sliding distance was insignificant. The effect of filler weight fraction, normal load, and sliding distance on the abrasive wear behavior of glass-epoxy composite showed that among the control parameters, sliding distance had the highest statistical influence on the abrasive wear of the composites, followed by normal load and filler content (Sudarshan, 2013). It was also found that the specific wear rate for all the vinylester composites decreases with the sliding distance and after a certain duration attains an approximately steady-state value (Chauhan & Thakur, 2013). Materials The tribological behavior of polymer-based composite materials sliding against SiC paper on hardened steel under dry conditions is studied using a unidirectional pin-on-disk tribometer. For the glass-filled polymer composites, E-glass milled fibers with a nominal diameter of 13 μm, a nominal length of 0.8 mm, and an aspect ratio of at least 10 were used, while carbon, which is amorphous petroleum coke with a particle size less than 75 μm and a purity of 99% C, was utilized for the carbon-filled composites. In this study, three polymeric composite (PMC) materials are tested, namely: PTFE, known under the trademark Teflon; carbon-filled composite (C25), including 25 wt.% C and 75 wt.% PTFE; and glass-filled composite (G15), including 15 wt.% glass with the rest PTFE. The characteristics of the polymer-based composites are shown in Table 1. Polymer Chemical Industry Ltd. (Polikim A.Ş., Gebze/Turkiye) in Turkey provided the materials in the form of rods. The molded rods are about Ø15-Ø425 mm in diameter and 100, 150, and 200 mm in length. Experimental design An orthogonal array and ANOVA are applied to investigate the influence of the process parameters on the wear behavior of the composites. The Taguchi design-of-experiment approach eliminates the need for repeated experiments and thus saves time, material, and cost. The Taguchi approach identifies not only the significant control factors, but also their interactions influencing the wear rate predominantly. The most important stage in the design of experiment lies in the selection of the control factors. In the Taguchi method, the experimental results are analyzed: (1) to establish the best or optimum condition for a product/process, (2) to estimate the contribution of individual factors, and (3) to estimate the response under the optimum conditions. The limitation of this method is the need for timing with respect to product development: the technique can only be effective when applied early in the design of the product/process; otherwise, it cannot be cost effective. This experiment specifies four principal wear testing conditions, namely the applied load (L), grit size (G), sliding distance (D), and compressive strength (C) of the tested materials, as the process parameters. Codes and levels of the control parameters are shown in Table 2. This table shows that the experimental plan has three levels. A standard Taguchi experimental plan with notation L9 (3⁴) is chosen, as shown in Table 3. Each combination of experiments is repeated twice to acquire a more accurate result. In the Taguchi method, the experimental results are transformed into a signal-to-noise (S/N) ratio. There are several S/N ratios available depending on the type of characteristic.
The S/N ratio for minimum wear rate comes under the smaller-the-better characteristic, which can be calculated as a logarithmic transformation of the loss function by the equation: S/N = −10 log₁₀ [(1/n) Σᵢ₌₁ⁿ yᵢ²] (1) where "n" is the number of observations and "y" is the observed data. ANOVA is performed using the S/N ratio. The objective of ANOVA is to evaluate the significance of the testing parameters and their interactions on the wear performance of the polymers. If some testing parameters do not have a considerable impact on wear, they can be kept within a suitable range for the test and can be excluded in building future prediction and optimization models. The percentage contribution of variance can be calculated through ANOVA. In an ANOVA table, there is a p-value for each independent parameter in the model, which is used to test the significance of each parameter and of the interactions between parameters. The smaller the p-value, the greater the significance of the corresponding factor/interaction. In conjunction with an ANOVA, main effect plots are used to examine differences among level means for one or more factors. When the effect of one factor depends on the level of another factor, interaction plots are used. A design factor with a large difference in the signal-to-noise ratio from one factor setting to another indicates that the factor or design parameter is a significant contributor to the performance characteristic. The final step in the design-of-experiment approach is to predict and verify the values arrived at for the optimal combination of control factor levels. Wear test The experiments are carried out using polymer-based composites in a pin-on-steel-disk configuration in accordance with the ASTM standard G99 (Figure 2). The countersurface material for the wear testing is a steel disk 160 mm in diameter by 12 mm thick, which is heat-treated to give a surface hardness of 59-63 RC. This is ground to a surface finish of approximately 0.15 μm centerline average. The composite bars are machined into small cylindrical shapes on a lathe for the pin-on-disk wear testing. The samples are loaded against the SiC abrasives fixed on the hardened steel disk with the help of a cantilever mechanism. The pin is then mounted in a steel holder in the wear machine so that it is held firmly perpendicular to the flat surface of the rotating counter disk. Specimens of 6.5 mm in diameter are tested under different loads against smooth hardened steel. The wear tests are carried out at a sliding speed of 0.8 m s⁻¹. The experiments are carried out at normal loads of 5, 10, and 20 N, and the abrading distances chosen are 45, 90, and 120 m. The grit size is about 400, 800, and 1200 mesh during the test. The wear is measured by the loss in weight. The wear pin is cleaned in acetone prior to and after the wear tests, and then weighed on a microbalance with 0.1 mg sensitivity. Each test is performed on a new track of the disk. The wear rate is calculated from the measured mass loss, the density, and the known sliding distance and load. The specific wear rate (Ks) is then expressed on a volume loss basis (Chand & Dwivedi, 2007b): Ks = M/(ρ·Fₙ·D) (2) where M is the mass loss in the test duration (g); ρ is the density of the composite (g/cm³); Fₙ is the applied normal load (N); and D is the sliding distance (m). Two replicates are carried out for each material and results are averaged over the two test runs. Table 1 shows some properties of the polymer composites.
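For concreteness, the two working formulas above can be expressed in a few lines of Python. This is only an illustrative sketch: the helper names and the sample numbers are ours, not the paper's (the actual inputs would come from Table 4).

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi S/N ratio (dB), smaller-the-better, Eq. (1):
    S/N = -10*log10((1/n) * sum(y_i**2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

def specific_wear_rate(mass_loss_g, density_g_cm3, load_N, distance_m):
    """Specific wear rate Ks in mm^3/(N m), Eq. (2): Ks = M/(rho*Fn*D),
    with the g/cm^3 -> g/mm^3 density conversion folded in."""
    volume_loss_mm3 = mass_loss_g / (density_g_cm3 / 1000.0)  # 1 cm^3 = 1000 mm^3
    return volume_loss_mm3 / (load_N * distance_m)

# Hypothetical replicate volume losses (mm^3) and a hypothetical PTFE run:
print(sn_smaller_is_better([0.011, 0.012]))         # ~38.8 dB
print(specific_wear_rate(0.0005, 2.2, 10.0, 90.0))  # ~2.5e-4 mm^3/(N m)
```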
Analysis of wear results The experimental layout and results of the abrasive wear of the polymer composites are shown in Table 4 (Figure 3). Linear increases with load and distance, and decreases with compression strength and grit size, are also evident here, although only a slight change occurs with decreasing grit size. With a rise in testing load, the wear loss of the polymer matrix increases slightly more than that of the composites. This phenomenon reflects that different wear mechanisms might be involved. Furthermore, the carbon-filled composites give slightly lower wear than the other samples due to their intrinsic properties, but no significant variation is observed between them. In other words, the fillers used in the present study reduce the wear rate, which indicates that the role of the tribofilm was important. It is well known that the wear resistance of polymer composites depends on the ability of the composite to form a thin, uniform, and adherent transfer film on the counterface (Bahadur & Sunkara, 2005; Bahadur et al., 1997; Schwartz & Bahadur, 2001; Vande Voort & Bahadur, 1995). This transfer film on the counterface prevents direct contact between the polymer pin and the metal counterface, which avoids abrasive action and therefore results in reduced wear. The transfer film is also observed for the PTFE matrix, but its adhesion to the counterface was not good and so it was detached easily (Bahadur & Sunkara, 2005; Basavarajappa et al., 2009; Chand et al., 2000). On the other hand, it is observed that the weight loss is maximal for run 1 due to the bigger grit size of 400 (~16 μm), the first level of sliding distance (45 m), the lowest load (5 N), and compressive strength (4.8 MPa), which is followed by run 9. The main effects plot gives the optimal combination of testing parameters for minimum volume loss; the slope of the main effect plot for each parameter determines this. Figure 4 shows the mean volume loss as a function of grit size, applied load, sliding distance, and compressive strength of the samples, respectively. The volume loss decreases with increasing grit size and compressive strength, but increases with increasing load and sliding distance for all materials. These trends are clearly related to Archard's equation (Harsha & Tewari, 2002). Archard's equation is generally used to describe the sliding wear of metals caused by adhesion, but it has proven very useful in abrasive wear as well. The equation states that V = k·L·D/H, where V is the volume loss of the material; L is the applied load; D is the sliding distance; k is a constant called the wear coefficient; and H is the hardness of the material. This equation clearly indicates that the wear volume is directly proportional to both the load and the sliding distance, and inversely proportional to the hardness. The average size of the contact area increases, and the volume loss is independent of the apparent area of contact. In the current case, the wear resistance of the materials is also related to their compression strength, apart from load and distance, but the relation between the volume loss and the compression strength is found to be weaker than the hardness effect (see Figure 3(d)). It is observed that with increasing load and sliding distance, the penetration of the hard asperities of the counter surface into the softer pin surface increases, and the deformation and fracture of the asperities of the softer surface increase (Harsha & Tewari, 2002; Ravi Kumar et al., 2009; Sudarshan, 2013).
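Archard's relation quoted above is simple enough to encode directly; a minimal sketch (the function name and sample values are ours, not the paper's):

```python
def archard_volume_loss(k, load_N, distance_m, hardness):
    """Archard's equation V = k*L*D/H: wear volume grows with load and
    sliding distance and falls with hardness (units depend on k and H)."""
    return k * load_N * distance_m / hardness

# Doubling the load doubles the predicted volume loss (hypothetical inputs):
print(archard_volume_loss(1e-4, 10.0, 90.0, 50.0))
print(archard_volume_loss(1e-4, 20.0, 90.0, 50.0))
```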
Again, the grit size is the dominant factor in the wear resistance of the materials among the parameters (Chand & Dwivedi, 2007; Chand et al., 2000; Sahin, 2005; Unal, Sen, & Mimaroglu, 2005), because the plot with the higher inclination has the higher influence. It is followed by the sliding distance. The size of the abrasive particle and the applied load tend to increase the abrasive wear volume of the composites (Chand et al., 2000), whereas the wear rate tends to decrease with increasing sliding velocity at constant applied load. Secondly, a higher weight fraction of glass fibers in the composite improves the abrasive wear resistance, because high energy is required to facilitate failure in glass fibers, which is the case in the current study in terms of volume loss. However, the carbon fabric reinforced vinyl ester composite revealed better wear resistance than the glass fabric reinforced vinyl ester composite. Parallel lines in any interaction plot indicate no interaction. Non-parallel lines are indicative of the presence of interaction, while intersecting lines are indicative of strong interaction. Figure 5(a-c) shows the interaction plots for the PTFE matrix and its composites: grit size vs. load, sliding distance vs. load, and sliding distance vs. grit size, respectively. It is evident that some interactions appear for distance vs. load. There is a decreasing trend at 5 and 10 N load, but the volume loss increases with sliding distance. Similarly, the volume loss increases with the 400 and 800 grit sizes of paper with no interaction observed, while with the 1,200 grit, where the volume loss decreases, some interaction occurs. Figure 5(d-f) indicates compression strength vs. load, compression strength vs. grit size, and compression strength vs. distance, respectively. With increasing load, the 20 N level is especially associated with the 6.3 MPa compression strength, whereas the 10 N load extends to the 9.8 MPa compression strength, indicating a strong interaction effect. Here, the 800 and 1,200 grit sizes of SiC show parallel lines, meaning that no interaction occurs; however, the 400 grit size indicates an interaction effect, which is especially true when the compression strength is 6.3 MPa. A similar effect is also exhibited at the sliding distance of 150 m (Figure 5(f)). More interactions are observed for the combination of lower grit size under higher load associated with higher sliding distance. This might relate to the initial stage of the wear process, although the lower sliding distance results in the lower volume loss of the samples. The normal probability plot for the tested samples is shown in Figure 6 with the cumulative distributions of the residuals. The error distributions appear to be normal. However, the error distribution may be slightly skewed, with a few extreme points on the right tail, because of the nature of the tested samples. In general, the residual is around ±0.5. Figure 7 shows the specific volumetric wear rate as a function of abrasive size, applied load, sliding distance, and compressive strength of the samples. The specific wear rate decreases with increasing grit size, load, and sliding distance for all the materials, but only partly decreases with compressive strength. Among the parameters, again, the grit size is the dominant factor in the wear resistance of the materials (Chand & Dwivedi, 2007b; Sahin, 2005; Unal et al., 2005), followed by the load.
The values are in the range of 2.27 × 10⁻⁴ mm³/N m to 7.05 × 10⁻⁵ mm³/N m. The specific wear rate is very high initially for load, distance, grit size, and material type. The wear rate decreases sharply when the abrasive is changed from 400 to 800 grit, owing to the change in penetration ability on the sample, and thereafter it decreases further. This is followed by the load, sliding distance, and compression strength, respectively. In addition, a more pronounced decreasing trend is observed with increasing load from 10 to 20 N and sliding distance from 90 to 150 m. The wear rate again decreases with increasing running distance (Chauhan et al., 2010; Suresha et al., 2007; Unal et al., 2004; Yousif & El-Tayeb, 2009). However, it increases slightly when changing the filler type from glass-filled to carbon-filled composite. In other words, with an increase in applied load from 5 to 10 N, there is a reduction in the wear rate because the apparent contact area is greatly increased at higher applied loads. Since there is an increase in contact area, it allows a large number of particles to encounter the interface and share the stress (Anand & Kumaresh, 2012). This, in turn, leads to a steady state or a reduction in the wear rate. The specific wear rate strongly depends on the applied load and abrading distance for all the tested materials. The load has a stronger effect on the wear behavior of PTFE composites than the sliding velocity (Liu et al., 2001; Tevrüz, 1998; Unal et al., 2004), but the best wear resistance is achieved for PTFE + 18% C + 7% Gr composites due to the 7% Gr filler in that matrix (Unal et al., 2005). However, the carbon fiber reinforced polyaryletherketone (PAEK) matrix composite had worse abrasion resistance compared to glass fiber reinforced PAEK composites (Harsha & Tewari, 2002). Moreover, the wear rate decreases with the increase in grit grade number for polymers like APK, POM, UHMWPE, PA66, and PPS + 30% GFR polymer composites. Moreover, the content of filler and the bonding between the particles and the polymer matrix seem to be important for wear reduction. For example, when there is a strong bond between the fillers and the matrix, separation of material from the pin surface becomes more difficult and hence contributes to high wear resistance. Weak bonding between the filler particles and the polymer matrix presumably leads to a considerable reduction in wear resistance, because the C particles are easily separated from the polymer and/or cause three-body abrasion in the contact zone (Schwartz & Bahadur, 2001). This is so because, as the filler proportion increases, the number of filler particles in the transfer film also increases, and this causes the disruption of the transfer film due to the increased number of hard particles. Wear rates Experimental observations are transformed into the signal-to-noise ratio (S/N = η). The S/N ratio is computed using Equation (1) for each of the 9 runs. The minimal and maximal S/N ratios are found to be about 73.391 and 94.609 dB, respectively; they correspond to run 1 and run 9, respectively. The optimum mean response value is found to be L3 G3 D3 C2. These results reveal that carbon-filled PTFE composites show a slightly lower wear rate than glass-filled composites, but no significant variation appears. The penetration ability of SiC abrasives decreases with increasing sliding distance of the samples. Analysis of variance The ANOVA results for the abrasive sliding wear behavior of the PMC materials are listed in Table 5.
This analysis is undertaken for a level of significance of 10%, that is, for a level of confidence of 90%. It is clear from Table 5 that factors L, G, and D have statistical and physical significance on the specific wear rate. As noted in the last column of the ANOVA table, the percentage contribution P of each factor to the total variation shows its degree of influence on the wear results. It can be observed that factor L (P = 27.77%), factor G (P = 51.14%), and factor D (P = 14.50%) have effects on the abrasive wear rates, but factor C has no significant effect on it. The error associated with the ANOVA table for the mean S/N ratio is approximately 6.59%, which is well above that of the volumetric wear rate analysis. The adjusted correlation coefficient R² is found to be about 0.87. On the other hand, the last column of the table indicates the p-value for the individual control factors. It is known that the smaller the p-value, the greater the significance of the corresponding factor/interaction. The ANOVA table for the S/N ratio (Table 5) indicates that the grit size (p = 0.003), sliding distance (p = 0.044), and load (p = 0.066), in this order, are significant control factors affecting the wear rate, while the compression strength (p = 0.392) has an insignificant effect on the wear rate of the tested samples. This means that the grit size is the most significant factor, followed by the sliding distance and the load. The present study indicates that the wear behavior of the PTFE matrix and its composites under abrasive sliding conditions, analyzed using the Taguchi approach, depends not only on the tribological system (abrasive grit, load, and sliding distance), but also, slightly, on a material property, the compression strength. Confirmation test The final step is to verify the improvement of the quality characteristic using the optimal levels of the design parameters (L1G3D1C3). The predicted S/N ratio is calculated by the following formula: η̂ = ηₘ + Σᵢ₌₁ⁿ (η̄ᵢ − ηₘ) (5) where ηₘ is the total mean S/N ratio, η̄ᵢ is the mean S/N ratio at the optimum level of the i-th factor, and n is the number of the main design parameters. Based on the S/N ratio analysis, the optimal testing parameters for the volume loss of the tested materials were factor L at level 1, factor G at level 3, factor D at level 1, and finally factor C at level 3. According to this prediction, the theoretical value of the S/N ratio is about 39.459 dB. It corresponds to about 0.0106 mm³, which is the lowest value within the obtained experimental results (Table 6). This table presents a comparison of the predicted volume loss with the actual volume loss using the optimal testing parameters. It can be seen that the difference between verification and calculation is within a reasonable limit (1.604 dB). However, when the optimal volume loss is calculated based on the ANOVA results, using only the significant factors (L1G3D1), the S/N ratio is about 38.99 dB and its corresponding value is about 0.01123 mm³. In the case of the specific wear rate analysis, the optimal level of the design parameters is found to be L3G3D3C2. The S/N ratio can again be calculated from Equation (5). The theoretical predicted value is about 101.146 dB, corresponding to about 8.45 × 10⁻⁶ mm³/N m, which is the lowest value within the experimental results (Table 6).
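The confirmation-test prediction above is a one-liner once the level means are known. A minimal sketch (the total mean and level means below are hypothetical, not taken from Table 5):

```python
def predicted_sn(total_mean_sn, optimal_level_means):
    """Predicted S/N at the optimal setting, Eq. (5):
    eta_hat = eta_m + sum_i(eta_bar_i - eta_m) over the significant factors."""
    return total_mean_sn + sum(m - total_mean_sn for m in optimal_level_means)

# Hypothetical total mean S/N and level means for factors L1, G3, D1:
print(predicted_sn(35.0, [36.2, 37.5, 35.8]))  # -> 39.5 dB
```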
Conclusions The following conclusions were drawn from the experimental and analytical results for the wear of the polymer-based composites. (1) The experimental results showed that the weight loss and volume loss of the samples were highly influenced by abrasive size, sliding distance, and load, but only slightly by the compression strength of the tested sample. (2) The average specific wear rate decreased with increasing grit size, load, and sliding distance, but only partly decreased with compressive strength when tested against SiC abrasives. (3) The inclusion of carbon filler contributed slightly to reducing wear, and these composites exhibited slightly better wear resistance than the glass fiber reinforced composite and the PTFE matrix. (4) ANOVA indicated that abrasive size, applied load, and sliding distance exerted a great effect on the specific wear rate, at 51.14, 27.77, and 14.50%, respectively; however, compression strength had a negligible effect (6.50%) on the quality characteristics. (5) The optimal process parameters, which minimize the volume loss, were the factor combination of L1, G3, D1, and C3. That is, an experiment carried out at 5 N load against 1,200-grit abrasive with the 25% carbon-filled composite over a 45 m sliding distance would lead to the minimum volume loss. (6) Confirmation experiments were also conducted to verify the optimal testing parameters. The predicted volume loss and specific wear rate of the samples were found to lie close to the experimentally observed values of the S/N ratio, with errors of 1.604 and 2.81%, respectively.
2018-12-11T06:49:05.546Z
2015-01-20T00:00:00.000
{ "year": 2015, "sha1": "528b159598ed74cd6a7c994d58226a04b17b5c26", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1080/23311916.2014.1000510", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "528b159598ed74cd6a7c994d58226a04b17b5c26", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
4185326
pes2o/s2orc
v3-fos-license
Atomic-level structural correlations across the morphotropic phase boundary of a ferroelectric solid solution: xBiMg1/2Ti1/2O3-(1 − x)PbTiO3 Revelation of unequivocal structural information at the atomic level for complex systems is uniquely important for a deeper and generic understanding of structure-property connections and a key challenge in materials science. Here we report an experimental study of the local structure, applying total elastic scattering and Raman scattering analyses to an important non-relaxor ferroelectric solid solution exhibiting the so-called composition-induced morphotropic phase boundary (MPB), where concomitant enhancement of physical properties has been detected. The powerful combination of static and dynamic structural probes enabled us to derive a direct correspondence between the atomic-level structural correlations and the reported properties. The atomic pair distribution functions obtained from the neutron total scattering experiments were analysed through big-box atom-modelling implementing the reverse Monte Carlo method, from which distributions of magnitudes and directions of off-centred cationic displacements were extracted. We found that an enhanced randomness of the displacement directions for all ferroelectrically active cations, combined with a strong dynamical coupling between the A- and B-site cations of the perovskite structure, can explain the abrupt amplification of the piezoelectric response of the system near the MPB. Altogether this provides a more fundamental basis for inferring structure-property connections in similar systems, including important implications for designing novel and bespoke materials. Ferroelectric materials with a composition-driven structural crossover, commonly known as a morphotropic phase boundary (MPB), have become an indispensable part of many modern devices, particularly used as sensors, actuators, and memories, utilizing their superior properties at the MPB. The term MPB, which literally refers to a boundary between two forms, was first coined to describe the chemically induced change in the ferroelectric long-range order of the famous PbZrxTi1−xO3 (PZT). Since then, it has become a well-established fact that invoking a structural instability by tweaking the composition may result in anomalous characteristics similar to PZT. The current understanding of an MPB and the associated enhancement of certain physical properties of a ferroelectric material primarily relies on the fact that the system acquires a state where the rotation of the unit-cell polarization vector becomes easier due to the development of additional degrees of freedom, either in a single low-symmetry phase or in several coexisting phases. The concept of a bridging low-symmetry phase became noted after the discovery of a monoclinic Cm phase at the MPB of PZT 1-3 and manifested a renewed interest in studying as well as in designing bespoke complex ferroelectric materials. However, in recent years it has also been demonstrated that the analysis of the gross average structure broadly simplifies the ubiquitous and varied complexity of the mesoscopic-scale structural features, which are crucial to understanding the occurrence of anomalous properties at the MPB [4][5][6][7][8][9][10]. Especially, with many competing interactions and structural frustrations at the atomic level, multi-component ferroelectric materials in general provide a unique challenge in developing precise structural models that would correspond to the physical properties 11.
Consequently, it is still a riveting topic to investigate aspects of structural behaviour even for immensely studied systems like PZT, in search of a more rigorous model than the existing concept of easy rotation of the net polarization [12][13][14]. It is also highly anticipated that the understanding will not be complete unless the static structural models are equally complemented by dynamical information, typically obtainable from inelastic scattering processes [15][16][17][18]. Driven by the motivation of finding Pb-free or reduced-Pb alternatives to PZT, there has been a strong interest in Bi-containing ferroelectric solid solutions with the general formula xBiMeO3-(1 − x)PbTiO3, Me = Sc, Fe, Mg1/2Ti1/2, Ni1/2Ti1/2, Ni1/2Zr1/2, etc., following the revelation of the MPB features in those systems 19,20. For example, xBiScO3-(1 − x)PbTiO3 (BS-PT) exhibited even superior physical properties with a higher operational temperature range than PZT 21,22. The successful partial substitution of Pb by Bi without compromising the cherished properties of pure-Pb-containing systems was considered an important milestone for developing eco-friendly materials. Therefore BS-PT, in the form of ceramics, thin films, as well as single crystals with desirable properties, gained rapid interest as a potential replacement for PZT. However, the high price of Sc2O3 is making this particular system less accessible. Although not as attractive as BS-PT, xBiMg1/2Ti1/2O3-(1 − x)PbTiO3 (xBMT-PT), first reported in 2004, provides a reasonable alternative to the expensive Sc, with promising MPB properties at x ≈ 0.63 23. It is also considered an interesting system, as BMT was reported to be a structural analogue of anti-ferroelectric PbZrO3 24. In addition, xBMT-PT has also been uniquely shown to possess a zero thermal expansion coefficient in the range 0.2 ≤ x ≤ 0.4, and highly stable piezoelectric properties at non-ambient temperatures 25,26. In terms of the structural phase transition driven by the composition, very recently Upadhyay et al. 27 proposed that there is a tetragonal (P4mm) to monoclinic (Pm) phase transition through a mixed-phase (P4mm + Pm) region that exists in the range 0.60 ≤ x ≤ 0.67, based on Rietveld refinements of the powder XRD pattern. The morphotropic phase boundary in ferroelectric solid solutions has so far been detected through typical average structural investigations, mostly applying standard powder diffraction techniques to perovskite-based oxide systems. Although there have been rigorous attempts to study particularly the local structural correlations, such as diffuse scattering studies on Pb-based complex systems revealing strong evidence for large deviations from the average structure [8][9][10][28][29][30], there is still a lack of an experimentally conceived model at the atomic level to identify and correlate the properties with the different facets of the structure, which could then serve as a more fundamental basis for finding and designing superior as well as eco-friendly materials. Hence, there is a pressing need to elucidate mesoscopic-scale atomic correlations in ferroelectric solid solutions to better understand the physics of the MPB.
With the present availability of high-energy x-ray synchrotron facilities and spallation neutron sources, it is nowadays possible to obtain data over a wide range of reciprocal-lattice vectors Q for powder samples, and thereby to consider Fourier transformation of the data, taking both Bragg diffraction and diffuse scattering with equal weight. This is known as the total scattering method, which yields pair distribution functions (PDFs). PDFs essentially describe the whole structure in terms of atom-atom distances weighted by their scattering power. Hence PDFs are critically sensitive to variations of the local correlations and are considered categorically as a powerful local probe 31. The total scattering method has already been applied to a number of popular ferroelectric systems, including both Pb-based and Pb-free compounds, and revealed hitherto unseen structural characteristics, such as large and persistent static displacements of cations from their crystallographic sites, distinct local and average polarisation, chemical ordering, and the formation of polar nano-regions as well as their development with composition and temperature 4,5,11,13,17,[32][33][34][35][36][37]. In this report we show the composition-driven evolution of the local cation environment in xBMT-PT as well as its dynamical behaviour through a combined analysis of neutron PDFs and Raman scattering data at ambient conditions for compounds covering the whole composition range of stability across the phase diagram, which has not been reported so far. Our experimental results provide deeper insights into the structural phenomena occurring at the MPB of a perovskite-type ferroelectric solid solution and help to establish a comprehensive structure-property relationship for a broad range of systems. Results and Discussion Pair distribution function analysis. Figure 1(a and b) shows the development of the PDFs for xBMT-PT as a function of composition derived from the neutron total scattering data, along with the {001} pc (pc refers to the pseudocubic setting) Bragg peaks extracted from in-house XRD data (Fig. 1c). The long-range correlations in the PDFs, shown in Fig. 1b in the range 40-46 Å, exhibit a distinct change at the MPB composition, which can be easily linked to the observed trend in the Bragg peaks with x. However, the short-range correlations up to 8 Å, where peaks can be uniquely associated with the different first-neighbour atom-atom distances in the perovskite-type structure, do not show an abrupt change that would single out the MPB composition (Fig. 1a). Nevertheless, it appears that the peaks of the PDF at x = 0.10 are most pronounced and then gradually broaden with increasing x. The evident gradual changes continue up to the MPB composition and become almost negligible for the compositions x = 0.63, 0.65, and 0.70 at distances longer than approximately 5.1 Å. The changes seen in the peak shape with increasing x suggest enhanced structural disorder, which indicates that the composition-induced structural phase transition is more of an order-disorder type than a displacive type. This distinction from a pure displacive-type structural phase transition is an important tag for a ferroelectric system and has an impact on the properties, especially under external stimuli and non-ambient conditions.
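At its core, the PDF discussed above is a scattering-weighted histogram of interatomic distances. The sketch below is an illustration only: it uses a cubic box with periodic boundaries and omits the scattering-length weighting and the standard normalization to G(r).

```python
import numpy as np

def distance_histogram(positions, box_length, r_max, dr):
    """Histogram of pairwise atom-atom distances in a cubic periodic box;
    the raw ingredient of a pair distribution function."""
    positions = np.asarray(positions, dtype=float)
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(len(positions) - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)  # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r[r < r_max], bins=edges)[0]
    return edges[:-1] + 0.5 * dr, counts

# Example: 500 random "atoms" in a 54 Angstrom box, binned out to 20 Angstrom
rng = np.random.default_rng(1)
r, n = distance_histogram(rng.uniform(0, 54, size=(500, 3)), 54.0, 20.0, 0.1)
```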
In order to extract more specific and quantitative information about the local structural changes across the MPB, we have carried out typical big-box modelling based on the reverse Monte Carlo (RMC) method against the experimental PDF. The analyses were restricted to the atomic-distance range 1-20 Å to ensure that the resulting model solely reveals the local correlations 38,39. Very recently, there have been a few reports on popular ferroelectric systems containing Pb and/or Bi using similar big-box modelling applying the RMC technique, which have shown how the local structural diversity can be related to the anomalous macroscopic properties 32,34,36,37,40. A similar method, first described by Keeble et al. 35, was adopted here to map the distribution of the cationic displacement directions with respect to the centre of their corresponding oxygen polyhedra on a stereograph. These graphs essentially help to evaluate the behaviour of the individual cations with x. The magnitude and the direction of the polar displacements of the cations are both crucial for a ferroelectric material, since they affect the polarization as well as the structure of the system, and consequently determine the ensuing macroscopic properties 6. Figure 2 shows such stereographs for the different cations as a function of x. For both Pb and Bi, it is apparent that at low values of x the directions of off-centre displacements are consistently along the [001] pc direction (tetragonal distortion), with a gradual dispersion of the high-density region as x increases. In addition, an abrupt enlargement of the dispersion can be seen, particularly for the A-site cations, from x = 0.60 to 0.63 (see Fig. 2d). The strongly coupled behaviour of the A-site cations can be justified by their similar electronic properties; however, it should be noted that neutron scattering cannot distinguish well between Pb and Bi, as they have very similar scattering lengths. Although Ti seems to follow the A-site cations with composition, the change in the direction distribution is smoother, with a relatively higher level of dispersion on the stereographs for all x compared with that of the A-site cations. Mg seems to behave uniquely, as it tends to scatter initially on the {001} pc planes. However, for x ≥ 0.50 that preference becomes weaker, and at x = 0.70 the Mg shifts are predominantly along [001] pc. Apart from these direct observations, the maps can also help to derive further characteristics of the local structure. On the basis of the density distribution of the points in Fig. 2, it is apparent that the inclusion of Bi3+ and Mg2+ in PbTiO3 triggers a structural instability at the local level, and the system evolves from a strongly anisotropic to a rather isotropic state in terms of the displacement directions of the individual cations, which could equally be seen as a reduction of the energy difference between the different orientation states with the increase of x. In addition, this statistical variation of directions with x suggests that the system gradually becomes more pliable to accommodate non-collinear polar shifts of the cations while approaching the MPB from the tetragonal side. This feature in general has direct consequences for the properties of a system where polarization is coupled to strain, and the strain is considered a primary constraint on the cooperative switching of the dipoles under external fields 41.
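The stereographs described above are density maps of displacement directions projected onto a plane. Below is a minimal sketch of one common convention (equatorial stereographic projection with the pole along [001]pc); the paper may use a different variant, and the function name is ours.

```python
import numpy as np

def stereographic_xy(directions):
    """Project unit displacement directions onto the equatorial plane,
    with the projection pole along [001]pc; directions in the lower
    hemisphere are folded up first."""
    d = np.asarray(directions, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    d[d[:, 2] < 0] *= -1.0                       # fold into upper hemisphere
    return d[:, 0] / (1.0 + d[:, 2]), d[:, 1] / (1.0 + d[:, 2])

# Example: purely tetragonal shifts along +/-[001]pc all map to the origin
x, y = stereographic_xy(np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]]))
```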
Altogether this can be seen as a direct observation of a thermodynamical picture where the individual potential functions for the cations are flattened, which in turn provides greater flexibility for the local polarization 6,42. Considering the densities of the favoured directions for x ≥ x MPB, the maps cannot be referred to a specific average symmetry for the system, although the x-ray powder diffraction results suggest a single monoclinic Pm phase above the MPB (x = 0.70), where the polarization vector should remain preferably within the {100} pc planes (see Fig. 1c). [Caption, Fig. 1(c): {001} pc Bragg peaks as a function of x from room-temperature XRD data. The composition-induced change in the average structure is apparent: the strong tetragonal splitting diminishes near the MPB, leading to a single peak at x = 0.70. The inset shows the ratio between the unit-cell parameters c/a refined in a tetragonal metric, which is indicative of the average strain of the system. The entire 2θ range of the laboratory XRD pattern and the x-ray pair distribution functions obtained from synchrotron XRD (APS facility at the Argonne National Laboratory) data as a function of composition can be found in the supplementary information (Figs S2 and S3).] In order to quantify the observed distribution of the points on the stereographs, we have used a common orientation order parameter S = 1.5〈cos²θ〉 − 0.5 43, where θ represents the angle between the cation displacement direction and the [001] pc direction. The cosine is averaged over all displacement directions, weighted by the corresponding density from the stereographs. The variation of the values of S in Fig. 3 depicts the sequence of the transition with x through a measure of the ordering of the cations with respect to our chosen direction [001] pc, and, intriguingly, it reaches its minimum value of around 0.4 at x MPB for all ferroelectrically active cations. This provides unequivocal evidence that the system acquires a state with maximum structural instability at the MPB, where the piezoelectric coefficient d33 reaches its maximum value, while the Curie temperature Tc abruptly decreases. Figure 4 shows the mean values of the polar shifts of the cations with x as determined from the refined structural model, which can be directly related to the values of the intrinsic polarization of the system. The shifts are comparable with reported values based on experiments as well as theoretical calculations on several similar Pb- and Bi-containing solid solutions [44][45][46]. Apart from Mg, the cations show a gradual increase in the average shifts with increasing x. However, this increase cannot account for the abrupt rise of the d33 coefficient at the MPB. This is highly significant, because it indirectly underlines the critical influence of the enhanced randomness of the shift directions on the response functions driven by an external stimulus. Moreover, it reiterates the fact that the ferroelectric Curie temperature is heavily coupled to the microscopic strain of the system, and hardly influenced by the magnitudes of the polar shifts of the cations. It is further interesting to note that the mean shifts become almost equal in magnitude for the A-site and the Ti cations at the onset of the MPB, which characterises competing A- and B-site driven ferroelectricity 47,48. The magnitudes of the displacements and the standard deviations for Mg are unexpectedly large considering the chemical behaviour of the cation 24,46,49. For low values of x, however, bigger uncertainty can be expected in the Mg-O distances, especially because the Mg-O peak sits on the shoulder between the negative Ti-O and the positive Pb/Bi-O peak (see Fig. 1a). However, as x increases, the respective mean shifts seem to become gradually smaller and therefore the values should be more reliable.
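The orientation order parameter S introduced above is straightforward to compute from a set of displacement vectors. A minimal sketch (function and variable names are ours; the paper additionally weights the average by the stereograph density):

```python
import numpy as np

def orientation_order_parameter(displacements, axis=(0.0, 0.0, 1.0)):
    """S = 1.5*<cos^2(theta)> - 0.5, theta being the angle between each
    displacement and the [001]pc axis: S = 1 for perfect alignment,
    S = 0 for an isotropic direction distribution."""
    d = np.asarray(displacements, dtype=float)
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    cos_theta = (d @ a) / np.linalg.norm(d, axis=1)
    return 1.5 * np.mean(cos_theta**2) - 0.5

rng = np.random.default_rng(0)
print(orientation_order_parameter(rng.normal(size=(100000, 3))))  # ~0.0
print(orientation_order_parameter(np.array([[0.0, 0.0, 0.3]])))   # 1.0
```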
[Caption, Fig. 6: Composition dependence of (a) the phonon modes dominated by A-site cationic vibrations (peaks 1 and 2), (b) the average squared wavenumber and splitting of the phonon modes dominated by B-site cationic vibrations (peaks 6 and 7), (c) the wavenumber of peak 3, generated by BO3 vibrations against A-site cation vibrations (the inset represents the atomic vector displacements in the aristotype cubic structure), and (d) the intensity of peak 8, corresponding to A-O stretching vibrations, which can also be thought of as BO6 tilting vibrations (the inset represents the atomic vector displacements in the prototype cubic structure). Peak numbering is as given in Fig. 5. The dashed lines mark the MPB. The solid lines in (a) are merely guides for the eye; the solid line in (d) is a linear fit to the data points with x between 0 and 0.5.] This conforms wonderfully to the features of the orientation order parameter S (Fig. 3) and σ (Fig. 4b) revealed from the PDF analysis. The occurrence of the MPB is actually mirrored by the A-cation dynamics, following the fact that the squared wavenumber difference reaches its minimum value (see the inset in Fig. 6a), i.e. the difference between the two energy states becomes minimal precisely at the MPB. The phonon modes around 230 (peak 6) and 280 cm⁻¹ (peak 7) are related to the B-site cationic vibrations [50][51][52]. These two modes evidently merge at x = 0.4, which could be related to the suppression of the prominent tetragonal distortion within the BO6 octahedra. It is important to note that at x = 0.4 the mean polar shifts of the A-site cations reach their maximum value, but the Ti polar displacements continue to increase and become almost equal in magnitude to those of the A-site cations exactly at the MPB. This provides a favourable condition for a strong coupling between the off-centred A- and B-site cations. Evidently, the phonon mode involving the BO3 vibrations against the A-site cation vibrations has a well-pronounced minimum at x MPB (see Fig. 6c), which apparently drives the system to a phase transition. The Raman peak near 350 cm⁻¹ increases steadily in intensity with the increase in x, and it has an even higher intensity than the B-cation vibration mode near 280 cm⁻¹ for compositions above the MPB (see Fig. 4). Lattice-dynamics calculations for Pb-based complex perovskite-type oxides reveal that the Raman scattering near 350 cm⁻¹ arises from a Γ-point phonon mode, which is silent (T2u symmetry) in the aristotype structure Pm-3m but may generate Raman intensity in a distorted double-perovskite structure 54. The doubling of the unit cell may be induced by NaCl-type local chemical order at the B site and/or antiferrodistortive structural order, but only the latter may generate Raman activity of the phonon states near 350 cm⁻¹. Inspection of the atomic vector displacements indicates that this mode is comprised of oxygen vibrations along the A-O bonds in the {111} pc planes, but it can also be thought of as a rotation of the BO6 octahedra about the 〈111〉 pc directions (see the inset in Fig. 6d). Consequently, the intensity of the Raman peak near 350 cm⁻¹ is expected to increase with the development of the BO6 tilts. This was in fact detected by combined Raman scattering and neutron/synchrotron x-ray diffraction in a few relaxor ferroelectrics under high pressure [55][56][57]. Therefore, the gradual increase in the intensity of the phonon mode near 350 cm⁻¹ with increasing x suggests that the dynamic BO6 tilting, i.e.
the antiferrodistortive ordering, becomes significant for large values of x. A further crosscheck of this fact was found in the analysis of the RMC-refined structural models, where a gradual decrease of the B-O-B bond angle, which is typically used as a measure of the static tilts of the BO6 octahedra 58, was recorded with the increase in x (see Supplementary Figure S5). Hence this ensuing development of local dynamic antiferrodistortive order upon doping can effectively interfere with the cationic polar shifts by suppressing the flexibility and the affinity to reorient under an external field. Summary and Implications On the whole, our results provide an atomistic view of the development of the composition-driven structural phase transition of a ferroelectric solid solution based on the perovskite structure with chemical disorder on both A and B sites, based on a combined analysis of the PDF and Raman scattering data. It is noted that the apparent change in the average structure with increasing content of BMT can be envisaged essentially as a concurrent increase in local structural disorder in terms of off-centre static displacements of the cations and randomness in their directions. However, it is the degree of stochasticity of the polar-shift directions (parameter S) which evolves systematically with composition and describes the development of the structural instability leading to the morphotropic phase boundary. But this parameter alone cannot explain the structure-property connections. We found that at the MPB, all ferroelectrically active cations (A-site Pb2+ and Bi3+, B-site Ti4+) acquire very similar off-centre displacements, which together with the enhanced flexibility lead to a strong dynamic coupling between the polar shifts of the A- and B-site cations. This suggests that the combination of such structural instability and dynamic coupling is pivotal, and the absence of either factor may not bring about the expected boost in the properties, following the fact that the abrupt fall in the piezoelectric coefficient correlates perfectly with the diminished coupling factor for x > x MPB. These specific aspects of atomic-level structural correlations have not been conceived so far for a ferroelectric system, and should however be distinguished from the random orientation of the so-called polar nanoregions or local electric fields commonly attributed to relaxor ferroelectrics. Our refined structural models did not suggest any notable clustering or chemical ordering at the A- or B-sites in support of that (see Supplementary Figure S7). The present random-direction model does conform to the concept of polarization rotation or polarization extension, but further puts forward the necessity of the collaborative dynamic coupling effect, which should be an important part of the atomistic driving force for the enhanced properties often seen around an MPB. Generally speaking, the proposed model featuring distinct static and dynamic characteristics might be applicable to a broad range of perovskite-based ferroelectric solid solutions, especially with Pb and Bi, where similar behaviour of the average structure has already been reported.
However, it raises a number of related issues, such as the distinction of composition-driven phase boundaries from temperature- or pressure-induced phase boundaries, the nature of the B-site chemical disorder (here Mg acts as a modifier of the coupling processes between the ferroelectrically active elements), the combination of ferroelectric-antiferroelectric order (as in PZT), and especially the MPBs of Pb-free systems, in order to develop efficient design rules. Methods Samples. Ceramic samples of xBMT-PT with x = 0.10, 0.20, 0.30, 0.40, 0.50, 0.63, 0.65, and 0.70 were prepared following the conventional solid-state synthesis route, details of which can be found elsewhere 25. Room-temperature powder x-ray diffraction data (Stoe Stadi-MP powder X-ray diffractometer) were collected to verify the formation of a single perovskite phase. Electron microprobe analyses (wavelength-dispersive Cameca Microbeam SX100 SEM system), averaging over 50 points from each compound, were conducted to confirm the expected chemical compositions (see Supplementary Material Fig. S1). Commercially available powder of PbTiO3 (Sigma Aldrich, purity ~99.9%) was used as a reference sample in the Raman spectroscopic analyses. Total neutron scattering and RMC modelling. Room-temperature neutron total scattering data were collected at the Nanoscale Ordered Materials Diffractometer (NOMAD) at the Spallation Neutron Source of Oak Ridge National Laboratory. NOMAD is a dedicated instrument for total scattering experiments and allows data to be collected over a wide range of the reciprocal-space vector Q (=4π sinθ/λ), which is a necessary condition to produce reliable PDFs. For our measurements the Fourier transformations were done with Qmax = 31.4 Å⁻¹, which provided a real-space resolution of around 0.1 Å. RMC modelling of the structure against the PDF data was performed using the RMCprofile package 39. The starting models for the different compositions were built using the structural parameters determined from prior Rietveld refinements of the neutron powder diffraction pattern. The modelling box size was approximately 54 × 54 × 54 Å3 and consisted of ~13000 atoms. There were 20 independent runs for each composition in order to have good statistics for the structural parameters. The DISCUS software 59 was used to extract the various structural parameters from the refined models. Raman scattering. Raman spectra were collected with a Horiba T64000 triple-grating spectrometer equipped with an Olympus B41 confocal microscope (50x objective) and a Symphony liquid-N2-cooled CCD detector. The spectra were recorded with a laser wavelength of 514.5 nm, on plate-shaped pellets in backscattering geometry, with a spectral resolution of ~2 cm⁻¹ and a peak-position precision of 0.35 cm⁻¹. No polarization, orientation, or spatial dependence of the Raman spectra was detected. The measured spectra were temperature-reduced to account for the Bose-Einstein phonon occupation factor and fitted with pseudo-Voigt (PV) functions (PV = q * Lorentz + (1 − q) * Gauss) to determine the peak positions, full widths at half maximum (FWHMs), and integrated intensities. The criterion for the maximum number of fitted peaks was dI/I < 0.5 for all peaks, where I and dI are the calculated integrated intensity and the corresponding uncertainty, respectively 60. In fact, for all compounds the achieved ratios dI/I were less than 0.25.
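The pseudo-Voigt profile used in the peak fitting above can be written down explicitly. A minimal sketch of one common parameterization (area-normalized Lorentzian and Gaussian sharing a single FWHM, mixed by the fraction q; the authors' exact normalization may differ):

```python
import numpy as np

def pseudo_voigt(x, center, fwhm, area, q):
    """PV = q*Lorentzian + (1-q)*Gaussian, both area-normalized and
    sharing one FWHM, as used to fit the temperature-reduced spectra."""
    hwhm = 0.5 * fwhm
    lorentz = (hwhm / np.pi) / ((x - center) ** 2 + hwhm**2)
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-0.5 * ((x - center) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return area * (q * lorentz + (1.0 - q) * gauss)

# Example: a peak at 88 cm^-1 with a 10 cm^-1 FWHM, evaluated on a grid
w = np.linspace(20.0, 150.0, 1301)
profile = pseudo_voigt(w, 88.0, 10.0, 1.0, 0.5)
```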
Estimates of Volume and Carbon Stock Removals in Miombo Woodlands of Mainland Tanzania Miombo woodlands are a major vegetation type, covering about 93% of the forest land of Mainland Tanzania. They form an integral part of the rural landscape in Tanzania and play a crucial role in providing a wide range of goods and services, including carbon sequestration. However, the sustainability of forest resources is strongly affected by the magnitude of their utilization. There should be a balance between forest growth and removals. Nevertheless, the magnitude of removed volume and carbon in the country is not known. Quantification of volume, biomass, and carbon stock removals is vital in developing effective climate change mitigation strategies, supporting decision making, and promoting sustainable forest management. Based on the National Forest Resources Monitoring and Assessment (NAFORMA) data, comprising 7,026 stumps collected from 16,803 circular plots of 10 m and 15 m radii established in Miombo woodlands of Mainland Tanzania, volume and carbon stock removals were estimated using models that utilize stump diameter (SD) as the sole predictor. Results indicate that the annual volume, aboveground biomass, and belowground biomass removed were 1.71 ± 0.54 m3 ha−1 year−1, 1.23 ± 0.37 t ha−1 year−1, and 0.43 ± 0.12 t ha−1 year−1, respectively. In addition, the corresponding aboveground and belowground carbon removed were found to be 0.60 ± 0.18 tC ha−1 year−1 and 0.21 ± 0.05 tC ha−1 year−1, respectively. Since the estimated annual volume removals exceed the estimated mean annual increment of 1.6 ± 0.2 m3 ha−1 year−1 in Miombo woodlands, the removals indicate unsustainability that would end up in forest degradation. The results also show that removals are more prominent in the following categories: shifting cultivation, production forest, grazing land, general land, village land, and the Eastern and Southern zones. This paper calls for increased appropriate management strategies to ensure sustainability in these land categories and in the entire Miombo woodlands of Mainland Tanzania. Introduction Miombo woodlands are an important vegetation type in the world, playing a vital role in social, economic, and environmental aspects [1]. Miombo woodlands are broadly divided into wet (annual precipitation >1000 mm) and dry (annual precipitation <1000 mm) [2]. In dry Miombo, aboveground woody biomass averages around 55 t ha−1, whilst in wet Miombo, 90 t ha−1 is typical [3]. In addition, root biomass can comprise between 20% (in Eastern Tanzania [4]) and 32% (in Zambia [5]) of total woody biomass. Although Miombo woodlands serve as a reservoir of above- and belowground carbon and thereby mitigate the effects of climate change, they are undergoing great change due to deforestation and forest degradation [6]. While deforestation refers to a permanent or long-term conversion of forest to non-forest land [7], forest degradation is defined differently for different purposes. According to IPCC [8], forest degradation involves changes within the forest that negatively affect the structure or function of the stand and/or site, thereby lowering the capacity to supply products and/or services. The levels of forest removals depend much on the management categories that Miombo woodlands fall into. In Tanzania, these categories include ownership, land use, and vegetation types [9]. Under ownership types, Miombo woodlands fall into Central Government, Local Government Authority, Village Government, Private, and General Land. 
Under land use, they fall into production forest, protection forest, wildlife reserves, shifting cultivation, agriculture, grazing land, built-up areas, and water bodies or swamps [9]. Furthermore, in terms of vegetation types, they are subdivided into closed, open, and scattered woodlands [9]. It is anticipated that these management categories could have different levels of volume, biomass, and carbon stock removals. The reason behind this is the different degrees of exposure to biological and anthropogenic processes that act as agents of removals [10]. In addition, the magnitude of harvesting shows that woodlands in public lands are more affected by anthropogenic activities than those in the forest reserves [10]. Monitoring volume, biomass, and carbon removals under different management categories of Miombo woodlands is important because it provides information on forest use that supports management decisions to ensure sustainability [11]. Furthermore, it is important for establishing baselines for Reduced Emissions from Deforestation and Forest Degradation plus the role of conservation, sustainable management of forests, and enhancement of forest carbon stocks (REDD+) [6,11]. For baseline establishment, participating countries ought to assess their carbon baseline/reference levels through Measurement, Reporting and Verification (MRV) systems [12][13][14]. The forest reference emission level (FREL) sets a benchmark for assessing a country's performance in implementing REDD+ activities. Tanzania has established her FREL [15], which has been approved [16]. The FREL includes only two REDD+ activities, i.e., deforestation and conservation [15]. Forest degradation was not included due to inadequate national inventory data for establishing a baseline and monitoring [15]. Nevertheless, the National Forestry Resources Monitoring and Assessment in Mainland Tanzania (NAFORMA), conducted over the period 2009 to 2014, included an assessment of harvesting through stump measurements. The stumps are an indication of removals and can be used to estimate volume, biomass, and carbon removals in different forest management categories. Tree stumps can further be used to indicate tree species that are heavily removed. Such information is important in forest management, where appropriate interventions may be implemented to sustain the utilization of specific tree species. Therefore, taking advantage of the NAFORMA data, this study aimed at estimating volume, biomass, and carbon stock removals in the entire Miombo woodlands and their management categories by applying models that utilize only stump diameter (SD) in Mainland Tanzania. The volume and biomass models were developed in Miombo woodlands of Mainland Tanzania covering Miombo-rich regions [11]. Understanding the volume, biomass, and carbon stocks removed is an essential step in accounting for ecosystem goods and services. Such estimates are important in designing management plans for the Miombo woodlands that will ensure a sustained potential of this ecosystem's contribution to emission mitigation. Furthermore, understanding the rate of removals in each category will aid in prioritizing mitigation measures so that more effort is targeted at the Miombo categories with higher removals. Study Area Description. The study was conducted on the entire Miombo woodlands of Mainland Tanzania, which cover 44.7 million ha, equivalent to 93 percent of the total forest area and 73.9 percent of the total growing stock (Figure 1). 
Miombo woodlands occur in different administrative regions in Mainland Tanzania that are characterized by both tropical and subtropical climates. They are mainly found in the western zone (Tabora, Rukwa, and Kigoma regions) and the southern zone (Iringa, Lindi, Mtwara, and Ruvuma regions) (Figure 1), where vast areas occur in the village lands [9]. The weather conditions for all regions may be divided into three distinct seasons: a hot dry season from mid-August to the end of October, a hot wet season from November to the beginning of April, and a relatively cool dry season from April to the beginning of August. Furthermore, two rainfall regimes exist. In the southern, southwestern, central, and western parts of the country, including Lindi, Rukwa, and Tabora, the rainy season starts in mid-November and ends in mid-May. In the north and in the northern coastal zones, the rain is distributed over two shorter periods (October-December and March-May) [17]. Sampling Design. This study was based on the sampling design implemented by NAFORMA [9]. The NAFORMA sampling design was double sampling for stratification and was designed based on a simulation study described by Tomppo et al. [18]. The first-phase sample consists of a dense grid of L-shaped clusters overlaid on the map of Mainland Tanzania at distances of 5 km × 5 km between the clusters. The first-phase clusters, which contained 6-10 plots per cluster, were assigned to 18 predefined strata based on predicted growing stock, time consumption for cluster measurement, and slope of the terrain. Since each stratum had a unique sampling intensity, the second-phase samples were systematically selected from the first-phase sample based on the sampling intensities in each of the 18 strata. Only the clusters selected during the second phase of sampling were measured in the field. The distance between field plots within a cluster was 250 m, while the distance between clusters varied from the shortest possible distance (5 km) to 45 km, depending on the second-phase selection. Data Acquisition. In the NAFORMA, concentric circular plots of 15 m radius were used as the sampling units. All stumps with diameter ≥5 cm within a plot radius of 15 m were measured for diameter and height using a calliper or measuring tape. However, after May 2011, all stump measurements were changed to a minimum stump diameter of ≥10 cm within a plot radius of 10 m. This was done in order to improve the speed of data collection in the smaller plot by avoiding the measurement of smaller tree stumps, which are not resistant to annual fires and thereby cause inefficiency in estimating the volume and biomass of trees in Miombo woodlands [19]. We acknowledge that, by increasing the SD threshold, some of the small stumps would be left unmeasured and hence an underestimation of volume and biomass would occur. However, their inclusion would not mean much, since small-diameter stumps have a smaller contribution to volume and biomass than larger trees. Furthermore, the decrease of plot size from 15 m to 10 m would not matter much because all values would be calibrated per unit area (ha). The SD was measured outside the bark immediately under the cutting point (felling cut). If the bark was damaged or missing, logical additions for bark were made. When a stump was taller than 1.3 m, the diameter was measured at the 1.3 m height (DBH). The age of a stump since harvesting was recorded. The precise estimation of stump age as a numerical value may be subjective but is necessary to determine the rate of removals. 
We used all possible means for estimating the numerical value of stump age. These included the colour and freshness of the exposed wood, the size of the sprouts/coppices, and the presence of fire scorch on exposed wood. In addition, the local people who were involved in the data collection assisted the process of stump age determination. The names of the harvested tree species and the SD were recorded. The criteria used for identification of the harvested species were coppice growth and the wood and bark characteristics of the stump. Identification of species names (vernacular) and confirmation of stump age were done with the help of a local tree identifier experienced in ethnobotany and aspects of wood utilization. Allocation of botanic names to the vernacular names was done later using an appropriate species checklist. For the purpose of this study, all the plots that were surveyed for stump measurement in Miombo woodlands were extracted from the NAFORMA database. In total, 7,323 stumps from 16,803 plots were extracted. Data Analysis. Data cleaning to remove obvious outliers due to measurement or recording errors was done before importation into the R software for analysis [20]. After data cleaning, 7,026 stumps from 16,803 plots were left. The minimum and maximum SD were 5 cm and 240 cm, respectively, while the mean was 16.868 cm. Most of the stumps (6,889) had a diameter between 5 cm and 50 cm, while stumps of greater than 50 cm diameter were very few (only 137 stumps). All tree stumps with age records of more than 5 years were dropped from the analysis. This is because it is considered difficult to correctly estimate the age of a stump harvested more than five years earlier. In total, 297 tree stumps of this category were not included in the analysis. In addition, all stumps that were measured at 1.3 m were omitted from the analysis. In total, 449 stumps of this category were dropped. Estimating Volume Removals. To estimate volume removals per tree, we used an allometric equation developed for Miombo woodlands [11] (Table 1). The estimated individual tree volume was divided by the respective estimated age of the stump to obtain the rate of volume removal per year. The estimated individual tree volumes per year were summed up and expressed on a per-plot and per-ha basis. Since each stratum had a unique sampling intensity, it was necessary to calculate an Expansion Factor (EF) for each respective stratum. We avoided using the simple mean volume and carbon because the estimated values would ignore the nature of the sampling design upon which the data were collected. The EF describes the area that a sample plot represents in each stratum. Since the first-phase sampling units were distributed proportionally to stratum area, the area of stratum k, A_k (ha), was calculated as A_k = (n_k / n_1) × A, where n_k is the number of first-phase plots in stratum k; n_1 is the total number of first-phase plots; and A is the total inventory area (the area of Mainland Tanzania). Practical sequences of computation are shown below and further described in [21]: the expansion factor of stratum k was computed as EF_k = A_k / n_k, where A_k is the area of stratum k and n_k is here the total number of field plots observed in stratum k. Consider n_t,k, the number of plots of land cover subclass t falling in stratum k. The area A_t,k of land cover subclass t in stratum k was computed as A_t,k = n_t,k × EF_k, where n_t,k is the number of plots of land cover subclass t in stratum k and EF_k is the expansion factor of stratum k. 
The area of land cover subclass t in the country is the summation of the areas of land cover subclass t found in each stratum; i.e., A_t = A_t,1 + A_t,2 + A_t,3 + · · · + A_t,k, where A_t,k is the area of land cover subclass t in stratum k. Plot-level values were multiplied by the respective plot EF values corresponding to each stratum. The estimated individual plot values were summed up and expressed on a per-ha basis. Moreover, the volume removals were then expressed based on land use types, i.e., production forest, protection forest, wildlife reserve, agriculture, shifting cultivation, grazing land, built-up area, water bodies or swamps, and other lands. Furthermore, they were calculated according to vegetation types (closed woodlands (crown cover >40%), open woodlands (crown cover between 10 and 40%), and woodlands with scattered cropland). They were also calculated based on ownership types (central government land, local government land, village land, private land, general land, and not known), regional division, and tree species. In addition, we tested for significance to explore whether there were differences in mean volume between the different categories. We conducted a two-way analysis of variance (ANOVA). Analysis of mean volume across categories was then done using Duncan's multiple range test to pinpoint which category means differed. Biomass Removal Estimation. We estimated both AGB and BGB removals per tree using allometric equations developed for Miombo woodlands in Mainland Tanzania [11] that utilize SD as the explanatory variable (Table 1). Although BGB is not removed from the woodlands, it is assumed that when a tree is cut, the stump and roots will eventually rot and decompose to release carbon. The estimated individual tree AGB and BGB were divided by the respective estimated age of the stump to obtain the rates of AGB and BGB removal per year. Plot-level estimates were calculated and expressed on a per-ha basis. Biomass removals by woodland types, land use types, ownership types, zones, regions, and tree species were determined using the same procedure as that used to quantify volume removals. We also tested for significance to explore whether there were differences in mean biomass between the different categories, conducting a two-way analysis of variance (ANOVA) and applying Duncan's multiple range test to pinpoint which mean biomasses differed among categories. Carbon Stock Removal Estimation. Carbon stocks are widely estimated from forest biomass estimates [22]. Many authors assume the carbon concentration of a tree to be between 45% and 50% of the dry biomass [22][23][24][25]. In this study, we estimated the tree carbon content by multiplying AGB and BGB by 49% [25]. Then, we estimated plot-level carbon by summing the carbon of all trees in the respective plot. The carbon removals by zones, regions, tree species, vegetation, land use, and ownership types were estimated in the same way as the volume and biomass removals. Table 2 presents volume, biomass, and carbon removals based on vegetation, land use, and ownership categories of Miombo woodlands. The highest volume removals per hectare per year were observed in woodlands with scattered cropland, followed by open woodlands, while closed woodlands had the least (Table 2). In addition, findings showed higher volume removals in shifting cultivation land, as expected, followed by agricultural land, production forest land, and grazing land (Table 2). 
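The estimation chain described above (SD-based allometry, annualization by stump age, expansion factors from the two-phase design, and a 49% carbon fraction) can be sketched in a few lines of Python. This is an illustration only: the allometric coefficients, plot counts, and example values below are invented placeholders, since the actual models of Table 1 are not reproduced in the text.

import numpy as np

def volume_m3(sd_cm, a=1e-4, b=2.4):
    """Placeholder power-law allometry V = a * SD^b (coefficients invented)."""
    return a * np.asarray(sd_cm) ** b

def agb_t(sd_cm, a=1e-4, b=2.3):
    """Placeholder aboveground-biomass allometry AGB = a * SD^b (tonnes)."""
    return a * np.asarray(sd_cm) ** b

CARBON_FRACTION = 0.49  # carbon taken as 49% of dry biomass, as in [25]

def expansion_factor(n1_k, n1_total, n2_k, total_area_ha):
    """Two-phase design: A_k = (n1_k / n1_total) * A, then EF_k = A_k / n2_k,
    where n1_* are first-phase plot counts and n2_k is the number of
    field-measured (second-phase) plots in stratum k."""
    area_k = (n1_k / n1_total) * total_area_ha
    return area_k / n2_k

# Invented example: three stumps on one plot, annualized by stump age
sd = [12.0, 18.0, 30.0]   # stump diameters (cm)
age = [1.0, 2.0, 3.0]     # years since harvest
vol_rate = np.sum(volume_m3(sd) / age)                 # m3 per plot per year
agc_rate = CARBON_FRACTION * np.sum(agb_t(sd) / age)   # tC per plot per year

ef = expansion_factor(n1_k=500, n1_total=20_000, n2_k=120,
                      total_area_ha=88_580_000)        # illustrative counts only
print(f"EF = {ef:,.0f} ha/plot; removals: {vol_rate:.3f} m3/yr, "
      f"{agc_rate:.4f} tC/yr per plot")

Scaling each plot-level rate by its stratum EF and summing over plots, as in the text, then yields stratum and national totals that respect the unequal sampling intensities.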
Moreover, we observed higher volume removals per hectare in private land, followed by general land and village land. The least volume removals were found in central government land, followed by local government land (Table 2). Interestingly, none of the categories (Table 2) had volume values that were statistically significantly different (p < 0.05). Removals by Administrative Zones and Regions. Regarding administrative zones, we observed higher volume removals in the eastern zone, followed by the southern zone (Table 3). Conversely, the zone with the least volume removals was the lake zone, followed by the central zone (Table 3). Again, none of the zones (Table 3) had volume values that were statistically significantly different (p < 0.05). Considering administrative regions, the highest volume removals were experienced in the Dar es Salaam and Pwani regions, followed by the Tanga, Lindi, Mtwara, Morogoro, and Iringa regions, while the regions with the lowest removals were Mara and Arusha, followed by Manyara (Table 4). In terms of biomass and carbon stock removals, the patterns observed in terms of species, vegetation, land use and ownership types, regions, and zones are the same as for volume removals (Tables 2-5). Discussion This study utilized NAFORMA field data collected over the period between 2009 and 2014. The NAFORMA was the first ground-based national forest inventory conducted across Mainland Tanzania. The data were collected from more than 30,000 sample plots and provide a baseline to allow informed decisions that promote sustainable management of the national forest resources. For the purpose of the present study, which aimed at assessing removals from Miombo woodlands, only stump data from Miombo woodlands among the land cover types were used. Appropriate locally developed allometric equations that use SD as the explanatory variable were applied to estimate volume, AGB, BGB, and carbon stock removals. Previous studies have reported volume removals per hectare per year in the Bereku, Haraa, Riroda, and Bubu forest reserves in Babati district, Manyara region, northern Tanzania. These reported findings are slightly lower than those of the current study because they were carried out in reserved forests with some sort of protection, while the NAFORMA data, in addition to protected areas, represent a large forest area that remains unprotected and is therefore vulnerable to tree cutting. Since limited studies have been conducted on volume removals per ha per year in Miombo woodlands in Tanzania, comparison across studies was also based on studies conducted to determine growth rates. The volume increments of Tanzanian Miombo woodlands are estimated to range from 0.8 to 3.3 m3 ha−1 year−1, with a mean of 1.6 ± 0.2 m3 ha−1 year−1 [26,28]. Other studies report the annual volume increment of woodlands to be in the range between 0.57 and 4.35 m3 ha−1 year−1 [29][30][31]. In the Miombo woodland categories where volume removal exceeds increment rates, the removals are deemed unsustainable and the woodlands may consequently be depleted over time. Volume removals were further presented according to zones, regions, vegetation, land use, and ownership types. Regarding land use types, much of the volume removals were found to be in the shifting cultivation lands, followed by agricultural lands, production forest lands, and grazing lands. 
The highest removals in these woodland categories were expected since, in the shifting cultivation and agricultural lands, farmers clear-fell the trees in favour of agricultural crops. On the other hand, since production forests are set aside for timber and charcoal production, these activities explain why this category has high volume removals. For the grazing land, domesticated and wild animals are the main drivers of removals, as grasses and shrubs are grazed out. This leads to the exposure of trees, which are then easily cut for timber, charcoal, poles, and firewood. Moreover, elephant grazing can also reduce tree populations significantly [32,33]. The last 50 years have witnessed an intensification of these land use activities, driven by increasing human and livestock populations as well as the human-induced concentration of wildlife herbivores into small conservation areas [34][35][36][37]. Regarding the vegetation subdivision of Miombo woodlands, there were higher volume removals in the woodlands with scattered croplands because in this subdivision farmers clear-fell trees in favour of agricultural crops, similar to shifting cultivation. Furthermore, open woodlands were second to woodlands with scattered croplands, mainly because of the exposure of trees to activities such as tree cutting for timber, firewood, and charcoal. Closed woodlands had the lowest removals, mainly because most of these woodlands fall under protection ownership types and are found far away from human settlements. For ownership types, higher volume removals were documented in private lands, followed by general lands and village lands. Under the private tenure regime, individuals or groups with user rights of occupancy exclude others from forest resource use [38]. The highest volume removals documented in this regime occur because the majority of private owners have rights to extract all resources in their own forests, in spite of the forest policy of Tanzania restricting extraction of forest resources without legal permits from the government for conservation purposes. In the general lands, the main reason for this situation is that forests are rather under an open access regime in which people are free to extract forest products from the woodlands [39,40]. In contrast, the higher volume removals in village land probably resulted from permitted harvesting coupled with a lack of management plans. This is because, out of 1,821 village forest lands involved in participatory forest management (PFM), only 531 village forests have approved management plans [41]. The Tanzania Forest Act No. 14 of 2002 [42] recognizes communities, through their village councils, as the sole managers of village land forest reserves. Evidence from communities that reserved their own forests in the mid-1990s clearly showed that forests were being restored, unregulated activities were being reduced, and encroachment was declining [41]. Forests also continued to provide local subsistence benefits and opportunities for regulated commercial harvesting [41]. The findings of this study clearly indicate that there are higher volume removals in village land, perhaps because it is expected that trees should be harvested from these forests. On the other hand, the least volume removals were found to be in central government land, followed by local government land. The forest policy and act, which enable the use of management plans, explain this. 
This also implies that there is somewhat effective law enforcement in both central and local government forests compared to forests under other regimes. Most central government forests are for protection purposes, unlike village forests, which are for production purposes. In practice, no tree cutting is allowed in protection forests. However, a forest inventory carried out in the 11 most forested districts in Tanzania showed that encroachment and tree cutting are common even in protected forests [43]. In the future, harvesting pressure is likely to increase in the central and local government reserves, following resource depletion in private, general, and village land. In terms of zones, woodlands located in the eastern zone had noticeably higher removals (4.59-6.7 m3 ha−1 year−1) compared to other zones. This is in line with the 6.7 m3 ha−1 year−1 reported by Treue et al. [26]. For AGB and BGB, tree removals represented an average of 1.23 ± 0.37 t ha−1 year−1 and 0.43 ± 0.12 t ha−1 year−1, respectively. A study conducted by Zahabu [40] recorded aboveground biomass removals of 1 t ha−1 year−1 for the Miombo woodlands at Kitulangalo in eastern Tanzania, which are similar to the results from this study. On the other hand, biomass increments are between 0.58 and 3 t ha−1 year−1 in mature Miombo woodlands [44,46]. The increment is higher in young Miombo woodlands, where it may range from 1.2 to 3.4 t ha−1 year−1 [44]. Considering the mean annual AGB increment (MAI-AGB), removals exceeding the increment rate in the woodland categories studied indicate unsustainability. In terms of carbon stock removals, we estimated total average aboveground carbon (AGC) and belowground carbon (BGC) removals of 0.60 ± 0.18 tC ha−1 year−1 and 0.21 ± 0.05 tC ha−1 year−1, respectively. However, the mean annual AGC increment (MAI-AGC) in Miombo woodlands of Tanzania ranges between 0.111 and 0.404 tC ha−1 year−1 [47]. In addition, increments of 0.9, 0.75, and 0.58 tC ha−1 year−1 were found in Miombo woodlands in Zambia, Mozambique, and South Central Africa, respectively [5,48,49]. The differences in carbon densities might be attributed to varying degrees of exposure to drivers of forest degradation, differences in the age of the tree species, and the type of Miombo woodlands involved [50]. Considering the MAI-AGC in Miombo woodlands in Tanzania, it is very clear that removals exceed increments, which could end up in woodland depletion over time. Moreover, since biomass and carbon stocks were expressed according to zones, regions, species, land use, ownership, and vegetation types, much of the AGB, AGC, BGB, and BGC removals followed trends similar to those of the volume removals, and thus the same explanations hold. Conclusion This study reported the annual volume, biomass, and carbon stock removals per ha that consequently result in carbon dioxide (CO2) emissions in Tanzania. Volume and carbon removals in all categories of Miombo woodlands of Mainland Tanzania are unsustainable, except in the closed woodland, protection forest, wildlife reserve, swamp, central government land, local government land, southern highland zone, western zone, central zone, and lake zone categories. To reduce emissions emanating from removals, and considering national circumstances, all categories of Miombo woodlands should be managed, although the management intensity and priorities should favour those categories with unsustainable removals. 
In addition, we recommend the use of the stump data collected by NAFORMA to estimate removals in other vegetation types such as mangrove forest, lowland forest, humid montane forest, and thickets. This would provide information on the national status of removals, improving the understanding of inferred forest degradation and hence future FRELs. Data Availability The data are available at the database of the Ministry of Natural Resources and Tourism, Tanzania. Conflicts of Interest The authors declare that they have no conflicts of interest regarding the publication of this paper.
Asymptomatic Mediastinal Hematoma as a Complication of Ultrasound-Guided Internal Jugular Vein Catheterization Central venous catheterization is a frequently performed procedure in intensive care units (ICUs) for various treatments such as IV therapy, parenteral nutrition, and hemodialysis, but occasionally encountered complications can be fatal. Therefore, safe insertion with confirmation of correct positioning of the catheter is vital. Ultrasound (US)-guided insertion of catheters has been used widely, and its safety and efficacy have been demonstrated in several studies. However, this technique is not free from complications, such as carotid artery puncture, hemothorax, pneumothorax, and infection. In this case, an anterior mediastinal hematoma developed after US-guided internal jugular vein (IJV) catheterization. Key Words: Mediastinal hematoma, ultrasound, internal jugular vein catheterization INTRODUCTION Central venous catheterization is a frequently performed procedure in intensive care units (ICUs) for various treatments such as IV therapy, parenteral nutrition, and hemodialysis, but occasionally encountered complications can be fatal. Therefore, safe insertion with confirmation of correct positioning of the catheter is vital. US-guided insertion of catheters has been used widely, and its safety and efficacy have been demonstrated in several studies. However, this technique is not free from complications, such as carotid artery puncture, hemothorax, pneumothorax, and infection. In this case, an anterior mediastinal hematoma developed after US-guided IJV catheterization. CASE A 74-year-old woman was referred to the Gazi University Emergency Department with the triad of headaches, petechiae, and thrombocytopenia for management. A computerized tomography (CT) examination of the brain revealed a combined epidural and subdural haematoma. She was managed conservatively with observation by the neurosurgical unit. She was admitted to the haematology unit and treated with intravenous (IV) steroid and IV immunoglobulin for immune-mediated thrombocytopenia. Subsequently she developed tonic-clonic epileptic seizures and required intubation and airway maintenance. She was admitted to the ICU for management of a low Glasgow coma scale and poor urine output. For the acute renal impairment, following conservative therapy, it was decided to initiate hemodialysis. She was given IV platelet replacement, but there was no rise in the platelet count above 5000 units. Despite the 
IV steroid and immunoglobulin therapies and platelet replacement, the patient's platelet count remained lower than 5 × 10 3 /µL. Therefore, the catheter had to be placed despite the low platelet count. The patient had an uneventful (first-attempt) insertion of a double-lumen hemodialysis catheter (12F, 16 cm) into the right IJV under real-time US guidance by the ICU physician. A post-catheter chest X-ray was unremarkable, with no mediastinal widening or gas (Figure 1). She was hemodynamically stable and not symptomatic for the next 24 hours. She had two episodes of hemodialysis in this period. The patient subsequently developed endotracheal hemorrhage and bilateral pulmonary infiltrates with a presumptive diagnosis of alveolar hemorrhage. A high-resolution CT of the chest revealed a hematoma surrounding the IJV catheter tract and the anterior mediastinum (Figure 2). This catheter was removed and a right-sided femoral catheter was inserted. DISCUSSION The internal jugular vein is often the preferred vein for catheterization for temporary hemodialysis in ICUs, but some complications can be seen. Commonly reported complications include carotid artery puncture, arterial pseudoaneurysm, vascular injury, hemothorax, pneumothorax, thrombosis, stenosis, airway obstruction, and infection (3). Ultrasound-guided catheterization has been reported to effectively decrease complications compared to the conventional landmark technique (1,2). However, complications have been reported even when the procedure is guided by US. Mediastinal hematoma is an uncommon complication of central vein catheterization, with only a few cases reported. The hematoma can develop in any location or structure within the mediastinum, leading to differing presentations. The presentation may range from chest pain through to haemodynamic instability or sudden death (6). Chest CT examination is the most sensitive test to visualise a localised hematoma and for visualisation of the mediastinum. US-guided IJV catheterisation may significantly reduce the rate of complications; however, it does not prevent them completely (4,5). This case demonstrates the development of a mediastinal hematoma, despite the use of US guidance, without any evidence of clinical or hemodynamic signs. It suggests that using US does not completely guarantee a complication-free outcome of IJV catheterization and that catheter placement should be carefully confirmed. Figure 1. Antero-posterior chest radiographs of the patient: a) before catheterization; b) after catheterization. Figure 2. Hematoma around the IJV catheter extending to the anterior mediastinum.
The Single Match: Reflections on the National Resident Matching Program's Sustained Partnership With Learners Abstract In 2020, the National Resident Matching Program (NRMP) sponsored the inaugural "Single Match"—the first time that seniors and graduates of U.S. MD-granting and DO-granting schools participated in one Match. In honor of the Single Match milestone, the authors examine the NRMP's history, reflecting on the organization's efforts since the 1950s to support learners and the graduate medical education community by fostering a responsive, robust matching program while remaining true to its founding principles to provide parity of experience for applicants and reduce coercive practices. The chaos and stress associated with the pre-Match days in the 1920s and 1930s that led to the call for a national clearinghouse are highlighted, as are significant NRMP accomplishments, from the organization's incorporation as a 501(c)(3) organization in 1953 as a simple internship placement system through the first Single Match. Recognizing that the current transition to residency is not without its stressors, the authors note that the NRMP remains committed and willing to continue to evolve and identify innovative and meaningful ways to address learner needs and improve the transition to residency. The 2020 Main Residency Match marked a significant milestone for the National Resident Matching Program (NRMP) and the medical education community. The 2020 Match was the inaugural "Single Match"—the first time that seniors and graduates of U.S. MD-granting and DO-granting schools participated in one Match. The Single Match reflects the realization of the Single Accreditation System for U.S. residency programs, which was created and promoted by the Accreditation Council for Graduate Medical Education (ACGME), the American Osteopathic Association (AOA), and the American Association of Colleges of Osteopathic Medicine. 1 In honor of this NRMP milestone, we felt it appropriate, as current and former learner members of the NRMP Board of Directors, with the support of NRMP staff, to reflect on the NRMP's history—namely, its relationship with learners and the critical role it has played and continues to play in learners' transition from undergraduate to graduate medical education. 
Before the Match There are few practicing physicians today who can accurately describe for medical students what the high-pressure struggle for internships was like before the Match. 2 The internship was formalized in the early 1900s as a critical component of medical education. By the 1930s, hospitals' race to sign medical students to training had become fiercely competitive. As internship positions outnumbered the graduating medical school seniors available to fill them, hospitals extended offers to students (via telegram and, more urgently, by telephone) as early as their second year. Students had only hours to accept or reject these offers. Mullin noted that the competition and absence of structure bred unfairness, inequality, and unwarranted pressure. 3 In 1927, the Bulletin of the Association of American Medical Colleges published a letter from Dr. William Darrach, dean of Columbia University College of Physicians and Surgeons, to Dr. Fred C. Zapffe, executive secretary of the Association of American Medical Colleges (AAMC), announcing a plan for deferred acceptance of interns (i.e., waiting to appoint interns to residency positions until all candidates had been considered) at Presbyterian Hospital in New York City. 4 The letter was published as support for changes to a process described in an editorial note as having "proven to be a most vexatious matter in the past. Courses have been disrupted by the scramble for hospital positions; the [students'] work has suffered and hospitals have not profited. " 4 In the 1930s and 1940s, others advocated for changes to the "prevalent disorder" in intern selection and "chaotic situation" in schools and hospitals that led to "an epidemic of worry" among students. 5,6 National organizations and associations passed resolutions calling for a streamlined process for internship placement or attempted "fixes" at the regional level, but none of the efforts were successful. 5 A Match to Support Learners By 1950, a centralized clearinghouse for internship placement had been proposed. 3 The early model was endorsed by national medical and medical education associations to facilitate matching students to internship positions based on confidential rank order lists created by both hospitals and students. The aim was to establish a uniform timeline for all intern appointments. Dissatisfied with the design of the proposed model, a group of Harvard Medical School students, led by W. Hardy Hendren III, approached the school's leadership in 1951 to oppose the algorithm. 7 They believed it inadvertently penalized students for using the first choice on their rank order lists to "reach" for positions they wanted but for which they might be less qualified and thus unlikely to obtain. Hendren and colleagues rallied the class presidents at the 79 existing U.S. medical schools to push for proposed modifications that would make the algorithm more equitable for students. The students' efforts were successful, and the National Interassociation Committee on Internships (NICI)-comprising leaders from national medical education organizations including the AAMC, American Medical Association, and American Hospital Association-agreed to modify the model in time for the 1952 NICI Match. After the first NICI Match, the NRMP (initially known as the National Internship Matching Program) was incorporated as a 501(c)(3) organization in 1953 and over time built a matching program to support learners. 
In 1984, couples matching was introduced so partners could try to obtain training at a pair of programs, usually in the same geographic location. In 1988, advanced specialties were added to the Match so applicants could attempt to secure positions for postgraduate years (PGYs) 1 and 2 simultaneously to achieve a full course of training. That year, the NRMP also introduced WebROLIC, the first web-based iteration of the Registration, Ranking, and Results (R3) system, which provided students 40 more days to consider and input their ranking preferences. In 1995, another significant learner-centered change took place with the commissioning of a new "applicant proposing" algorithm by the NRMP Board of Directors. A study comparing the new algorithm with the former was also commissioned to determine whether the former algorithm favored hospital preferences over student preferences. 8 Although the investigation found that the new algorithm would have changed Match outcomes for only 1 of 1,000 applicants participating in prior Matches, the NRMP adopted the new algorithm at its May 1997 Board Meeting and has used it since the 1998 Match. A Single Match for the Graduate Medical Education Community The AOA Match in the form most people in medical education today would recognize began in 1995, but the AOA Match had served as an osteopathic internship placement system since the 1950s. 9 Although a relatively small number of residency programs dually accredited by the ACGME and AOA had participated for years in both the NRMP and AOA matching programs, it was not until the transition toward the Single Accreditation System was underway that the number of positions in the NRMP Match offered by osteopathic programs started to grow. By the end of 2019, 87% of positions in osteopathic programs were ACGME-accredited. 10 In the 2020 Match, 2,672 positions were offered by 520 programs previously accredited by the AOA. 11 Although osteopathic programs are relatively new to the NRMP, DO students and graduates have been a part of the NRMP fabric for at least as long as the NRMP has been reporting Match outcomes data. In the 2011 Match, 2,178 active DO applicants submitted certified rank order lists. 12 Five years later, that number had grown to 2,982, an increase of 37%. 13 In 2020 and the first Single Match, the number of active DO applicants had risen to 7,154, with DO seniors in particular earning a 90.7% match rate, the highest on record for that group. 11 As the transition toward the Single Accreditation System gained momentum and the Single Match became a growing reality with the planned shuttering of the AOA Match in 2019, the NRMP increased its commitment to supporting DO learners. In 2016, it expanded its reporting to target DO learner communities and highlight Match outcomes for DOs with publications like "Charting Outcomes in the Match for U.S. Osteopathic Medical Students and Graduates," which presents the characteristics and qualifications of DO seniors who have matched to their preferred specialties. 14 With the 2020 Match, the NRMP expanded its definition of sponsored applicants to include DO senior students: Sponsored applicants are students at medical schools accredited by the Liaison Committee on Medical Education or the Commission on Osteopathic College Accreditation who can be offered training positions only through the NRMP or another national matching plan. As sponsored applicants, DO senior students are protected alongside U.S. 
MD senior students, through the NRMP's Match Participation Agreement, from being pressured to accept non-Match positions that could potentially limit their rights to freely and fully investigate all choices for training. In addition, the NRMP looked inward at its governing board to reflect on its diversity and ability to represent all stakeholders. In 2018, the NRMP Board of Directors elected the first DO student director, and in 2020, it revised its bylaws to include DO representation at the physician and resident physician levels. A Sustained Focus on Learners Responding to and supporting the needs of learners has remained a priority of the NRMP over time (see Chart 1). In 2008, the NRMP partnered with the AAMC to convene a work group to address the "Scramble," the chaotic period during Match Week in which applicants who were unmatched when the matching algorithm was processed attempted to secure unfilled positions. The Scramble resembled the early days before the Match: a lack of stewardship over the process, no trust or transparency, and no binding contracts. Thus, unmatched applicants were compelled to make decisions about their training in a very short time frame. Recognizing these applicants deserved a more organized method to secure training, the Supplemental Offer and Acceptance Program (SOAP) was launched as part of the 2012 Match Week and brought with it an extension of the rights and protections afforded under the Match Participation Agreement. In 2009, the NRMP Board of Directors requested an internal study of positions offered outside the Match and found that more than one-third of residency programs in Match-participating specialties offered non-Match positions and that 1 in 7 residents obtained positions outside the Match. Relying on the NRMP's founding principle—to ensure applicants are free to make training decisions without coercion—the NRMP Board implemented the All In Policy with the 2013 Match to mandate that programs electing to participate in the Match register and attempt to fill all positions through the Match or another national matching plan. Other, more recent accomplishments include the creation of The Match PRISM (Program Rating and Interview Scheduling Manager) smartphone application, the development of a library of online learning videos and R3 system support guides, and the publication of Tableau-based interactive data tools. All are available free of charge to learners and interested Match stakeholders via the NRMP's public website at nrmp.org. Reflecting on the Past to Guide the Future In the first Match in 1952, approximately 10,400 internship positions were offered to 5,800 graduating U.S. medical school seniors. 15 In the 2020 Match, the first Single Match, 34,266 PGY-1 positions were offered to 40,084 active domestic and international applicants. 11 Yet, throughout the NRMP's history and for all its growth, the organization has remained true to its roots. As Stalnaker and Smith 16 wrote in 1954: Thus, in full freedom of choice, the plan works as a clearing house, not interfering with, but giving effect to the choices of both hospital and student. It has removed, insofar as possible, the great pressures that caused recriminations once common to the internship placement scene. The broken contracts, the pressuring and signing up of students long before the senior year for internship commitments and other undesirable aspects have now largely disappeared.… The matching program does not allocate, distribute or otherwise control interns or internships. 
It does not set quota or approve hospitals for internship training. It does not, by its nature, favor any group of hospitals or in any way advise students where to intern. Those founding principles remain true today. The NRMP is not an application service, a recruitment company, or an accrediting body for graduate medical education. It is not a physician employer nor is it a financial planner for institutions. Through the Match, the NRMP strives for parity of experience and promotes uniform guidelines for all participants, protects applicants' rights to maintain confidentiality of their ranking and interview preferences, and reduces coercive practices by programs. As a result, the Match "dilutes the traditional power differential between employer and job seeker" by ensuring the matching algorithm achieves the most preferred outcomes for as many applicants as possible. 17 The NRMP has come a long way, but we recognize that the residency selection process still is fraught with stress and uncertainty, albeit for reasons different from those that prompted creation of the Match. Application inflation, debt, and a disproportionate reliance on licensure exam scores have contributed to a climate that makes the transition to residency perhaps as stressful as when the Match was created nearly 70 years ago. 18,19 However, as the NRMP moves beyond achievement of the Single Match milestone and we reflect on the organization's history of responding to the needs of its constituents, we believe the NRMP will continue to evolve and identify innovative and meaningful ways to address learner needs. We hope learners of all kinds value that commitment and stand ready to support the NRMP's efforts to continually improve the transition to residency.
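As a technical aside on the "applicant proposing" algorithm discussed earlier: its core is a deferred-acceptance procedure, sketched below in Python under simplifying assumptions. The production NRMP algorithm (Roth-Peranson) additionally handles couples, supplemental rank order lists, and other constraints that this toy version omits; all applicants, programs, and preferences here are invented.

def applicant_proposing_match(applicant_prefs, program_prefs, capacities):
    """Minimal applicant-proposing deferred acceptance (stable matching)."""
    # Precompute each program's ranking of the applicants it listed.
    rank = {p: {a: i for i, a in enumerate(prefs)}
            for p, prefs in program_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}  # next list index to try
    tentative = {p: [] for p in program_prefs}     # tentatively held applicants
    free = list(applicant_prefs)                   # applicants still proposing

    while free:
        a = free.pop()
        prefs = applicant_prefs[a]
        if next_choice[a] >= len(prefs):
            continue                               # list exhausted: unmatched
        p = prefs[next_choice[a]]
        next_choice[a] += 1
        if a not in rank[p]:                       # program did not rank applicant
            free.append(a)
            continue
        held = tentative[p]
        held.append(a)
        held.sort(key=lambda x: rank[p][x])        # program keeps its favorites
        if len(held) > capacities[p]:
            free.append(held.pop())                # least-preferred is released
    return {a: p for p, held in tentative.items() for a in held}

# Invented toy instance: two programs, one position each
match = applicant_proposing_match(
    applicant_prefs={"A1": ["P1", "P2"], "A2": ["P1"], "A3": ["P2", "P1"]},
    program_prefs={"P1": ["A2", "A1"], "P2": ["A1", "A3"]},
    capacities={"P1": 1, "P2": 1},
)
print(match)  # {'A2': 'P1', 'A1': 'P2'}; A3 remains unmatched

Because applicants propose, the procedure yields the outcome most preferred by applicants among all stable assignments, which is the learner-centered property the 1995 redesign was meant to guarantee.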
Rural development strategies in Indonesia: Managing villages to achieve sustainable development Rural development is an appealing phenomenon to be explored. After the Village Law was established in 2014, each village must manage its own village funds. This significantly impacts the spatial and a-spatial aspects of rural development, either creating opportunities in rural areas or even creating new problems. Currently, many villages carry out development without prudently considering spatial aspects. Villages, as the main suppliers of various staple foods, are unable to achieve self-sufficiency as part of sustainable development. Therefore, it is important to discuss governance aspects. This paper explains rural issues and problems and relates these to the development management framework. The paper uses a literature review and secondary data to identify issues and problems in villages. The paper found that each village can formulate strategic solutions by planning to increase information and public communication, organizing to strengthen systems and internal supervision, actuating to optimize the role of Village-Owned Enterprises (BUMDes), and controlling to strengthen spatial control. Introduction The village is the lowest government level that has the authority to manage its own budget. The Village Law No. 6/2014 stipulates that villages have a unique role and that village governance must follow the principles of recognition, subsidiarity, diversity, kinship, cooperation, deliberation, democracy, independence, participation, equality, empowerment, and sustainability. In addition, Law No. 23/2014 concerning Regional Government details the distribution of authority from the national government to regional governments. Rural areas are homogeneous in nature, emphasize cooperation in agricultural activities, and have a strong kinship factor [1]. Generally, rural communities work in the field of agriculture, which is influenced by natural and weather factors [2]. Villages have two conceptual functions, i.e., executing the village government (local self-government) and handling local community affairs following the rights of origin and traditional rights (self-governing community). In the context of the rural-urban linkage, villages produce vegetable and animal food products and raw materials, and work in villages is in the agricultural, manufacturing, industrial, and other sectors [3]. As a result, rural areas are made up of collections of villages with similar characteristics. Rural and urban concepts refer to the characteristics of the community, while villages and cities are the administrative or territorial units, with villages being the settlements of farmers [4]. There are two main approaches to the concept of rural development, i.e., development from above and development from below [5]. Development from above is associated with external supervision, such as formal directives from higher levels of government. These directives hold great control over the administrative systems of the villages. Development from below, in contrast, is initiated by individuals and groups of community members who come up with innovative solutions and have indigenous design and construction methods [6]. Based on the concept of rural development, there are two major paradigms in Indonesia's village development: 'Membangun Desa' (developing the village, from above) and 'Desa Membangun' (the village developing itself, from below). Villages should adhere to the principles of rural development. 
These principles are that 1) development should improve the conditions of most local residents; 2) more people should benefit from development than are negatively impacted by it; 3) development should ensure that the basic needs of the community are fulfilled; 4) development must conform to people's needs; 5) development should encourage self-sufficiency; 6) development should bring continuous improvement; and 7) development should not damage the environment [7]. Continuous participatory meetings are important tools of rural development that allow the sharing of information to increase the ability of local communities to improve their own lives. Through the participatory rural appraisal (PRA) method, the community can carry out its own analysis to plan and take beneficial actions in line with their abilities [8]. In the development planning process, villagers must strive to be more creative, dynamic, and flexible in dealing with the difficulties they face, so that they can further boost development [9]. Sustainable development is a guiding principle for the world to follow. In a rural context, sustainable development relates to poverty eradication, zero hunger, healthy living and welfare, quality education, gender equality, and decent work and economic growth. However, the world could face disaster if people do not understand the importance of environmental issues [10]. Therefore, [11] urged the implementation of policies for strengthening sustainable forms of agriculture. This could be done by shifting from conventional practices to sustainable activities, replacing industrial farming practices with systems that preserve biodiversity, upgrading soil fertility, and ensuring safe and nutritious food for all humans. Compared to the urban context, the rural environment offers greater natural diversity, healthier areas, cultural habits, nurtured traditions, traditional values, and a rich heritage [12]. This paper aims to uncover the characteristics of village development within the Indonesian governance framework. The main objective of this paper is to identify the issues and problems of rural development. The paper seeks to provide alternative strategic solutions in responding to issues in village and rural development governance. Methods This paper follows a deductive approach, moving from an assumption or proposition toward the data that support the explanation [13]. This research is also aided by a descriptive-qualitative analysis of information found in the literature. Qualitative research emphasizes reality and social phenomena as holistic, complex, and dynamic [14]. The paper uses a literature review to find secondary data to identify rural issues and problems. The data are taken from SUSENAS (national survey), BPS (Statistics Indonesia), and the Ministry of Villages, Disadvantaged Regions and Transmigration for the past five years. The paper analyzes the obtained data based on the principles of management: planning, organizing, actuating, and controlling [15]. As such, the paper classifies all of the problems according to the management framework and describes the data based on the literature. Then, all the stages are combined using Logical Framework Analysis (LFA) for a deeper understanding of real conditions, to build a logical hierarchy based on goals, and to identify potentials and risks. Lastly, the paper proposes solutions to the problems in village and rural development governance [16]. This research follows the stages of analysis below: 
Identifying issues and problems based on a synthesis of various literature. These data originate from scientific papers and reports by relevant government agencies. b. Formulating strategies. Villages face many problems. Therefore, it is essential to formulate strategies to deal with the main problem that has the greatest impact on village development. The use of problem trees or problem tables facilitates the identification of these problems. c. Formulating the best solution to these strategic issues. By referring to the problem table or problem tree, the study develops general solutions. d. Formulating a strategy based on the management framework. In doing so, the paper follows the stages of planning, organizing, actuating and controlling. An overview of rural development in Indonesia The Village Law gives villages the authority to develop all of their service sectors independently. To monitor the development of the villages, the Ministry of Villages, Disadvantaged Regions and Transmigration uses five indicators to calculate the Village Development Index (Indeks Pembangunan Desa - IPD). These five indicators are basic services, infrastructure conditions, transportation, public services, and village government administration. Table 1 presents the growth rate of the IPD and compares the index between the years 2014 and 2018. Table 1 shows that the average growth rate of the Indonesian IPD is 6%, with the greatest improvements in the aspects of village government administration and village infrastructure conditions. This indicates that Village Funds have a positive impact on solving rural problems. However, villages still face many problems, especially in alleviating poverty and reducing unemployment in rural communities. Identification of issues and problems After the issuance of the Village Law, Village Funds became an important topic. Amid the complex problems faced by rural communities, the village government must take strategic steps in dealing with village problems. This highlights the importance of the governance of villages and rural areas. There are seven main factors that cause problems in rural areas, as described in the following section. The readiness of government agencies Knowledge and technological capacity are basic capitals that governments require to be able to offer excellent public services and aid the progress of the village. The Village Head election is one of the strategic efforts to improve the performance of village officials. This is related to leadership factors, which are found to be crucial determinants for the success of development efforts in the village [17,18]. Village leadership plays a significant role in realizing trust-based community development because such development fosters honesty. Notably, the greatest improvements occur when more village meetings are organized [19]. There are some legal requirements to become a village leader, i.e. the Village Head and official employees must have attained a senior high school diploma. The Village Law also demands that villages implement orderly administration and careful planning. The village leader must have these skills so that he or she can influence the community's thinking patterns. Conversely, village governments that are led by people with low levels of education and little experience will potentially allocate their budget for the wrong things.
Low participation in the village society Community participation in village development is an important factor to ensure that development programs target community needs effectively. However, many village communities do not take this opportunity to influence infrastructure development programs and rural community empowerment programs. Generally, the level of community participation in village development affairs remains low [20-23]. Low productivity of human capital Most people living in rural areas work in the agricultural sector. This sector provides work and income during specific seasons. However, outside the harvesting season, farmers' income becomes uncertain. In 2015, the agricultural sector absorbed only 35% of the workforce, which caused low investment in human capital and decreased entrepreneurial activity [24]. The unemployment rate rose from 4.01% to 4.04% in 2018. Unemployment is a shared challenge, as the efforts to create jobs in rural areas through Village Funds are currently ineffective [25]. Lacking human capital and the inability to access workplaces are the main causes of poverty; smart growth increases property values and makes it difficult to live near areas that are developing economically [26]. Poverty rates in Indonesia show a reduction in poverty levels, but rural poverty is still double that of urban areas. Besides the high unemployment rate, education is a major concern, especially vocational education and entrepreneurship education, which allow the youth to be absorbed into the labor force. Conversion of agricultural land The rapid growth of the population working in urban areas has increased the need for housing. Since land in urban areas is limited, housing is constructed in the hinterlands of cities, where land is still cheap and there is a trade-off with transportation costs. These problems originate from the imbalance between current land functions and human needs. Market conflicts over rural residential land functions further hamper land use management in rural areas [27]. This phenomenon is known as urban sprawl. The impact of urban sprawl on the rural environment is quite serious, especially for the productivity of the land and the level of agricultural production. Despite promises of investments in infrastructure and reduced transportation costs, rural areas rarely see these infrastructure improvements materialize [26]. Based on Indonesia's Sustainable Food Agricultural Land (Lahan Pertanian Pangan Berkelanjutan - LP2B) regulation, the government protects food agriculture: farmers have the right to receive government assistance to ensure that their land remains in agricultural use. Table 3 shows that the number of farmers who switched to non-agricultural professions has increased. Notably, by 2017 the agricultural share of employment had dropped to 30%. Socioeconomic pressures make people view the agricultural sector as unpromising for the future. This effect is even stronger where development control by the village government to protect agricultural land is ineffective. The percentage of workers in the agricultural sector continues to decline, which threatens the food supply to the community. The intensity of disasters In villages located in the upstream areas of rivers, a strong conversion from forest land to horticultural agriculture occurs. This is directly related to the many disasters that occur in rural areas, especially those caused by the behavior of rural communities in the highlands.
If these disasters are not met with mitigation efforts, the material and non-material losses will increase and affect the most vulnerable groups in rural areas. A lack of funds for economic development The Village Law provides great opportunities for the people of Indonesia, especially to improve the economic level of the village community. In accordance with the directives for rural and underdeveloped regions, villages are encouraged to establish Village-Owned Enterprises (BUMDes), which are mostly funded from Village Funds. These BUMDes are expected to encourage the creation of new jobs and increase village income. However, the implementation of BUMDes still needs improvement. Although BUMDes funding is limited to 30% of Village Funds (Village Law No. 6 of 2014), Village Funds are often misappropriated by village governments. Corruption, low levels of education, lack of awareness, and various regulations deter investors from participating meaningfully [28]. Based on data from the Ministry of Villages, Disadvantaged Regions and Transmigration, sixty-one percent of villages in Indonesia have established BUMDes (a total of 45,549 enterprises). This is a sharp increase from 2017, when only 24.62% of villages had established BUMDes. However, only 11.63% of villages were successful in managing goods and services in 2017. Evidently, the presence of BUMDes has not significantly improved the local economic situation of rural communities. The utilization of migrant workers A lack of jobs in growing villages to accommodate the economic needs of their communities is the main reason for people to leave their villages [29,30]. They move to become unskilled laborers in urban areas, while most mothers prefer to stay at home, although some become domestic workers abroad. However, a study found that when mothers leave their husbands and children to work, the family's financial situation improves but the separation has adverse effects, especially on children [31]. According to [32], Indonesia will experience a demographic bonus (a large working-age population) in 2035-2045, but children and the younger generation must be educated and prepared to make an impact in this situation. The implication of the sustainable development goals (SDGs) The Sustainable Development Goals direct countries to resolve a set of development issues, which are translated into eighteen SDG pillars. Six of these pillars strongly affect the development of villages and rural areas. Table 5 provides an overview of the pillars that are relevant to rural development: Zero Hunger - the food supply from villages must be maintained; thus, the certification of Sustainable Food Agricultural Land (LP2B) needs special attention from the government. Good Health and Well-Being - activities that endanger public health in villages, such as smoking, need to be prevented. Quality Education - no children should drop out of school because of the costs of education, and all members of village communities should have the opportunity to go to school. Gender Equality - women must be involved in the decision-making process in villages, and the Family Welfare Program (PKK) must be strengthened and easily accessible. Decent Work and Economic Growth - the expertise of the village community is needed in accordance with village development or market demand for jobs. Disaster risk reduction strategies must also be improved. This is in accordance with goal 13 target 3, i.e.
improving education, raising awareness, and building human and institutional capacity related to disasters, adaptation, impact reduction and early warning systems. Strategic solutions A number of strategic actions are needed to deal with the problems presented in the previous section, especially those related to the governance of villages and rural development. This study develops strategic solutions using the POAC approach [15], where each problem can be responded to by positioning it in the development management framework. Planning: increased access to information and improving public communication Indonesia ranks 88th on the global gender disparity index [33]. In answering the aspirations of low-income villagers, each village must provide information either on request or through other channels for voicing aspirations. To achieve equitable information provision, village governments can communicate through online media channels, local mass media, and routine information dissemination to villagers. In the context of village development, the leadership of the village head is needed to make this program succeed [20]. In addition, specific policies that are affirmative and inclusive must support vulnerable groups in society such as mothers, special needs groups, and the homeless. Women's access to development can be strengthened through the Family Welfare Program (PKK) mothers' community, other community groups, and natural leaders or influencers. Regarding disasters, villages must optimize access to information and increase institutional and community capacity. In 2018, only eleven percent of villages had an early warning system to detect disasters, while seven percent of villages had new evacuation routes [34]. Therefore, the Medium-Term Village Development Plan (RPJMDes) must include aspects of disaster management to better anticipate disasters through mitigation and adaptation. Organizing: Strengthening systems and internal supervision It is a challenge for villages to manage their Village Funds. In 2017, the Corruption Eradication Commission (KPK) caught 900 village officials on corruption charges involving Village Funds [35]. Political interests and inadequate administrative and system preparedness did not deter village heads and their teams from committing corruption. Therefore, the strategic steps that can be taken are to strengthen the system in the form of the Village Consultative Body (BPD) and special task forces, and to draft SOPs that support transparency and accountability in the village government. Community participation and initiative are needed to oversee the village government's performance in using Village Funds and to guard against monopolies by private businesses set up in the village. The community can also report to higher levels of local government. Applying Industry 4.0 technologies can make the information system more effective and efficient. Actuating: Optimizing the role of village-owned enterprises Productivity, agency, and connectivity form the foundation for rural economic development [36]. Village-Owned Enterprises (BUMDes) must implement these concepts to increase their productivity and rural income. The Ministry of Villages, Disadvantaged Regions and Transmigration strengthened the role of BUMDes in village development for the 2014-2019 period. BUMDes have the potential to improve the socio-economic well-being of rural communities and village governments.
In addition, they can improve the skills and knowledge related to BUMDes governance and those needed by raw material suppliers [37]. If BUMDes can be optimized by increasing capital from the community, then this effective governance will significantly reduce poverty and unemployment through the creation of new jobs. Training vulnerable groups of people and using BUMDes to employ them can bridge the gap between available human resources and the competencies jobs require and, as such, reduce inequality in rural areas. Villages can strengthen their BUMDes by focusing on the production and sale of village products that could become the villages' leading sector. Moreover, villages can focus on creating a product diversification map, compiling BUMDes business plans and offering human resources training so the BUMDes can manage and utilize superior village products through administrative and marketing management. Furthermore, each village can increase the coverage of BUMDes services by establishing Joint BUMDes so that village products can be utilized by the surrounding villages. Controlling: strengthening spatial control The Spatial Planning Document (RTR) requires a series of careful considerations in terms of provision for natural and environmental resources (sustainability) in each region. However, it should be noted that the scale of the spatial plan map must be sufficiently detailed so that the village blocks and their zones are visible on the map. One important aspect of this document is that spatial planning in the village cannot be fully conducted according to the plan without the participation of the local community. Village regulations (Perdes) must translate and regulate specific provisions of the spatial planning documents, albeit in non-spatial terms. In relation to protecting agricultural land for food needs, village governments should urge their higher-level regional government to issue LP2B certification to control the conversion of agricultural land into housing or other functions. In a broader context, the local government needs to be sensitive to and anticipate the directions for the use of Village Funds, especially with regard to basic infrastructure, which is a type of project that is vulnerable to elite interests. Regencies that have detailed spatial plans will find it easier to regulate zoning in rural areas. Moreover, green infrastructure should replace conventional infrastructure because green infrastructure can contribute greatly to the social, environmental, and economic sectors. Conclusion The governance of village and rural development is a process that is improved continuously, moving through planning, organizing, actuating and controlling. Various sources of literature show that the major issues and problems that occur in a village environment are the readiness of government agencies, low participation by the village society, low productivity of human capital, the conversion of agricultural land, the intensity of disasters, a lack of capital for economic development and the utilization of migrant workers. Using the management framework, this paper concludes that each village should formulate the following strategic solutions: planning to increase access to information and improve public communication; organizing to strengthen systems and internal supervision; actuating to optimize the role of Village-Owned Enterprises; and controlling to strengthen spatial control.
Serum lipid levels correlate to the progression of gastric cancer with neuroendocrine immunophenotypes: A multicenter retrospective study Highlights • The serum lipid patterns of GCNEI differed significantly from those of pure gastric adenocarcinoma. • Serum lipid levels correlated to the progression of GCNEI. • Serum lipid levels impacted the risk of the occurrence of GCNEI. Introduction Gastric cancer with neuroendocrine immunophenotypes (GCNEI) is a distinct and heterogeneous cohort of gastric malignant tumors, characterized by varying expression of neuroendocrine-associated proteins. According to the 2019 World Health Organization (WHO) Classification of Tumors of the Digestive System, GCNEI includes neuroendocrine carcinoma (NEC) and mixed adeno-neuroendocrine carcinoma (MANEC), in which the entire tumor or part of it shows NE morphology.
[Fig. 1. Flowchart of study object selection. GCNEI: gastric cancer with neuroendocrine immunophenotypes; GC-NENM: gastric adenocarcinoma expressing neuroendocrine markers but no neuroendocrine morphology; GC-NEC: mixed carcinoma with adenocarcinoma and neuroendocrine components; NEC: neuroendocrine carcinoma; PAC: pure adenocarcinoma; PSM: propensity-score matching.]
In this study, we sought to investigate the relationship between serum lipid levels and the clinicopathological features or the occurrence of GCNEI. Samples of patients with NEC, GC with varying amounts of NEC components (GC-NEC), or GC-NENM were collected from three centers. The serum lipid levels were analyzed and compared with those of matched PAC patients or a background population selected from 201 PAC patients and 10,061 health-check people by propensity-score matching (PSM). The risk factors for clinicopathological features and the occurrence of GCNEI were also explored. Patient and control case selection Pathological files of patients who underwent radical gastrectomy between 2010 and 2019 in the Second Affiliated Hospital of Zhejiang University School of Medicine, Union Hospital of Fujian Medical University, or the Second Affiliated Hospital of Fujian Medical University were reviewed. Cases were selected according to the following criteria: i. neoadjuvant chemotherapy had not been applied; ii. pathological diagnoses were NEC, NEC with adenocarcinoma components, adenocarcinoma with NEC components, MANEC, or GC with NE differentiation, confirmed by immunohistochemical (IHC) staining for synaptophysin (Syn) and chromogranin A (CgA) [1]; iii. a preoperative lipid profile test was performed and the data were available (Fig. 1). The IHC staining was performed on a Ventana BenchMark XT (Roche Diagnostics, USA), BOND-MAX (Leica Biosystems, USA), or Lab Vision Autostainer 720 (Thermo Scientific, USA) according to the corresponding protocols. The information on primary antibodies is listed in Table S1. A total of 201 patients who underwent radical gastrectomy and were pathologically diagnosed with PAC (negative for NE markers in IHC staining) between 2015 and 2019 in the Second Affiliated Hospital of Zhejiang University School of Medicine were selected as the PAC control group. Another 10,061 people who underwent health checks between 2015 and 2019 in the Center for Health Management, Second Affiliated Hospital of Fujian Medical University were selected as the background-population control group (Fig. 1). Most of these people were employees of local government or companies, and the blood test was part of their annual health check program provided by employers.
This study was approved by the Institutional Review Boards of the Second Affiliated Hospital of Zhejiang University (2020-ERR-031), Union Hospital of Fujian Medical University (2020KY047), and the Second Affiliated Hospital of Fujian Medical University (2020-SAHFMER-228). Patient consent was waived by the institutional review boards, as this study was retrospective and patients' information was protected by a blind method. Data collection and normalization Information on age, sex, body mass index (BMI, weight (kg)/height (m)²) and preoperative distal metastasis was obtained from the Electronic Medical Record System of each center. Clinicopathological features, including tumor location (cardia and fundus, body and angle, or antrum, classified as the upper, middle or lower third of the stomach, respectively), tumor size, histological type, depth of invasion (T), lymphovascular invasion (LVI), node metastasis (LNM) and the results of IHC staining, were obtained from the Electronic Pathological Report System of each center. TG, TCHO, LDL-C, and HDL-C levels were retrieved directly from the Lab Information System of each center. Non-HDL-C was calculated as total cholesterol minus HDL-C, and its thresholds were defined by adding 0.777 mmol/L to the LDL-C thresholds [4]. The raw data of serum lipid levels were normalized with min-max normalization, i.e., x_norm = (x − x_min)/(x_max − x_min) [7], and the reference ranges of serum lipids are shown in Table S2. Propensity-score matching and statistical analysis PSM was applied between each GCNEI subtype and PAC or the health-check people. For the PAC control group, the matching was according to age, sex, and pathological (pTNM) stage with a matching ratio of 1:1; for the background-population control group, the matching was according to age and sex with a matching ratio of 1:10. The matching algorithm was nearest neighbors with a caliper of width equal to 0.05 [17] (a minimal sketch of these preprocessing and matching steps is given below). The distributions of demographic characteristics, BMI, clinicopathological features, and serum lipid levels were compared using the χ² test or Fisher's exact test. One-way Analysis of Variance (ANOVA) was used in the analyses of data following an (approximately) normal distribution, including age, tumor size, and the normalized levels of TCHO, LDL-C, HDL-C and non-HDL-C of GCNEI patients. Non-parametric tests were used in the analyses of data following a non-normal distribution, including TCHO, LDL-C, HDL-C, and non-HDL-C of the background population and all TG data. Risk analysis was performed with binary logistic regression. A P value less than 0.05 was considered statistically significant. Statistical analyses and PSM were performed with SPSS 26.0 (SPSS Inc., Chicago, IL, USA). The baseline characteristics of GCNEI patients A total of 342 GCNEI patients, including 148 GC-NENM, 114 GC-NEC, and 80 NEC patients, were enrolled in this study, of whom 126 were from the Second Affiliated Hospital of Zhejiang University School of Medicine, 195 were from Union Hospital of Fujian Medical University, and 21 were from the Second Affiliated Hospital of Fujian Medical University. The representative histological features of GCNEI are shown in Fig. 2. The GCNEI patients overall were aged 22-84 years, with a mean of 59.98 ± 10.41 years, and were predominantly male. The BMI of GCNEI patients ranged between 16.04 kg/m² and 38.05 kg/m², and 76.6% fell in the normal range (18.5 kg/m²-25.0 kg/m²). Nearly half of the tumors favored the upper third of the stomach, and the mean tumor size was 5.10 ± 2.40 cm.
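To make the preprocessing and matching described above concrete, the following is a minimal sketch in Python covering the min-max normalization, the non-HDL-C derivation, and a 1:k nearest-neighbor caliper match on propensity scores. The variable names and the use of scikit-learn's LogisticRegression as the propensity model are illustrative assumptions; the study itself used SPSS 26.0, so this is a sketch of the technique, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def minmax_normalize(x):
    """Min-max normalization: rescale a lipid variable to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def non_hdl_c(tcho, hdl_c):
    """Non-HDL-C = total cholesterol minus HDL-C (mmol/L)."""
    return np.asarray(tcho, dtype=float) - np.asarray(hdl_c, dtype=float)

def caliper_match(case_X, control_X, ratio=1, caliper=0.05):
    """1:ratio nearest-neighbor matching on the propensity score with a caliper.

    case_X and control_X are covariate matrices (e.g., age, sex, pTNM stage).
    Returns (case_index, [matched_control_indices]) pairs, matching without
    replacement and keeping only fully matched cases.
    """
    X = np.vstack([case_X, control_X])
    y = np.r_[np.ones(len(case_X)), np.zeros(len(control_X))]
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    ps_case, ps_ctrl = ps[:len(case_X)], ps[len(case_X):]
    available = set(range(len(control_X)))
    matches = []
    for i in range(len(case_X)):
        # Controls within the caliper, nearest first.
        candidates = sorted(
            (abs(ps_case[i] - ps_ctrl[j]), j)
            for j in available
            if abs(ps_case[i] - ps_ctrl[j]) <= caliper
        )[:ratio]
        idx = [j for _, j in candidates]
        if len(idx) == ratio:
            matches.append((i, idx))
            available -= set(idx)
    return matches
```

With ratio=1 against the PAC pool (matching on age, sex, and pTNM stage) and ratio=10 against the health-check pool (matching on age and sex), this reproduces the 1:1 and 1:10 designs described above.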
Over 80% of the tumors invaded the muscularis propria and/or deeper layers. LVI and LNM were found in more than 50% and 75% of the cases, respectively, but distal metastasis only occurred in 4.1% of all GCNEI (Table 1). The demographic and clinicopathological features were comparable among the different GCNEI subtypes. Significant differences were only found in terms of tumor location and depth of invasion: over half of GC-NEC and NEC were seen in the upper third of the stomach, while the locational distribution of GC-NENM was relatively even; the proportion of tumors invading the serosa was significantly higher in GC-NEC than in the other subtypes (Table 1). Comparison of serum lipid levels between GCNEI patients and the background population With PSM, the confounding factors of age and sex were adjusted, and there was no statistical difference in these factors between GCNEI subtypes and the background population after matching (Table S3). The serum lipid patterns of GCNEI patients were distinct. Compared with the matched background population, the TCHO and HDL-C levels were significantly lower in all GCNEI subtypes (Fig. 3. A2, A4, B2, B4, C2, and C4), with additionally lower TG (Fig. 3. B1) and higher LDL-C levels (Fig. 3. B3) in GC-NEC. However, there was no statistically significant difference in terms of non-HDL-C between any GCNEI subtype and the matched background population (Fig. 3. A5, B5, and C5). Comparison of serum lipid levels between GCNEI and PAC The confounding factors, including sex, age, and tumor stage, were minimized by PSM. No difference was found between each GCNEI subtype and the matched PAC (Table S4). Compared with PAC, TG, TCHO, and non-HDL-C levels were significantly lower in GC-NENM (Fig. 4. A). GC-NEC possessed the most distinct serum lipid pattern, characterized by elevated LDL-C, HDL-C, and non-HDL-C levels but a reduced TG level (Fig. 4. B). NEC was the subtype with the smallest difference from PAC, as only its LDL-C level was higher (Fig. 4. C). On the whole, all the differences above made for a "colder" lipid pattern in GC-NENM patients (Fig. 4. D1), but much "hotter" counterparts in GC-NEC and NEC patients (Fig. 4. D2 and D3), which indicated that GC-NEC and NEC might be associated with a significantly different lipid microenvironment than that of GC-NENM, and that GC-NEC might have the closest relationship with serum lipids. Comparison of serum lipid levels among GCNEI subtypes According to the reference ranges of the centers from which the samples originated, the distributions of TCHO, TG, and HDL-C levels were comparable among the different GCNEI subtypes: the TCHO and TG levels of most GCNEI patients were normal, and a decreased HDL-C level was seen in 20%-35% of GCNEI patients. Significant differences were found in the distributions of LDL-C and non-HDL-C levels, with more GC-NENM patients having decreased levels than the other two subtypes (Table S5). The TG, TCHO, and HDL-C levels were similar among GCNEI subtypes (Fig. 5, A1, A2, and A4), while the LDL-C and non-HDL-C levels of GC-NEC were significantly higher than those of GC-NENM or NEC (Fig. 5, A3, and A5), which was reflected more prominently by the "lipid shapes" in the radar chart (Fig. 5, A6, LDL-C: * GC-NENM vs GC-NEC, △ NEC vs GC-NEC; non-HDL-C: ○ GC-NENM vs GC-NEC, ◇ NEC vs GC-NEC).
To evaluate whether the differences in serum lipid levels among GCNEI subtypes changed with tumor progression, GCNEI were stratified by pathological stage (due to the small number of cases at stage IV, they were merged with those at stage III), and the statistical differences between GC-NEC and GC-NENM increased with pathological stage. The LDL-C (*) level of GC-NEC was significantly higher through all stages (Fig. 5. B1, B2, and B3), and non-HDL-C (△) or HDL-C (○) showed significantly higher levels in GC-NEC from stage II (Fig. 5. B2 and B3) or at stage III + IV (Fig. 5. B3), respectively. Further, to evaluate the correlation between serum lipids and tumor progression within each GCNEI subtype, patients were stratified by pathological stage, and significant differences were only found in the HDL-C levels of GC-NENM between stage III + IV and stage I (Fig. 5. C1 *) or stage II (Fig. 5. C1 △). The significance of serum lipid levels on the occurrence and progression of GCNEI Logistic regression analysis based on GCNEI patients and the background population showed that serum lipids were independently associated with the occurrence of GCNEI (Table 2 and Table S6). The risk of all GCNEI subtypes was increased by a lower level of TG or HDL-C, and the risk of GC-NEC was also negatively associated with the non-HDL-C level, but positively with the LDL-C level. In addition, age showed a preventive role in the occurrence of all GCNEI subtypes, but the hazard ratios were weak. In the GCNEI cohort, the serum lipid levels showed independent significance for tumor size and tumor stage (Table 3 and Tables S7-S9). A lower TG or HDL-C level increased the risk of large tumor size (>5 cm) in GC-NENM, but only the latter was significant in GC-NEC. A larger tumor size or lower TG level increased the risk of both advanced (pT > 1) and late (III + IV) tumor stages in GC-NENM, but a lower HDL-C level or higher LDL-C level only increased the risk of late tumor stages. Moreover, younger age was also independently associated with late tumor stages. However, serum lipid levels were not associated with tumor size and tumor stage in NEC (Tables S7-S9), nor with LNM (Table S10) and LVI (Table S11) in any GCNEI subtype. Discussion In the present study, the serum lipid levels of GCNEI patients were analyzed and compared with matched PAC patients or a background population selected by PSM. To the best of our knowledge, this is the first study to reveal the significance of serum lipid levels across the whole spectrum of GCNEI subtypes. Generally, the serum lipid patterns of GCNEI differed from those of PAC or the background population, and GC-NEC had the most distinct serum lipid pattern. As a heterogeneous cohort, although all GCNEI tumors express NE markers, their composition is complex. In the 2019 WHO classification, NE differentiation (NED) is defined by the presence of both the morphological and the immunohistochemical phenotype of NE, and it does not change the designation of an adenocarcinoma unless the NE component reaches 30% [1]. However, a growing amount of evidence supports that a <30% NEC component, or a component with only NE immunophenotypes, in GC can contribute to more malignant biological behaviors and worse prognosis [14,23].
[Table legend: GC-NENM: gastric cancer expressing neuroendocrine markers but no neuroendocrine morphology; GC-NEC: mixed carcinoma with adenocarcinoma and neuroendocrine components; TG: triglyceride; HDL-C: high-density lipoprotein cholesterol; LDL-C: low-density lipoprotein cholesterol; BMI: body mass index; HR: hazard ratio; CI: confidence interval.]
To help clarify this controversy, we investigated them from the aspect of patients' serum lipid metabolism and enrolled GC with any amount of NE differentiation (GC-NEC) and GC with NE immunophenotype but no NEM (GC-NENM), as well as MANEC and NEC. The results demonstrated that mixed GCNEI with NEM (GC-NEC) was indeed an entity associated with a serum lipid pattern distinct from that of GC-NENM, especially at higher pTNM stages; however, the serum lipid pattern of GC-NENM still differed from that of PAC by lower TG, TCHO, and non-HDL-C levels. These data support, to some degree, the demarcation in the WHO classification between GCNEI with and without NEM, but GC-NENM was still not the same as PAC from the aspect of serum lipid patterns. The close relationship between neuroendocrine neoplasms (NENs) and serum lipid levels has been reported. Bai et al. [2] found that a higher serum LDL-C level was associated with a better survival rate and median survival time of NENs in the digestive system (G1, G2, G3 NET, and a few MANEC). In Pereira's research [15], gastrointestinal NET patients with a low serum HDL-C level showed significantly higher peritumoral expression of IL-6, which was associated with systemic inflammatory status and tumor progression. Benslama et al. [3] revealed that the serum cholesterol level was a predictor of the response to everolimus in metastatic NET patients, and that the occurrence of hypercholesterolemia was associated with longer progression-free survival, which implied that NETs sensitive to treatment might be related to specific characteristics of lipid metabolism. However, few studies have focused on the whole spectrum of GCNEI, especially mixed ones. The present study showed that TG and HDL-C levels were negatively associated with tumor size and(or) tumor progression of GC-NENM and GC-NEC, whereas a higher LDL-C level could increase the risk of progressing to late tumor stages in GC-NENM patients. These results accorded with Pereira's finding [15] of the protective role of HDL-C but were partially inconsistent with Bai's conclusion [2] that a higher LDL-C level was accompanied by a better prognosis. The discrepancy might be caused by the different cohorts in the studies, as pure NETs (97.6%) made up the majority of Bai's cohort while mixed carcinomas (76.6%) were predominant in our study. Furthermore, Bai's conclusion was based on the overall entity of NEN, while ours was specific to certain subtypes of GCNEI. In this study, LDL-C showed significantly different levels among GCNEI. The LDL-C levels of GCNEI with NEM (GC-NEC and NEC) were higher than those of the matched PAC, which was not the case between GCNEI without NEM (GC-NENM) and its matched PAC; meanwhile, within GCNEI, the LDL-C level was also significantly higher in the overall entity of GCNEI with NEM than in GC-NENM (0.515 vs −0.036, P < 0.001). These differences divided GCNEI into two parts from the aspect of serum lipids and were consistent with their divergent outcomes: the prognosis of GCNEI with NEM was significantly worse than that of those without NEM, but no survival difference existed within the former (GC-NEC vs NEC) [14].
This phenomenon implied a correlation between a high LDL-C level and the worse prognosis of GCNEI with NEM; in our study, LDL-C was also significantly associated with the risk of late stages in GC-NENM, which suggests that a higher LDL-C level might also lead to a worse prognosis within GCNEI without NEM. Linstedt et al. [11] found that Syn-bearing intracellular vesicles were closely related to LDL receptors (LDL-R) in both prostate NEC cells and Syn-transfected cells. In Loeper's study [12], enhanced LDL endocytosis mediated by the increased activity of LDL-R prominently promoted the formation of secretory granules in NE cells. Furthermore, decreased expression of LDL-R could inhibit the cholesterol endocytosis of NET and lead to significant tumor regression [3]. Hence, a chain from LDL-R-mediated cholesterol uptake to cholesterol-related regulated exocytosis [19] might exist in NEC cells and be stimulated by a high LDL-C level, resulting in increased plasma membrane replenishment, cell proliferation, and malignant behaviors. This hypothesis might partially explain the relationship between LDL-C and NE phenotypes or NEC cell activity, but the underlying mechanism of how cholesterol metabolism affects the formation of NEM remains unclear. Serum lipid data of a background population were also collected and compared with the patients in the present study. To minimize the confounding factors which would affect serum lipid levels significantly, PSM was used to select the background population matched to each GCNEI subtype. Compared with them, significantly lower TG, TCHO, and HDL-C levels, and a higher LDL-C level, were observed in different GCNEI subtypes, and TG, HDL-C, and(or) non-HDL-C and LDL-C levels were independently associated with different GCNEI subtypes. In two previous larger-scale studies, a lower HDL-C level [8] and a higher TCHO [16] or TG [10] level were identified as independent risk factors for rectal NET. In Bai's research [2], a lower LDL-C was associated with a mixed entity of NEN of the digestive system. Our finding that a lower HDL-C increased the risk of every subtype of GCNEI was consistent with the aforementioned reports about rectal NET, but the roles of TG and LDL-C were on the opposite side in our GCNEI cohort, which indicated potentially different lipid metabolic manners between NET and NEC. There were several limitations in our study. Firstly, the lipid profile identified as representing a risk for GCNEI is the typical atherogenic profile observed in patients with metabolic syndrome, thus suggesting that this cancer population was enriched in metabolic syndrome/disorders as compared to the background population. However, other metabolic parameters were outside this study's focus. Meanwhile, this lipid profile often indicates a systemic inflammatory status, which has an impact on tissue inflammation. This implies that the lipid profile could be a surrogate marker of an inflammatory condition, which in turn is a recognized carcinogenic factor for several tumors. Secondly, due to the rarity of GCNEI, the sample size of each subtype was still too small, so subgroup analyses were limited. Thirdly, the percentage of NEC components in GC-NEC was not recorded, which hindered the discussion of the relationship between the amount of NEC components and serum lipid levels.
Lastly, as this was a multicenter study, the serum lipid data from different centers were normalized before statistical analyses, so the exact lipid level corresponding to a given risk strength could not be determined. In summary, the present study is the first to report the distinct and heterogeneous serum lipid patterns of the whole spectrum of GCNEI and found associations between serum lipid levels and the clinicopathological features of each GCNEI subtype. However, due to the lack of data, the causality between lipids and GCNEI has not yet been well demonstrated. Further studies are needed to evaluate the direct effect of lipids on GCNEI and to explore the underlying mechanism, which might help develop potential anticancer drugs or therapies targeting metabolism for GCNEI patients. Declaration of Competing Interest The authors have no conflicts of interest to declare.
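The risk analyses above relied on binary logistic regression (run in SPSS 26.0). As a purely illustrative re-creation of that type of analysis in Python, the sketch below fits a logistic model on synthetic data and reports odds ratios with 95% confidence intervals; all variable values and effect sizes are invented placeholders, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Synthetic predictors (illustrative only): normalized lipids, age, sex.
X = np.column_stack([
    rng.normal(0.0, 1.0, n),    # normalized HDL-C
    rng.normal(0.0, 1.0, n),    # normalized TG
    rng.normal(60.0, 10.0, n),  # age in years
    rng.integers(0, 2, n),      # sex (1 = male)
])
# Synthetic outcome: 1 = GCNEI case, 0 = matched background control.
logit = -0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.01 * (X[:, 2] - 60.0) + 0.3 * X[:, 3]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(model.params)  # OR per unit of each predictor
ci = np.exp(model.conf_int())       # rows: [lower, upper] per term
for name, o, (lo, hi) in zip(["const", "HDL-C", "TG", "age", "sex"],
                             odds_ratios, ci):
    print(f"{name:6s} OR = {o:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An odds ratio below 1 for HDL-C or TG here mirrors the direction of the study's finding that lower levels of these lipids increased GCNEI risk.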
Limited Neutralization of Omicron by Antibodies from the BNT162b2 Vaccination against SARS-CoV-2 Since early December 2021, the omicron variant has posed additional challenges to the worldwide management of the SARS-CoV-2 pandemic. Immune evasion is a key factor in its increased transmissibility. While serological studies have measured levels of neutralizing antibodies in response to vaccines, our understanding of the humoral immune response to omicron on a single-antibody level is limited. Here, we characterize a set of BNT162b2 vaccine-derived antibodies for neutralization of omicron pseudovirus. We show that approximately 50% of neutralizing anti-RBD antibodies cross-neutralize omicron, albeit with lower potency than against the original Wuhan-Hu1 strain. All investigated neutralizing anti-S2 antibodies cross-neutralize omicron; however, all of them are less potent than anti-RBD antibodies. While additional booster immunizations with the current vaccine generate increased antibody levels and better protection, we anticipate that the second generation of vaccines will yield more high-affinity antibodies against omicron. Introduction Within less than two months, the B.1.1.529 variant (omicron) of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) replaced B.1.617.2 (delta) as the most dominant strain worldwide. With 37 amino acid substitutions in the Spike protein (S), it marks a major antigenic shift from the original Wuhan-Hu1 and the delta sequence. Immune evasion is a major contributing factor to omicron's improved transmissibility 1-3. To date, all approved vaccines against SARS-CoV-2 are based on the Wuhan-Hu1 sequence of the S protein. Serologic studies have shown that convalescent patient plasma and plasma from individuals vaccinated with BNT162b2 neutralize omicron 44-fold and 12-fold less effectively than delta 2. A third immunization with BNT162b2 (booster) increases neutralization efficacy by 10- to 100-fold 2,4. Accordingly, the booster prevents 88% of omicron-related hospitalizations, as opposed to only 52% at 25+ weeks post second vaccination 5,6. This increased efficacy should be attributed to higher anti-S antibody levels with increased affinity to Wuhan-Hu1, and to omicron only by proxy. While sera and plasma neutralization levels have been established, the anti-omicron response to BNT162b2 is not well understood on a single-antibody level. We recently investigated the acute B cell response to the BNT162b2 vaccine on a single-cell level and discerned the development of anti-receptor binding domain (RBD) and anti-S2 antibodies from naïve and memory B cells, respectively 7. The RBD is located on the S1 domain of the S protein and interacts with human angiotensin-converting enzyme 2 (ACE2) to initiate viral cell entry, whereas the S2 domain facilitates viral cell membrane fusion 8,9. Most neutralizing antibodies found in COVID-19 patients target the RBD 10. Neutralizing antibodies against the S2 subunit have been described, but they generally neutralize SARS-CoV-2 less efficiently than anti-RBD mAbs 11,12. However, we have shown that anti-S2 mAbs develop early in response to vaccination and are cross-reactive to other betacoronaviruses due to the higher structural conservation of the S2 subunit over S1 7,13,14. Anti-S2 antibodies could therefore be crucial for protection against novel variants.
We previously expressed and tested 50 vaccine-derived monoclonal antibodies (mAbs) against RBD/S1 and S2, and identified 15 anti-RBD mAbs that neutralized Wuhan-Hu1 and delta 7. Here, we added an additional 55 mAbs from the same sequence dataset and investigated their binding to RBD and S2 as well as their neutralization potency against Wuhan-Hu1, delta, and omicron. Discussion We investigated vaccine-derived antibody neutralization of omicron on a single-antibody level and showed that approximately 50% of anti-RBD antibodies neutralize omicron, albeit with a 19-fold lower potency. Strikingly, the most potent neutralizing antibodies against omicron had a 46-fold decreased potency compared to the best neutralizing antibodies against Wuhan-Hu1. As highly potent neutralizing antibodies are important for immune protection, this difference is substantial, and while additional booster immunizations with the original BNT162b2 vaccine will increase antibody levels, they are unlikely to generate larger amounts of high-affinity anti-omicron-RBD antibodies. For anti-S2 mAbs, the rate of cross-neutralization against omicron is higher (7 out of 7 mAbs in our study). Anti-S2 antibodies stem from a recall response of memory B cells against prior infections with heterologous betacoronaviruses 7,13,14. Their reactivity is broader and less susceptible to immune escape by novel variants. The evolution of omicron has likely been driven by anti-RBD antibodies rather than anti-S2 antibodies (mutation rate RBD: 7.77/100 amino acids, S2: 1.02/100 amino acids). However, the overall neutralization potency of anti-S2 antibodies is 16.9-fold lower than that of anti-RBD antibodies, and anti-S2 antibodies therefore likely contribute limited protection. Vaccine strategies that generate broadly neutralizing anti-S2 antibodies and therapeutic strategies with high-potency anti-S2 monoclonal antibodies 14 have promise to work effectively against novel variants. In conclusion, our neutralization data on vaccine-derived monoclonal antibodies are in line with serological studies, as we show antibodies with decreased neutralization potency against omicron. Vaccine updates will likely achieve high-potency anti-omicron antibody responses. Materials And Methods Recombinant expression and purification of monoclonal antibodies (mAbs). All heavy- and light-chain (HC and LC) sequences were obtained by single-cell repertoire sequencing from individuals on days 7, 21, and 28 post initial BNT162b2 vaccination (second immunization on day 21) 7. Codon-optimized HC and LC variable sequences were cloned into in-house vectors containing human IgG1 and κ or λ constant regions, respectively. HC and LC plasmids were transfected into Expi293F cells using FectoPRO (Polyplus transfection). Cell supernatants were collected after 7 days and purified with AmMag Protein A beads (GenScript). mAb concentrations were measured using a NanoDrop spectrophotometer (Thermo Fisher Scientific) and human IgG quantitation ELISAs (Bethyl Laboratories). The spike expression plasmid was amplified in E. coli and checked for correct insertion and sequence integrity by Sanger sequencing. Pseudotyped lentiviral particles were generated as previously described 7,15. Briefly, LentiX 293T cells in 10-cm tissue culture dishes were transfected 24 h post-seeding using FuGENE transfection reagent (Promega) with pHAGE-CMV-Luc2-IRES-ZsGreen-W, lentiviral packaging plasmids (HDM-Hgpm2, HDM-tat1b and pRC-CMV-Rev1b) and wild-type or variant SARS-CoV-2 spike plasmids.
48-60 h post transfection, viral supernatants were collected and spun at 500 × g for 10 min. Lentiviral supernatants were concentrated with Lenti-X Concentrator (Takara) according to the manufacturer's instructions. Pellets were resuspended in EMEM at approximately 1/100 of the initial amount of media and stored at −80 °C until use. Virus was titrated on HeLa-ACE2 cells, provided by Dennis Burton at the Scripps Research Institute. Neutralization assays were performed as previously described 7,15. Briefly, HeLa-ACE2 cells were seeded at 12,500 cells/well in flat-bottom 96-well plates 20 h before viral transduction. mAbs were prepared in EMEM in eight five-fold serial dilutions starting at 50 or 10 µg/ml, incubated with SARS-CoV-2 pseudotyped virus for 1 h at RT, and then added to HeLa-ACE2 cells in the presence of 5 µg/ml polybrene (Sigma Millipore). After 48 h, luciferase activity was measured using the Britelite plus Reporter Gene Assay System (PerkinElmer) and read on a GloMax Explorer Microplate Reader (Promega). Neutralization assays were performed 1-3 times in at least triplicate for each dilution. mAbs were considered neutralizing only if a considerable decrease in luminescence was measured at concentrations < 10 µg/ml (Supplementary Figs. 1-2). Human subjects. No human subjects were included in this study. Antibody sequences were derived from our prior study 7.
[Figure legend fragment: values of anti-RBD mAbs against Wuhan-Hu1, delta, and omicron. Medians ± interquartile ranges are shown; ns, not significant, unpaired two-tailed Kruskal-Wallis test.]
Supplementary Files: SupplementaryInformationV1.docx
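The assay above produces luminescence readouts across eight five-fold dilutions. A common way to summarize such data, and the one sketched below, is to convert raw luminescence to percent neutralization using virus-only and cells-only plate controls, then fit a four-parameter logistic (4PL) curve to estimate the half-maximal inhibitory concentration (IC50). The text above does not state the exact fitting procedure, so the control layout, the 4PL model, and all numeric values here are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def percent_neutralization(rlu, virus_only, cells_only):
    """Convert raw luminescence (RLU) to % neutralization via plate controls."""
    return 100.0 * (virus_only - rlu) / (virus_only - cells_only)

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (rises with concentration)."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

def fit_ic50(conc, neut):
    """Fit the 4PL curve and return the estimated IC50 (same units as conc)."""
    p0 = [0.0, 100.0, np.median(conc), 1.0]
    bounds = ([-20.0, 50.0, conc.min() / 10, 0.1],
              [20.0, 120.0, conc.max() * 10, 10.0])
    popt, _ = curve_fit(four_pl, conc, neut, p0=p0, bounds=bounds, maxfev=10000)
    return popt[2]

# Demonstration: eight five-fold dilutions starting at 10 µg/ml, as above,
# with synthetic luminescence values standing in for plate measurements.
conc = 10.0 / 5.0 ** np.arange(8)
virus_only, cells_only = 1_000_000.0, 5_000.0
true_neut = four_pl(conc, 0.0, 100.0, 0.05, 1.2)
rlu = virus_only - true_neut / 100.0 * (virus_only - cells_only)
neut = percent_neutralization(rlu, virus_only, cells_only)
print(f"IC50 = {fit_ic50(conc, neut):.3f} µg/ml")  # recovers ~0.05
```

The per-dilution replicates mentioned above would simply be averaged (or fit jointly) before this step, and the < 10 µg/ml neutralization criterion can then be applied to the fitted curve.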
COVID-19 Outbreak: The North versus South Epidemiologic Italian Paradigm Since the COVID-19 outbreak spread from Wuhan (China) worldwide, many countries have been dealing with the impact of this pandemic on different aspects of their lives: sanitary, sociocultural and economic. Italy represents a paradigm of the different effects of the pandemic on citizens' health. In fact, North Italy showed high rates of transmission [the mean national transmission index (Rt) referred to the February-April 2020 period was always >1 in North Italy regions, while <1 in South Italy] and mortality (5.4% of positive cases in Lombardia, North Italy, vs 1.3% in Campania, South Italy) due to severe pneumonitis, while South Italy displayed a very low epidemic curve, suggesting that contagiousness and/or virulence could be lower than in North Italy [1-4]. Furthermore, both in the first phase of the COVID-19-related crisis (February-April 2020) and in the second one, which started in September 2020, the COVID-19-specific death ratio North:South was 5:1, with a standardized mortality ratio from 3 to 7.5 in North versus 0.012 to 0.5 in South Italy [1-4]. None of the factors suggested initially accounted for the significant difference between North and South Italy. Some comorbidities such as diabetes, hypertension and Chronic Obstructive Pulmonary Disease have the same prevalence in North and South Italy [5-8], even if diabetes is more common in the regions of the South. Overall smoking rates are similar across Italy; however, some gender differences are observed, such as higher rates of smoking women in the North. However, most of the dead in North Italy are males, suggesting that biological and/or immunological gender-related factors might be involved in determining pneumonitis severity. Conversely, there is a significant prevalence of overweight and obese people in South Italy, across all age groups. If anything, these last data would predict a stronger negative impact of the virus on the health of South Italy's people. Furthermore, aged people (>65 years old), among whom most deaths are concentrated, are also equally distributed in Italy considering the absolute numbers and the regions' surface areas [2-4]. Contacts with Chinese entrepreneurship are also neither a discriminating nor a detrimental factor, since relationships were intense in both North and South Italy. Very recently, some Italian researchers [9] indicated pollution as a possible major determinant of both the contagiousness and the severity of COVID-19 in North Italy.
In fact, a large part of North Italy is constituted by a flat land called "Pianura Padana", where the most important industries and cities of Italy are concentrated. High pollutant concentrations and microclimatic conditions (wet and cold air, fog formation, scarce wind remodeling) favor the well-known phenomenon of "thermal inversion": a large mass of cold air in contact with the ground is trapped under a layer of warmer air. The density of these masses is so different that mixing is impossible in the absence of significant rain or wind. Unfortunately, the pollutants are trapped, concentrate, and further increase the density of the inferior layer [10-12]. This vicious circle produces a "pollution beret", visible from space through ordinary satellite photos as well as through technical assessment of specific gases' concentrations (whose explanation is beyond the scope of this letter) (Figure 1). Most air pollutants (e.g., carbon monoxide, sulfur dioxide, nitrogen dioxide, ozone, polycyclic aromatic hydrocarbons, creosote, particulate, etc.) interact to form stable complex macromolecular "rafts". These "rafts", which participate in the composition of Particulate Matter (PM), have frequently been associated with virus-related syndromes [13,14]. In fact, viruses can interact with these particles and be contagious at unexpected distances [15,16]. A profound difference between North and South Italy in the distribution of microplastic waste has also been demonstrated. In fact, a recent study demonstrated that the accumulation of microplastics among driftlines showed no consistent pattern, besides expanded polystyrene tending to accumulate on the backshore of the Po River Delta in northeast Italy. The accumulation hotspots within a single driftline can disrupt a generally observed accumulation pattern [17]. In support of this model, a survey is presented in Table 1 (higher numbers of RNA copies were associated with larger particles). SARS-CoV-2, the virus causing COVID-19, is an enveloped, single-stranded ribonucleic acid virus with 9-12 nm-long spikes surrounding the surface, conferring it the form of a solar corona under the electron microscope [18]. The spike glycoprotein S binds to the Angiotensin-converting Enzyme 2 (ACE2) receptor on host cells, triggering the subsequent fusion between the viral envelope and the cellular membrane. ACE2, mainly expressed in the lungs, vasculature and intestine, is an enzyme of the Renin-Angiotensin System [19,20]. The main enzymatic pathway involved in the catabolism of angiotensin peptides can be briefly summarized as follows: renin, secreted by juxtaglomerular kidney cells, cleaves angiotensinogen into angiotensin I (a decapeptide with no direct biological activity), which is, in turn, cleaved by ACE into angiotensin II (Ang II), which induces vasoconstriction. ACE2 converts Ang II to Ang-(1-7), a vasodilator, thus counteracting the activity of ACE. Interestingly, chronic inflammation is associated with increased expression of ACE2 [21] and impairment of T lymphocyte functions [22-24]. As already explained, ACE2 is a critical factor for virus pathogenesis. Thus, pollutants and microclimate may concur (1) to favor virus "transport" into the lungs and (2) to promote cells' infection by increasing the inflammatory status of the lungs (and thus increasing ACE2 expression) and producing an immune-depressive contexture (Figure 2). Beside the environmental reasons for the different severity of the syndromes associated with COVID-19 infection, genetic determinants can contribute to this different clinical outcome.
In fact, a strong correlation has recently been described between the interstitial pneumonitis induced by treatment with Immune Checkpoint Inhibitors (ICIs) in cancer patients (highly resembling the COVID-19-induced pneumonitis) and the germline expression of HLA-B*35 and DRB1*11 alleles associated with autoimmune diseases [25]. The expression of some HLA alleles was also correlated with the response to ICIs [26]. Moreover, a set of HLA alleles (A, B, C), known to be involved in the immune response against infections, correlates with COVID-19 incidence in Italy. COVID-19 data were provided by the National Civil Protection Department, whereas HLA allele prevalence was retrieved through the Italian Bone-Marrow Donors Registry. Among all the alleles, HLA-A*25, B*08, B*44, B*15:01, B*51, C*01, and C*03 showed a positive log-linear correlation with the COVID-19 incidence rate fixed on 9 April 2020, in proximity to the national outbreak peak (Pearson's coefficients between 0.50 and 0.70, p < 0.0001), whereas HLA-B*14, B*18, and B*49 showed an inverse log-linear correlation. When the alleles were examined simultaneously using a multiple regression model to control for confounding factors, HLA-B*44 and C*01 were still positively and independently associated with COVID-19. Interestingly, these alleles were more prevalent in North Italy, where the incidence of COVID-19-related pneumonitis was higher [27]. It cannot be excluded that epigenetic markers (including different methylation patterns of gene expression influenced by different dietary habits, or noncoding RNAs) may also have a role in this phenomenon. We believe that the role of pollution and of epigenetic and genetic factors should be further investigated, and future interventions should be taken to prevent and/or reduce the negative impact of pulmonary-tropism pandemics.
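To illustrate the kind of log-linear correlation analysis summarized above (HLA allele prevalence versus regional COVID-19 incidence), the sketch below computes a Pearson coefficient between allele frequency and log-transformed incidence. The per-region numbers are invented placeholders, not the data behind [27]; the log transform is what makes the correlation "log-linear".

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-region data (illustrative only, NOT the values from [27]):
# frequency (%) of a given HLA allele and COVID-19 incidence
# (cases per 100,000 inhabitants) across eight regions.
allele_freq = np.array([6.1, 5.8, 5.2, 4.9, 4.1, 3.6, 3.0, 2.7])
incidence = np.array([410.0, 350.0, 260.0, 240.0, 130.0, 95.0, 60.0, 45.0])

# Log-linear model: allele frequency vs log(incidence).
r, p = pearsonr(allele_freq, np.log(incidence))
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```

In the cited study, the alleles were then entered jointly into a multiple regression to control for confounders; an ordinary least squares fit of log incidence on all allele frequencies (e.g., with statsmodels) would be the analogous step here.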
Complete genome analysis of hepatitis B virus in Qinghai-Tibet plateau: the geographical distribution, genetic diversity, and co-existence of HBsAg and anti-HBs antibodies
The genetic variation and origin of Hepatitis B Virus (HBV) in the Qinghai-Tibet Plateau have been poorly studied. The coexistence of HBsAg and anti-HBs has been described as a puzzle and has never been reported in the indigenous population or in recombinant HBV sequences. This study aimed to report the geographical distribution, genetic variability and seroepidemiology of HBV in southwest China. During 2014-2017, 1263 HBsAg-positive serum samples were identified and 183 complete genome sequences were obtained. Serum samples were collected from community-based populations by a multistage random sampling method. Polymerase chain reaction (PCR) was used to amplify the HBV complete genome sequences. Then recombination, genetic variability and serological analyses were performed. (1) Of the 1263 HBsAg-positive serum samples, there were significant differences between the distributions of seromarkers in Tibet and Qinghai. (2) Of the 183 complete genome sequences, there were 130 HBV/CD1 (71.0%), 49 HBV/CD2 (26.8%) and four HBV/C2 isolates (2.2%). Serotype ayw2 (96.1%) was the main serological subtype. (3) Several nucleotide mutations were dramatically different in CD1 and CD2 sequences. Clinical prognosis-related genetic variations such as the nucleotide mutations T1762/A1764 (27.93%), A2189C (12.85%), G1613A (8.94%), T1753C (8.38%), T53C (4.47%), T3098C (1.68%) and PreS deletion (2.23%) were detected in CD recombinants. (4) From inland China to the northeast boundary of India, different geographical distributions of CD1 and CD2 were identified. (5) Twenty-seven (2.14%) HBsAg/HBsAb-coexistent serum samples were identified; S protein amino acid mutations and PreS deletions showed significant differences between the HBsAg/HBsAb coexistence group and the control group. HBV/CD may have a mixed China and South Asia origin. Based on the genetic variations, the clinical prognosis of the CD recombinant seems more temperate than that of genotype C strains in China. The HBsAg/HBsAb coexistence is a result of both PreS deletion and aa variation in the S protein. Several unique mutations were frequently detected in HBV/CD isolates, which could potentially influence the clinical prognosis.
Keywords: Hepatitis B virus, Mutation/mutation rate, Recombination, Hepatitis B surface antigen, Antibody to HBsAg
Background
Hepatitis B Virus (HBV) is considered a major global health problem, with more than 250 million chronic HBV (CHB) carriers and more than one million HBV-associated human deaths per year [1]. In the nationwide investigation in 2006, hepatitis B surface antigen (HBsAg) was identified in 7.2% of the whole population of China [2]. In the Qinghai-Tibet Plateau, due to the high-altitude environment, religious issues, delayed vaccine inoculation or other unknown reasons, the HBV prevalence is over 10% according to our recent study (unpublished), which is much higher than in other areas of China.
These carriers of HBV are at increased risk of developing liver cirrhosis (LC) and hepatocellular carcinoma (HCC) [3]. The Qinghai-Tibet Plateau covers more than 2.5 million square kilometers; it is the second-largest plateau in the world and connects South Asia and Northeast Asia [4]. In this area, a special recombinant of HBV genotype C and genotype D (HBV/CD) was reported [4]. However, the nature and origin of the recombinant are still poorly studied. At the same time, the coexistence of HBsAg and anti-HBs (HBsAb) in CHB patients has been occasionally reported [5,6]. Due to the low incidence of HBsAg/HBsAb coexistence in HBV carriers, the sample sizes in many previous studies [5,7] were small (fewer than 20) and lacked adequate control subjects [8]. The mechanism underlying the emergence of anti-HBs in CHB patients remains unclear, and related information has not been reported in the indigenous population or in recombinant HBV sequences. To gain deeper insights into the HBV genomic diversity of the special recombinants, 1263 HBsAg-positive serum samples were obtained from eight areas of the Qinghai-Tibet Plateau, and 183 complete genome sequences of HBV isolates were subjected to further analysis. Twenty-seven HBsAg/HBsAb-coexistent serum samples were identified and studied with respect to genome variation.
Methods
Procedures to detect point mutations and recombination of HBV by PCR and direct sequencing analysis
Serum samples from the Qinghai-Tibet plateau → detection of HBV infection serological markers → DNA purification → HBV complete genome amplification → PCR-DNA purification → automatic sequencing → sequence assembly → comparison with reference sequences.
Subject sample collection
During 2014-2017, subjects were recruited from community-based populations of eight regions of the Qinghai-Tibet Plateau, selected on the basis of population density and covering most of the indigenous habitations: Hainan, Lhasa, Shannan, Nyingchi, Ali, Nakqu, Chamdo and Rikaze. In this study, a multistage random sampling method was used to ensure the representativeness of the sample for the whole area. Firstly, two or three counties were selected at random from each of the eight areas. Secondly, two villages were selected from every county. Thirdly, individuals aged 18-59 years were selected from every village. Basic information was recorded on a questionnaire prepared beforehand, including name, gender, birth date, address and medical information, and 5 mL of venous blood was taken from each participant. HBV infection markers, including hepatitis B surface antigen (HBsAg), anti-hepatitis B surface antibody (HBsAb), anti-hepatitis B core antibody (HBcAb), hepatitis B e antigen (HBeAg) and anti-hepatitis B e antibody (HBeAb), were detected by chemiluminescent assays (AXSYM; Abbott Laboratories, North Chicago, IL, USA).
HBV DNA extraction, whole-genome amplification, and sequencing
HBV DNA was extracted from 200 μL serum using the QIAamp DNA Blood Mini Kit (Qiagen, Hilden, Germany). Full-length HBV DNA (about 3.2 kb) was amplified by nested PCR, performed in seven fragments. Based on previously reported methods [9] used in our preliminary study [10], the primers and thermal profiles were optimized in this study to adapt to HBV/CD isolates and identify more complete genomes. The basal core promoter (BCP) region was generated by two rounds of PCR. The first round was conducted using the primer combination of BcpF1 and BcpR1 in a 25 μL reaction volume containing 5 μL extracted DNA and 12.5 μL premix Taq polymerase.
The whole genome (without the BCP region), consisting of six fragments, was also generated by two rounds of PCR. The first round was conducted using the primer combination of HBV1799FLong and HBV1801RLong in a 50 μL reaction volume containing 15 μL extracted DNA and 25 μL premix Taq polymerase. All the primers and thermal profiles are listed in Table 1.
Table 1. List of primers used to amplify the different regions of the HBV genome and their respective thermal profiles. *Numbers within primer names represent the primer positions; an F after the primer position stands for sense primers, while R stands for anti-sense primers. **SP6 and T7 are tag sequences attached at the 5′ end of the PCR primers used in this study, except the four Bcp primers; SP6 and T7 primers were used to sequence PCR fragments.
After purification of the PCR products with a QIA Gel Extraction Kit (Qiagen, Valencia, CA), the sequences were determined using the Sanger dideoxy terminator sequencing method on an ABI 3700 DNA sequencer (PE Applied Biosystems). The sequences were then assembled using SeqMan II software (DNAStar Inc.), and the correct nucleotide positions of the complete HBV sequences were established through alignment with the reference sequence in the GenBank database. The whole-genome nucleotide sequences reported in this article have been deposited in the National Center for Biotechnology Information GenBank database under accession numbers MN683570-MN683729, MN657315-MN657318 and KX660674-KX660690.
Recombination and serological analysis
Genotypes and recombination in the HBV genome sequences were investigated using SimPlot v3.5.1 software and the JPHMM (jumping profile Hidden Markov Model) method [11]. MEGA 7.0 software was used to calculate genetic distances and pairwise distance comparisons. The HBsAg serotypes were deduced from the sequences of the S gene region by identifying the amino acids at positions 122 (Lys-Arg for the d-y determinants), 160 (Lys-Arg for the w-r determinants) and 127 (Pro-Thr-Leu/Ile for w2-w3-w4), and, in the case of Arg122 Pro127 Lys160, also at positions 159 (Ala versus not Ala for ayw1 versus ayw2/ayw4) and 140 (not Ser versus Ser for ayw2 versus ayw4) [12].
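The serotype assignment just described is a small decision rule over a handful of S-protein positions. A minimal sketch in Python follows (positions are 1-based as in the text; the function name and the synthetic test sequence are illustrative, not from the paper):

# Minimal sketch of the HBsAg serotype rules quoted above: d/y from aa 122,
# w/r from aa 160, w2/w3/w4 from aa 127, with aa 159 and aa 140 used to
# resolve ayw1/ayw2/ayw4 in the Arg122 Pro127 Lys160 case.
def hbsag_serotype(s_protein: str) -> str:
    aa = lambda pos: s_protein[pos - 1]  # 1-based positions, as in the text

    d_or_y = {"K": "d", "R": "y"}.get(aa(122), "?")
    w_or_r = {"K": "w", "R": "r"}.get(aa(160), "?")
    if w_or_r == "r":
        return f"a{d_or_y}r"

    # w subdeterminant from position 127
    sub = {"P": "2", "T": "3", "L": "4", "I": "4"}.get(aa(127), "?")

    # Special case Arg122 Pro127 Lys160: refine with aa 159 and aa 140
    if d_or_y == "y" and aa(127) == "P" and w_or_r == "w":
        if aa(159) == "A":
            return "ayw1"
        return "ayw2" if aa(140) != "S" else "ayw4"
    return f"a{d_or_y}w{sub}"

# Synthetic 226-aa S protein carrying Arg122/Pro127/Lys160 -> "ayw2"
dummy = list("X" * 226)
dummy[121], dummy[126], dummy[159] = "R", "P", "K"   # aa 122, 127, 160
dummy[158], dummy[139] = "G", "G"                    # aa 159, 140
print(hbsag_serotype("".join(dummy)))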
Mutation analysis
Generally, the mutation definition and analysis were performed as previously described [13]. The whole genome, including the S gene, the PreC/C gene, the reverse transcriptase (RT) region, the X gene and the basal core promoter (BCP), was subjected to nucleotide mutation analysis. Based on the structure of the HBV/CD recombinants, genotype C and genotype D reference sequences were used as parental sequences for mutation detection [14,15]. The nucleotide sequences of the datasets in this study and the reference HBV sequences were analyzed using the MEGA 7.0 software and the Mutation Reporter Tool [16]. The nucleotide PreS/HBsAg sequences obtained were translated into amino acid sequences, aligned and compared with reference sequences. Amino acid variability was defined as the frequency of residue substitutions at each position [17]. For analysis, HBsAg was divided into subregions corresponding to structural and/or functional domains: the N-terminal region (aa 1 to 99), the major hydrophilic region (MHR, aa 100 to 169) and the C-terminal region (aa 170 to 226). The "a" determinant in the MHR (aa 124 to 147), the first loop of the "a" determinant (aa 124 to 137) and the second loop of the "a" determinant (aa 139 to 147) were also analyzed respectively.
Statistical analysis
Statistical analyses were performed using SPSS 22.0 (IBM Corp., Armonk, NY, USA). The chi-squared test and the two-tailed Student's t-test were used for analysis, as appropriate. A P-value of ≤ 0.05 was considered statistically significant.
Ethical approval
The study protocol conformed to the ethical guidelines of the 1975 Declaration of Helsinki and was approved by the Ethics Committee of the Chinese Center for Disease Control and Prevention. The purpose of the study and the right to information were explained to the participants by research staff. Written informed consent was obtained from each participant before the interview and venous blood collection.
Patient characteristics
During 2014-2017, 1263 HBsAg-positive serum samples were identified from the community population of the Qinghai-Tibet plateau. There were no differences in the age (29.29 ± 17.93 vs 30.56 ± 18.98, P = 0.211) or gender (449/404 vs 201/210, P = 0.206) distributions between the study populations in Tibet and Qinghai. Of all 1263 positive serum samples, twenty-seven (2.14%) showed HBsAg/HBsAb coexistence. There were significant differences between the distributions of seromarkers in Tibet and Qinghai. Details of the serological information are shown in Supplement Table 1. One hundred and eighty-three samples were selected from the 1263 HBsAg-positive serum samples, and full-length sequences were then amplified for analysis of the HBV genome. A total of 87 samples were HBeAg-negative and 96 samples were HBeAg-positive. Most isolates had a genome size of 3215 nt, except for eight sequences with evidence of deletion. Basic information on the HBV/CD complete genome sequences is listed in Table 2.
Recombination and serological results
Two types of HBV/CD recombinants were detected in this study. In HBV/CD1, a region of around nt 10-800 from genotype D is integrated into genotype C to form the recombinant viral strain; in HBV/CD2, a region of around nt 10-1500 from genotype D is integrated into genotype C. Graphical displays of the JPHMM and Simplot recombination analysis results are shown in Fig. 1. Of all 183 HBV complete sequences, there were 130 (71.0%) HBV/CD1 isolates, 49 (26.8%) HBV/CD2 isolates and four (2.2%) HBV/C2 isolates. As the estimates of evolutionary divergence over the sequences showed, the genotype D fragment of HBV/CD is closest to HBV subgenotype D4, while the genotype C fragment of HBV/CD is closest to subgenotype C2 (Supplement Tables 2 and 3).
Geographical distribution of HBV subgenotypes in the plateau
HBV/CD1 recombinant sequences were predominantly identified in all the geographic regions analyzed in this study. However, most of the HBV/CD2 isolates (93.9%, 46/49) were identified in two regions (Shannan & Rikaze), significantly more than in the other six regions of the plateau (P < 0.001). The four genotype C isolates were all identified in the eastern part of the plateau. The distribution of HBV subgenotypes in the plateau is shown in Fig. 2 and Supplement Table 4.
The results of the nucleotide mutation analysis are summarized in Table 3 and Fig. 3. Compared to the reference sequences of genotype D and genotype C, several nucleotide (amino acid) positions were changed in nearly all the HBV/CD1 and HBV/CD2 sequences, such as A942T (aaL613QH for HBV/CD1 and aaH613K for HBV/CD2), T1485A and T3210A (aaS272TN) in the P gene, and T1485C (aaS38P) in the X gene.
Amino acid substitution in the PreS/S region
One hundred and seventy-nine HBV CD recombinants with complete genome sequences were analyzed for amino acid substitutions in the PreS/S region. The amino acid substitutions of the 27 HBsAg+/HBsAb+ strains (Group I) were compared with those of the 152 HBsAg+/HBsAb- strains (Group II).
The distribution of the different recombination types (HBV/CD1 and HBV/CD2, P = 0.677) and HBeAg status (P = 0.213) showed no significant difference between Group I and Group II. The DNA levels of the 179 serum samples were all above 5 log10 and showed no significant difference between Group I and Group II (P = 0.355). Significant aa substitution diversity was observed within the S gene of HBV between Group I and Group II (1.03 vs. 0.39 substitutions per 100 aa, the same below; P < 0.001). Moreover, the aa variabilities in the MHR (P < 0.001), the "a" determinant (P < 0.001), and the first loop (P < 0.001) and second loop (P < 0.001) of the "a" determinant were all higher in Group I than in Group II. The frequency of PreS deletion was 2.23% (4/179), also with significant differences between the two groups. Details are listed in Table 4 and Fig. 4.
Discussion
HBV genotypes are related to the severity of liver disease and the response to clinical therapy [18]. Compared to other genotypes, HBV genotype C and genotype D carry a higher lifetime risk of liver cirrhosis and hepatocellular carcinoma development [19]. It is believed that recombination can influence clinically important properties more dramatically than the steady accumulation of natural mutations, which suggests the potential pathogenic significance of the HBV/CD recombinants [20]. As far as we know, no detailed molecular epidemiology or genetic variability study has been carried out based on a large number of HBV/CD recombinant complete genome sequences. In this study, the HBV/CD recombinant was the main genotype (179/183, 97.81%) isolated in the plateau. This result differs from recent, smaller-sample reports from Tibet in which genotype C and genotype D were the most dominant HBV genotypes [21,22]. The HBV genotypes show a distinct geographical distribution all over the world [19]. In both the HBV CD1 and CD2 recombinant genomes, the 'C fragment' and 'D fragment' were genetically close to subgenotypes C2 and D4, respectively. HBV/D4 isolates have mainly been reported in South America and the Pacific islands [14]. The only report of HBV/D4 in Asia was in northeast India [23]. Subgenotype C2, specifically, is the main genotype in central and north China [24], so the Qinghai-Tibet Plateau lies at the presumed geographical junction of the distributions of these two HBV subgenotypes (Fig. 2). In this study, recombinants CD1 and CD2 had significantly different geographic distributions in the plateau. This could also partly explain the different distributions of seromarkers in Tibet and Qinghai. HBV/CD2 isolates were mainly identified in two regions (Shannan & Rikaze) which are close to the boundary of northeast India (Fig. 2). The genotype D fragment lies at around nt 10-800 in HBV/CD1 and around nt 10-1500 in HBV/CD2 (Fig. 1); thus, both CD1 and CD2 share the same initial recombination site at nt 10. Based on the facts mentioned above, it can be inferred that these two types of recombinants might have a mixed China (subgenotype C2) and Indian (subgenotype D4) origin. Although many HBV genotype D or genotype C isolates have been reported in India [23,25-27], no HBV/CD recombinant strain has been reported in South Asia until now. In fact, nearly all the HBV/CD recombinants in this study and in previous studies are scattered among indigenous populations of high-altitude areas, such as the Qinghai-Tibet Plateau, the Yunnan-Kweichow Plateau [28], the Loess Plateau [29] and the Mongolia Plateau (Accession Number: AB270534, AB270534).
A possible reason for this phenomenon might be the genetic background or migration history of the highlanders. As the diagnostic information in these rural areas is still not detailed enough to analyze the clinical features of these HBV recombinants, the complete genome features in this study offer a way to estimate the clinical prognosis of these HBV strains in the indigenous population. Serotype ayw2 was the predominant serotype of HBV/CD in this study, the same as reported for genotype D [30]. This indicates that the main serological character did not change dramatically after the recombination. Previous studies reported that the A1762T/G1764A double mutation in the BCP (nt 1742-1849) was the strongest viral factor associated with the development of liver disease [20,31]. Other mutations, such as T53C, G1613A, C1653T, T1753C, A2189C, T3098C and PreS deletions, have also been reported to be associated with clinical progression [32-38]. In this study, the A1762T/G1764A double mutations were observed in 27.93% of the HBV CD recombinant sequences. This frequency is lower than in previous reports of genotype C in CHB carriers [39], which indicates a lower risk of HCC in the population. The same result was also found for the frequencies of the mutations A2189C (12.85%), G1613A (8.94%), T1753C (8.38%), C1653T (5.59%), T53C (4.47%), T3098C (1.68%) and PreS deletion (2.23%, 4/179), which were also lower than the mutation frequencies of genotype C in CHB patients in Asia [32,34-40]. This indicates that the clinical progression of the CD recombinant seems genetically more temperate than that of genotype C, which has caused the most liver disease and related deaths in China. Several mutations were frequently identified in the PreC/C and PreS/S regions, which potentially influence the clinical prognosis. However, many of these prevalent nucleotide mutations or amino acid substitutions were unique to the CD recombinants and had not previously been reported in genotype C or other genotypes of the HBV genome. The function and influence of these mutations need further analysis. The prevalence of anti-HBs coexistence was 2.14% in the population of HBsAg-positive patients. This coexistence frequency is lower than in previous reports [5-7,41], suggesting possible geographical variability or special recombinant characteristics behind these statistical differences. HBeAg status and the DNA level of the serum have been the focus of the argument about the comparability of HBsAg/HBsAb coexistence studies [8]. In this study, no differences were identified in the distributions of DNA level, HBeAg positivity or recombination type between Group I and Group II, and the DNA levels of the serum samples in Group I were all above 5 log10 copies/mL. The aa variability of the S region in Group I was significantly higher than that in Group II (Table 3, P < 0.001), which is consistent with previous reports [41]. In further analysis of each segment of the S region, the aa variability in Group I was significantly increased in the N-terminal region, the MHR and the C-terminal region, compared with Group II (P < 0.001, P < 0.001, P = 0.013, Fig. 3).
Fig. 3. Distribution of wild type and nucleotide mutations (amino acid substitutions) in the HBV/CD1 and HBV/CD2 genomes. Each bar represents the percentage of isolates with a mutated nucleotide (amino acid residue) in the CD1 and CD2 recombinants.
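For concreteness, the variability metric used in the group comparisons above (substitutions per 100 aa within a subregion, counted against a reference) can be sketched as follows; the sequences are short, made-up stand-ins for aligned S proteins, not data from this study.

# "aa variability": residue substitutions per 100 aa within a subregion,
# relative to a reference sequence (start/end are 1-based and inclusive).
def substitutions_per_100aa(seqs, reference, start, end):
    ref = reference[start - 1:end]
    n_sub = sum(a != r for s in seqs for a, r in zip(s[start - 1:end], ref))
    return 100.0 * n_sub / (len(seqs) * len(ref))

ref_seq = "MENITSGFLGPLLVLQAGFFLL"            # placeholder reference
group1 = ["MENITSGYLGPLLVLQAGFFLL",           # HBsAg+/HBsAb+ isolates
          "MENITSGFLGPLLTLQAGFFLL"]
group2 = ["MENITSGFLGPLLVLQAGFFLL",           # HBsAg+/HBsAb- isolates
          "MENITSGFLGPLLVLQAGFFLL"]

for name, grp in [("Group I", group1), ("Group II", group2)]:
    v = substitutions_per_100aa(grp, ref_seq, 1, len(ref_seq))
    print(f"{name}: {v:.2f} substitutions per 100 aa")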
The "a" determinant in the MHR is the hot zone of HBsAg/HBsAb coexistence studies; it is located in the hydrophilic region between aa 124 and aa 147 and acts as the most important antigen-binding site in the S region of all HBV serotypes [17,41]. In this study, compared with Group II, the aa variability in the "a" determinant of Group I increased significantly (1.54% vs 0.06%, P < 0.001). This region is composed of two stem-loop structures, which act as two binding sites for monoclonal antibodies and are of great significance for the effectiveness of the hepatitis B vaccine and the clinical detection of antigens. In this study, the aa variations of the first and second loops of the "a" determinant were both statistically correlated with HBsAg/HBsAb coexistence, which is consistent with previous studies [5,17]. There have been differing opinions about the effect of the second loop [42], or even of the variability of the entire S protein [7], on HBsAg/HBsAb coexistence. However, the former studies concerned genotype B and genotype C [42], while the HBV/CD recombinant was the main genotype in this study and the "a" determinant was located in the genotype D fragment (Fig. 1). The discrepancy may be due to the amino acid variation background of the different genotypes. Moreover, due to the low incidence of HBsAg/HBsAb coexistence in all HBV genotypes, the sample sizes in previous studies were less than 20 [5,7], and an insufficient number of samples may affect the stability of the statistical results. The association between aa variation of the S protein and HBsAg/HBsAb coexistence has been reported in many studies [5,6,17,41,42]. However, aa mutations in any part of the S protein are not a sufficient and necessary condition for explaining the HBsAg/HBsAb coexistence enigma [8], so there should be at least one auxiliary or secondary condition in the emergence of the coexistence. Previous studies suggested that mutations or deletions in the PreS region are the reason for the coexistence of HBsAg/HBsAb [43]. A significant difference was also found in the distribution of deletions in the PreS region between Group I and Group II (Table 3), suggesting that changes in the PreS region are also associated with coexistence. Interestingly, the frequency of PreS deletion in the HBV/CD recombinants was 2.23%, which is lower than the frequency in other genotypes (4.9%) [7]; combined with the fact that the frequency of HBsAg/HBsAb coexistence was 2.14% in this study, also lower than in other genotypes [5-7,41], PreS deletion may act as a genetic variation background which occasionally affects the interaction between antigens and antibodies, together with the aa mutations in the MHR, to cause the coexistence. This study has several limitations. First of all, multiple infections and minor populations of immune escape variants in the viral quasispecies may not be identified by PCR product sequencing. Secondly, the combined action of PreS deletion and aa variation in the MHR needs further support. Finally, the results would be more convincing with a larger sample size of HBsAg/HBsAb coexistence, though the 27 HBsAg/HBsAb-coexistent samples in this study are more than in most previous studies.
Conclusions
In summary, this study describes the geographical distribution, genetic variability and HBsAg/HBsAb coexistence phenomena of HBV isolates in the Qinghai-Tibet plateau. The HBV/CD recombinant has become the predominant genotype in the Qinghai-Tibet Plateau. There were signs that HBV/CD had a mixed China and South Asia origin.
PreS deletion and aa variation in the S protein may jointly cause the HBsAg/HBsAb coexistence. Several unique nucleotide mutations were frequently detected in HBV/CD isolates, which could potentially influence the clinical prognosis.
A dedicated device for measuring the magnetic field of the ND280 magnet in the T2K experiment
This paper describes a dedicated device to map the magnetic field of the T2K near detector (ND280) magnet, which runs at a nominal field value of 0.2 T, with an accuracy of the order of a Gauss. A high-accuracy mapping is a key ingredient for providing the required momentum accuracy in the ND280 time projection chambers (TPCs), allowing T2K to measure precisely the PMNS matrix parameters ∆m^2_32 and θ_23. This paper describes the design and realization of the device as well as its performance during operation. The reported results show that the targeted goals are reached.
Introduction
T2K [1,2] is the first long-baseline neutrino beam experiment which uses an intense off-axis neutrino beam, with the goal of determining the unknown parameter θ_13 [3] of the PMNS neutrino mixing matrix [4]. T2K also aims to precisely measure the parameters ∆m^2_23 and θ_23 with an improvement of an order of magnitude on the current sensitivities. This is obtained from the comparison of the measured fluxes at a near detector station, 280 m away from the neutrino source (ND280), and at Super-Kamiokande, a 50 kton water Čerenkov detector, 295 km away. To achieve the aimed-at precision, a good knowledge of the neutrino flux and energy spectrum is essential, which can be obtained from the ND280 near detector station. The magnetic field for ND280 is generated by the former UA1/NOMAD dipole magnet [5,6]. This magnet (figure 1) was refurbished at CERN for the purpose of the T2K experiment and shipped in 2008 to the Japan Proton Accelerator Research Complex (J-PARC), situated in Tokai. The magnet has a weight of ∼900 tons and its inner dimensions are 3.6 m × 3.5 m × 7.0 m. The magnet consists of 16 C-shaped yokes and 4 coils made of 26 double "pancakes", for a total of 208 turns, generating a dipole magnetic field of 0.2 T for the purpose of T2K. Inside the coils the so-called basket structure is placed. This support frame is made of stainless steel with inner dimensions of 2.3 m × 2.4 m × 6.6 m and holds most of the sub-detectors of ND280, as shown in figure 2. Most upstream is the π0 detector (P0D), built mainly of scintillators interlaced with water, followed by the tracker, which contains three time projection chambers (TPCs) and two fine-grained detectors (FGDs) in a sandwich structure. The downstream electromagnetic calorimeter (Downstream ECal) completes the detectors inside the basket. The basket is surrounded by further electromagnetic calorimeters and by the side muon range detectors (SMRDs), which instrument the magnet yoke. The tracker is designed to study charged-current neutrino interactions occurring in the FGDs and other detector parts. Each of the three rectangular TPC chambers has an outer dimension of 2.3 m × 2.4 m × 1.0 m and an inner sampling length of 700 mm. Each TPC consists of an inner box that holds an argon-based drift gas within an outer box that holds CO2 as insulating gas. Copper strips of precisely 11.5 mm pitch, in conjunction with a central cathode panel, produce a uniform electric drift field, roughly aligned with the magnetic field. Bulk micromegas detectors are used for the gas-amplified readout of ionization electrons with an anode pad segmentation of 70 mm². Each TPC has two readout planes with twelve micromegas modules, for a total of 72 modules. The signal pattern in combination with the arrival time allows a 3D track reconstruction of charged particles. More details on the design and performance of the TPCs can be found in [7].
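Since the TPCs measure momentum from track curvature in the magnetic field, the sensitivity of the momentum scale to the field accuracy follows from the standard relation p_T ≈ 0.3·B·R. A quick numerical sketch (the track radius is an illustrative assumption, not T2K data):

# p_T [GeV/c] = 0.3 * B [T] * R [m] for a singly charged particle.
# Because p_T is proportional to B, a relative field error translates
# directly into a momentum-scale error of the same relative size.
B = 0.2        # nominal ND280 field, tesla
dB = 1e-4      # 1 gauss field uncertainty, tesla
R = 1.0        # hypothetical radius of curvature, m

pT = 0.3 * B * R              # reconstructed transverse momentum, GeV/c
scale_error = dB / B          # relative momentum-scale error from dB
print(f"p_T = {pT:.3f} GeV/c, scale error = {scale_error:.2%}")
# ~0.05%, comfortably below the required 2% momentum-scale accuracy.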
The precise mapping of the magnetic field in the instrumented region of the ND280 detector complex has several goals. The magnetic field itself is a key element for precisely measuring the momentum of the charged particles passing through the inner detector volume. The performance requirement of the TPCs corresponds to a momentum resolution of better than 10% and to a momentum scale known to better than 2%. This can be achieved by measuring the magnetic field with an accuracy of the order of a Gauss in the directions perpendicular to the drift direction of the electrons in the TPCs. These magnetic field components distort the drift of the electrons in the TPC gas volume. Besides distortions, it is also important to know the absolute scale of the measured magnetic field values. This is guaranteed by a careful calibration of the Hall probes, as described in section 2.3. More details on the construction and operation of the device can be found in [8]. An additional publication is being prepared to show results from the full measurement campaign [9].
Mechanics and electronics
The coordinate reference system used for the mapping device is the same as that of the ND280 detector (figure 2). The x-axis is parallel to the main component of the magnetic field, the y-axis is antiparallel to the line of gravity, and the orthogonal z-axis is in the direction of the neutrino beam. However, small deviations of the order of a mrad exist and were determined by a series of surveys (see section 2.2). The mapping device was built by the CERN PH-DT group and the T2K Bern group. It was designed to measure a volume slightly larger than the instrumented region inside the basket. The device has a total weight of ∼700 kg and consists of three parallel arms. Two long arms cover the width (x-direction) of the detector region (2.2 m) and are movable in y (2 m in height) and z (6 m in length), and a shorter third arm of 1.8 m length (x-direction) is used to cross-check the measurements at lower positions in the y-direction. Movement in the z-direction required additional rails to be temporarily installed in the basket and aligned with the mapping device. The whole equipment was built of non-magnetic materials, such as aluminum and stainless steel, to prevent undesired field distortions. The device can be moved by means of three pneumatic motors (one for the movement in the y-direction and two for the movement along the z-axis). A drawing and a photograph of the device are shown in figures 3 and 4, respectively. The position of the mapping device during its operation is read out by four optical encoders (two for the y- and two for the z-direction), which allow the user to control the movement of the equipment. The encoder resolution of 10 µm allows knowledge of the actual position of the measurement bench at the level of 0.1 mm. This is an order of magnitude better than the required precision of ∼1 mm. In total, 89 electronics cards [10] are installed on the three arms of the device. Each card is equipped with three Hall probes, which measure the voltage induced by the Hall effect for each component of the magnetic field (B_x, B_y and B_z), and also contains a temperature sensor as well as the necessary readout electronics (figure 5). The 89 electronics cards are distributed over four readout chains.
The user can communicate with the cards via CAN-Bus [11] to activate the routines for the initialization and movement of the device and for data readout. The data of the three Hall probes and the temperature sensor are individually read out and stored. The total readout time per card is 270 ms. Each of the two longer arms of the device holds 39 cards, covering a range of 2166 ± 1 mm with a distance of 57 ± 0.2 mm from center to center of each card in the x-direction. The arms are separated in the z-direction by 383 mm. An additional parallel third arm holds 11 cards spread over 1710 ± 1 mm. With respect to the second arm, the third arm is installed at a distance of 201 mm in the z-direction and 255 mm lower (y-direction).
Device survey
The positioning and angular deviations from the desired axes were obtained from several surveys [12] of:
1. the mapping equipment and rails with respect to the basket at CERN;
2. the device with respect to the general ND280 reference frame before the actual measurements;
3. the device after the field measurement campaigns;
4. the positions of the individual Hall probes on the electronics cards.
The x-axis of the ND280 reference frame was chosen to coincide with the corresponding axis of the coordinate system of the mapping device. The second of the above-mentioned surveys showed that the y-axis of the mapping device was rotated by 0.8 mrad with respect to the y-axis of the ND280 reference frame. The center of the magnet is the origin of the mapping device reference frame as well as of the ND280 coordinate system. The three Hall probes of each card are displaced relative to each other, as can be seen in figure 6. The measured voltages were associated with the point calculated as the center of mass of the three probes. This method induces an error on the position of less than 1 mm, which is negligible, since the magnetic field changes by less than 2 Gauss per cm in the region with the strongest field variations.
Hall probe calibration
For the Hall probe calibration, which was performed at CERN [13], each card was placed in a highly uniform field whose strength was monitored by an NMR probe with a precision of better than 1 Gauss. This setup was originally designed for the calibration of the field mapping equipment of the LHC ATLAS experiment [14,15]. Pictures of the setup are shown in figures 7 and 8. Each card was turned to many different orientations, with polar angle θ and azimuth angle φ precisely measured by three orthogonal pickup coils. The pickup coils rotate together with the card and monitor the change in the effective magnetic field, from which the rotational angle can be inferred.
Figure 7. Photograph of the calibration setup. In a homogeneous magnetic field (directed bottom to top) a thermally isolated box is mounted, which is temperature-controlled by a Peltier element and a ventilator. Inside the box an NMR probe monitors the magnetic field, and the head that holds the cards with the Hall probes is installed. The head can be rotated freely in three dimensions, which is done by two external motors via driving axes.
The measurements were repeated for several field strengths and temperatures. The Hall voltage V is decomposed into orthogonal functions: spherical harmonics Y are used for θ and φ, and Chebyshev polynomials T for the modulus of the field B and the temperature t, schematically V(θ, φ, B, t) = Σ_{l,m,j,k} c_{lmjk} Y_l^m(θ, φ) T_j(B) T_k(t). Using this series, a total of about 200 calibration parameters was calculated for each probe.
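A calibration series of this form can be evaluated numerically along the following lines; this is a minimal sketch with random placeholder coefficients, truncation orders and variable ranges chosen purely for illustration (the real calibration used roughly 200 fitted parameters per probe).

import numpy as np
from scipy.special import sph_harm
from numpy.polynomial.chebyshev import chebval

rng = np.random.default_rng(0)
L_MAX, J_MAX, K_MAX = 2, 2, 1          # assumed truncation orders
coeffs = rng.normal(size=(L_MAX + 1, 2 * L_MAX + 1, J_MAX + 1, K_MAX + 1))

def model_voltage(theta, phi, B, t):
    # Map B and t onto [-1, 1] for the Chebyshev polynomials
    # (assumed calibration ranges: B in [0, 1.2] T, t in [0, 50] C).
    b = (B - 0.6) / 0.6
    tt = (t - 25.0) / 25.0
    V = 0.0
    for l in range(L_MAX + 1):
        for m in range(-l, l + 1):
            # scipy's sph_harm takes the azimuthal angle first
            Y = sph_harm(m, l, phi, theta).real
            for j in range(J_MAX + 1):
                Tj = chebval(b, [0] * j + [1])      # T_j(b)
                for k in range(K_MAX + 1):
                    Tk = chebval(tt, [0] * k + [1])  # T_k(tt)
                    V += coeffs[l, m + l, j, k] * Y * Tj * Tk
    return V

print(model_voltage(theta=0.3, phi=1.0, B=0.2, t=23.0))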
A separate angular calibration was used to find the orientation of the calibrated coordinate system relative to the three feet that support the card on the mapping device. All Hall probes were calibrated at 0.2 T and 1.14 T, and the probes which deviated by more than 2 Gauss at a field of 0.2 T were rejected. A few probes were also calibrated at 0.1 T as a cross-check; the results were found to be consistent. Figure 9 shows the measurements for one of the probes: at a magnetic field of 0.2 T the deviation of B from the nominal magnetic field is below 2 Gauss for all measured angles, as can be inferred from the distribution on the right. The accuracy of the angular alignment between the three probes on an electronics card was measured to be ±2 mrad. An improvement on both values, the accuracy of the magnetic field and the angular alignment, can be obtained from the actual mapping data, as described in [8,9]. All probes have an intrinsic resolution of 0.2 Gauss, which was obtained from repeated B-field measurements under identical conditions (constant B-field, temperature and angle with respect to the magnetic field).
Calibration data, equalization and alignment corrections
To calibrate the mapping device as a whole, a dedicated run in the ND280 magnet was performed; during the measurement no other detectors were installed in the basket. A ramping cycle was done in 250 A steps from 0 A to 1000 A, and measurements were taken at points on a rectangular grid through the volume to be instrumented with ND280 detectors (see figure 2). The device was moved in steps of 10 cm along the beam direction z as well as in the vertical direction y. The 5.7 cm pitch of the probes in the third dimension x is determined by the precisely machined fixing points of the cards on the arms of the mapping device. The main aim of the mapping campaign was a detailed field map of the ND280 TPC region at a current of 1000 A, equivalent to a field of ∼712 G in the center of the magnet. The map consists of more than 250,000 measurement points, corresponding to a 5 cm spacing of measurements in the y- and z-directions. The procedure for the calibration of these data is discussed in this section. For the equalization of the probes in the ND280 magnet, an offset correction was applied. This offset c_0 is determined from the data of all probes by fitting the main B-field component with the function of eq. (3.1), parameterized by c_0, c_1 and c_2. The transverse B-field components B_y and B_z were not considered for retrieving the fitting parameters c_1 and c_2, since the amplitude of these field components is so small that the error is dominated by the intrinsic resolution of the Hall probes (0.2 Gauss). The purpose of the calibration procedure is to determine the fitting parameters c_0, c_1 and c_2 of eq. (3.1). The c_2 parameter can be considered to depend solely on the magnetization features of the iron yoke. In fact, it also takes into account the magnetization of all surrounding materials, such as the basket or the mapping device itself, but the influence of those can be regarded as negligible. The mean value of c_2 is obtained from the fits at each measurement point, and with this given value of c_2 the fit is then repeated for each probe and each direction (B_x, B_y and B_z). The remaining two parameters, c_1 and c_0, describe the strength of the field and the offset value for each probe, respectively. A distribution of the obtained offsets is shown in figure 10.
Systematic effects of the mapping device geometry include the skewing of the mapping device itself and angular misalignments between the Hall probes. The data were corrected by measuring these effects and taking them into account. When moving the device in the z-direction, the two pneumatic motors are individually controlled to ensure a smooth movement along the rails. This feature allowed for better control of the movement and avoidance of mechanical stress on the equipment. The two motors, separated by d_z = 2.2 m, may stop at slightly different positions in the z-direction for each measurement point (∆z = z_1 − z_0 < 1 mm). This introduces a systematic skewing of the device with an angle below 0.5 mrad, and we correct the angular alignment of the mapping device accordingly. As mentioned in section 2.1, optical encoders measure the stopping positions of both sides of the device; these data are used to determine the skewing angles. Additional skewing was observed in y, which exhibited a dependence on both the y position and the direction of movement and was found to be due to friction and mechanical stress on the device. The displacement ∆y was taken from two encoders on the y-axis of the device, separated by d_y. The skewing angles δ_i are small, of the order of a mrad; applying the corresponding small-angle rotations yields the corrected values B′_i, which are used for further analysis. As an example, figure 11 shows the effect of the correction for B_y. With this method one can correct for effects at the level of the intrinsic probe uncertainty of 0.2 Gauss. The misalignment of the probes with respect to one another must be taken into account as well. From the calibration procedure (section 2.3) it is known that the probes should agree within ±2 mrad. For the transverse components B_y or B_z, the alignment can be further refined by exploiting the Cartesian symmetry of the magnet geometry and the fact that the center of the magnet is defined as the origin of the ND280 coordinate system. In the horizontal symmetry plane of the coils at y = 0, we expect the vertical field distortions for B_y to vanish; an analogous statement holds for the B_z component. In these planes of symmetry (y = 0 or z = 0), either |B_y| or |B_z| should be minimal everywhere. The mean value of one B-field component over the 39 probes along one arm is taken and defined to be the reference value, where the mean is taken over the n measurement points per probe (n steps in the z-direction for B_y). The measurements of each probe are corrected to equal this mean value obtained from the symmetry planes. In figure 12 the B_x value is not minimal, since this is the main component of the field, which is intended to be as large and uniform as possible. Hence, a correction for B_x can only be obtained from a fitting procedure, which is described in detail in [8]. The overall effect of the equalization and alignment corrections can be seen in figure 13, in which the differences for B_y and B_z between adjacent probes are shown. With RMS values of 0.17 Gauss and 0.24 Gauss, the resulting deviations are compatible with the intrinsic resolution of the Hall probes.
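The z-skew correction just described amounts to a small-angle rotation of each measured field vector, with the angle taken from the encoder readings on the two sides of the device. A minimal sketch (all numbers and the sign convention are illustrative assumptions):

import numpy as np

d_z = 2.2                    # separation of the two z-motors, m
z0, z1 = 3.0000, 3.0006      # hypothetical encoder stop positions, m
delta = (z1 - z0) / d_z      # small skew angle about the y-axis, rad

B = np.array([712.0, 1.5, -2.0])   # measured (B_x, B_y, B_z), gauss

# First-order (small-angle) rotation about the y-axis n = (0, 1, 0):
# B' = B + delta * (n x B), valid because delta is of order a mrad.
n = np.array([0.0, 1.0, 0.0])
B_corr = B + delta * np.cross(n, B)
print(delta, B_corr)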
Conclusions
A dedicated, novel device was built to map the magnetic field of the magnet of the ND280 detector complex of the T2K neutrino experiment. The mapping device was exclusively made of non-magnetic parts and completed with a set of high-precision Hall probes. The absolute scale of the magnetic field is ensured by a careful calibration of each Hall probe in a reference magnetic field. With the help of a dedicated mapping campaign, it is demonstrated that the required measurement accuracy of the order of a Gauss is achieved. This meets the requirements for the momentum accuracy of particles measured with the ND280 TPCs.
Figure 13. Deviations of the measured values between adjacent probes before (shaded black) and after the corrections (blue line) for B_y (top) and B_z (bottom). The fact that the distributions center around zero shows that there is no rotational bias along the x-direction.
Measurement of normalized differential $t\bar{t}$ cross sections in the dilepton channel in pp collisions at center-of-mass energy of 13 TeV
Measurements of normalized differential cross sections for top quark pair production are performed in the dilepton decay channels in proton-proton collisions at a center-of-mass energy of 13 TeV. The differential cross sections are measured with data corresponding to an integrated luminosity of 2.1 fb$^{-1}$ recorded by the CMS experiment at the LHC. We have measured the cross sections differentially as a function of the kinematic properties of the leptons (electron or muon), jets from bottom quark hadronization, top quarks, and top quark pairs at the particle and parton levels. The $t\bar{t}$ differential cross section measurements are compared to several Monte Carlo generators that implement calculations up to next-to-leading order in perturbative quantum chromodynamics interfaced with parton showering, and also to fixed-order theoretical calculations of top quark pair production beyond next-to-leading order accuracy.
I. INTRODUCTION
Measurements of top quark pair (tt) production cross sections as a function of top quark related kinematic observables are crucial for testing perturbative quantum chromodynamics (QCD) calculations and for probing a variety of different properties of the top quark. Moreover, they can reveal hints of new physics phenomena beyond the Standard Model. In this document, recent results of normalized tt differential cross sections measured in dilepton (electron or muon) final states (see Figure 1) are presented. The measurements are performed using proton-proton collision data produced at the CERN LHC at a center-of-mass energy of 13 TeV and recorded by the CMS experiment [1] in 2015. The analyzed data correspond to an integrated luminosity of 2.1 fb^-1. The measurements are performed at the particle and parton levels. The visible particle-level measurements use final-state objects that are experimentally measurable and theoretically well defined, to minimize Monte Carlo (MC) modeling dependence and to avoid large extrapolations, so the variables are corrected mainly for detector effects. In contrast, the parton-level measurements are derived in the full phase space for comparison to predictions of perturbative QCD beyond next-to-leading order (NLO) accuracy.
II. DATA SAMPLES
Double-lepton (electron or muon) triggered data are used for this analysis. MC techniques are used to simulate the signal and background processes. The simulation includes tt+jets, Z/γ*+jets, W+jets, single top quark production and diboson (WW, ZZ, WZ) processes. Simulated samples are generated with POWHEG and MG5_aMC@NLO and showered with either PYTHIA8 or HERWIG++.
III. SIGNAL DEFINITION AND EVENT SELECTION
The measured tt differential cross sections are presented at both the particle and parton levels as a function of kinematic observables of the top quarks and the tt system, defined at generator level.
The particle-level top quark is defined at the generator level using the definitions of the final-state objects described as follows:
• Prompt neutrino: neutrinos not from hadron decays.
• Dressed lepton: anti-kt algorithm with a distance parameter of 0.1, using electrons, muons and photons not from hadron decays; p_T > 20 GeV, |η| < 2.4.
• b quark jet: anti-kt algorithm with a distance parameter of 0.4, using all particles and ghost-B hadrons, not including any neutrinos nor particles used in dressed leptons; p_T > 30 GeV, |η| < 2.4.
A W boson at the particle level is reconstructed by combining a dressed lepton and a prompt neutrino. A pair of particle-level W bosons is chosen among the possible combinations to minimise the scalar sum of the invariant mass differences with respect to the W boson mass of 80.4 GeV. Similarly, the top quark at the particle level is defined by combining a particle-level W boson and a b quark jet, with the minimum invariant mass difference from the nominal top quark mass of 172.5 GeV. The visible phase space is defined to have a pair of particle-level top quarks, constructed from prompt neutrinos, dressed leptons, and b jets. In addition, the parton-level objects are defined before the top quark decays into a bottom quark and a W boson and after QCD radiation. The normalized differential cross sections at the parton level are derived by extrapolating the measurements into the full phase space. The dilepton decay channels consist of two leptons, at least two jets, and missing transverse energy (p_T^miss) from the two neutrinos. Events are selected using dilepton triggers, and additional selections are applied to filter signal events as follows:
• Two oppositely charged leptons (ee/µµ/eµ) with an invariant mass of the lepton pair M_ll > 20 GeV.
The top quarks are reconstructed using the four-momenta of all final-state objects by an algebraic kinematic reconstruction method. Constraints such as the balance of the p_T of the two neutrinos and the masses of the W boson and the top quark are imposed. The tt system is reconstructed for 100 different random variations within the simulated resolution functions, also varying the W boson mass, to account for the effects of detector resolution. In each trial, the solution with the minimum invariant mass of the tt system is selected, and a weight is calculated using the expected invariant mass distribution of lepton and b jet pairs. The lepton and b jet pairs with the maximum sum of weights are chosen, and the neutrino momentum is determined using the weighted average over the trials. Figure 2 displays the distributions of the transverse momenta of the top quark (p_T^t) and of the top quark pair (p_T^tt). The normalized differential tt cross sections (1/σ)(dσ/dX) are measured as a function of several kinematic variables X, where σ is the total cross section and X represents a variable such as the top quark p_T, p_T^tt, the tt mass, and ∆φ_tt. The corrections for detector efficiencies, acceptances, and migrations are performed using the D'Agostini unfolding method.
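The W-boson pairing step described above is a small combinatorial minimization; a minimal sketch of that step follows (matching W candidates to b jets against m_top = 172.5 GeV proceeds analogously). The four-vectors are made-up placeholders, not analysis data.

from itertools import permutations
import math

M_W = 80.4  # GeV

def inv_mass(*parts):
    # parts are (E, px, py, pz) four-vectors in GeV
    e, px, py, pz = (sum(p[i] for p in parts) for i in range(4))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def pair_ws(leptons, neutrinos):
    # Try every lepton-neutrino assignment; keep the one minimizing the
    # scalar sum of |m(l, nu) - m_W| over the two W candidates.
    best = min(permutations(neutrinos),
               key=lambda nus: sum(abs(inv_mass(l, n) - M_W)
                                   for l, n in zip(leptons, nus)))
    return list(zip(leptons, best))

leptons = [(50.0, 30.0, 20.0, 30.0), (40.0, -25.0, 10.0, -20.0)]
neutrinos = [(45.0, 20.0, -25.0, 25.0), (35.0, -15.0, 20.0, -15.0)]
for l, n in pair_ws(leptons, neutrinos):
    print("W candidate mass:", round(inv_mass(l, n), 1), "GeV")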
V. RESULTS
The normalized differential cross sections at the particle level are measured as a function of the top quark p_T, p_T^tt, the tt mass, and ∆φ_tt, as shown in Figure 3 (top and middle rows). Figure 3 (bottom) presents the normalized differential tt cross sections as a function of the top quark p_T and p_T^tt, measured at the parton level in the full phase space and compared to different perturbative QCD calculations: an approximate next-to-next-to-leading order (NNLO) calculation [3], an approximate next-to-NNLO (N^3LO) calculation [4], an improved NLO plus next-to-next-to-leading-logarithmic (NLO+NNLL') calculation [5], and a full NNLO calculation [6].
VI. CONCLUSIONS
Normalized differential cross sections of top quark pair production in the dilepton decay channel are measured at the particle level in the visible phase space and at the parton level in the full phase space with respect to the top quark p_T, p_T^tt, the tt mass, and ∆φ_tt. The measured differential cross sections are found to be in agreement with the standard model predictions, with the top quark p_T distribution being the only one observed to be in mild tension with the NLO predictions. More details can be found in [2].
FIG. 3: Normalized differential tt cross sections as a function of the top quark p_T (upper left), p_T^tt (upper right), the tt mass (middle left) and ∆φ_tt (middle right), measured at the particle level in the visible phase space. The measured data are compared to different MC predictions (see text). Normalized differential tt cross sections as a function of the top quark p_T (bottom left) and p_T^tt (bottom right), measured at the parton level in the full phase space and compared to different perturbative QCD calculations beyond NLO accuracy. The vertical bars on the data points indicate the total (combined statistical and systematic) uncertainties, while the hatched band shows the statistical uncertainty. The lower panel gives the ratio of the theoretical predictions to the data. The light-shaded band displays the combined statistical and systematic uncertainties added in quadrature. Figure taken from [2].
Correlation between CD4 count and glomerular filtration rate or urine protein:creatinine ratio in human immunodeficiency virus-infected children
Background: Studies on kidney complications in human immunodeficiency virus (HIV)-infected children are lacking. CD4 T lymphocytes are an important regulator of immune functions and are used as a basis for initiating antiretroviral therapy (ART) and monitoring disease progression. This study aims to determine the correlation between CD4 and the estimated glomerular filtration rate (eGFR) or the urine protein:creatinine ratio (uPCR) as markers of kidney complications.
Methods: This cross-sectional study was conducted on HIV-infected children aged 5 to 18 years who visited the Teratai HIV Clinic at Hasan Sadikin Hospital for monthly monitoring in June 2019. The CD4 count, eGFR based on the Schwartz formula, and uPCR were obtained. Correlation analysis was performed with the Pearson test.
Results: The subjects were 42 HIV-infected children, consisting of 23 males (54.8%) and 19 females (45.2%). Most children (65.0%) were in an advanced clinical stage and had been diagnosed with HIV for an average of 8 ± 3 years. All subjects had received ART, and six received tenofovir. Compliance with medications was good, and most subjects (79.0%) had normal nutritional status and CD4 count. All subjects had eGFR > 90 mL/min/1.73 m², of which 21 (50.0%) were above the normal value. Proteinuria was found in 12 patients (28.6%), and it was not significantly associated with the clinical stage of HIV infection. The CD4 count was correlated positively with eGFR (r = 0.473, P = 0.001) and negatively with uPCR (r = -0.284, P = 0.034).
Conclusion: The degree of immunodeficiency appears to correlate with the severity of renal injury. Screening at diagnosis and periodic monitoring of kidney functions are crucial in all childhood HIV patients.
Kidney disorders are asymptomatic at early stages; hence, early detection is important for appropriate treatment to inhibit the progression of kidney disease before it becomes irreversible [5]. Guidelines from the HIV Medicine Association of the Infectious Diseases Society of America in 2014 recommend screening for early detection of kidney abnormalities in asymptomatic HIV patients at diagnosis and at least twice a year thereafter [10]. However, these screens are not routinely implemented in many countries with a high prevalence of HIV, including Indonesia. Such examinations are performed only if there are indications based on clinical symptoms and the selected antiretroviral therapy (ART) [11]. CD4 T lymphocytes play an important role in regulating immune functions and are the main target of HIV. Continuous destruction of CD4 cells by HIV results in the loss of a specific immune response to HIV and, ultimately, the loss of non-specific immune responses to opportunistic pathogens, which is characteristic of acquired immunodeficiency syndrome (AIDS). The CD4 count is used as a basis for initiating ART and as a tool to monitor disease progression and ART effectiveness. The change in CD4 cell count is an important indicator of the patient's response to ART. The CD4 cell count varies according to age in children younger than 5 years; therefore, the test used in this age group is CD4%. In children aged ≥ 5 years, both CD4% and absolute CD4 tests can be used, although absolute CD4 tests are preferred [12]. Studies on kidney complications in HIV-infected children are lacking, particularly in Asia.
Most studies that have been performed are from Africa, the region with the highest global HIV prevalence, whose populations are known to have a specific genetic predisposition [13]. This study aims to determine the correlation between CD4 and the estimated glomerular filtration rate (eGFR) or the urine protein:creatinine ratio (uPCR) as markers of kidney disorder. Patients older than 5 years were chosen to allow uniform use of the CD4 examination.
Methods
A cross-sectional study was conducted on pediatric HIV patients at the Teratai HIV Clinic, Dr. Hasan Sadikin General Hospital, Bandung, in June 2019. The inclusion criteria were: 1) children aged 5 to 18 years and 2) a known diagnosis of HIV infection according to the World Health Organization (WHO) criteria [14]. The exclusion criteria were: 1) a history of previous kidney abnormality or 2) urinary tract infection, characterized by fever and urinary complaints. Informed consent was obtained from the parents of the subjects who met the study criteria. The recorded subject characteristics consisted of age, sex, age at diagnosis, time since diagnosis, duration of ART, type of ART, route of transmission, clinical stage at diagnosis, and nadir CD4 count. Weight was measured in kilograms (kg) using a weight scale, while height was measured in centimeters (cm) using a stadiometer. Nutritional status was determined by the WHO body mass index (BMI)-for-age chart. Two blood samples, each of 3 milliliters (mL), were collected for CD4 and serum creatinine examination. Serum creatinine was then used to calculate the eGFR using the Schwartz formula [15]. Five mL of morning urine was collected from each subject for examination of urine protein and urine creatinine and calculation of the uPCR. Normal eGFR values by age were determined based on the guidelines from the National Kidney Foundation Kidney Disease Outcomes Quality Initiative (NKF KDOQI). Impaired renal function was defined as eGFR < 90 mL/min/1.73 m², while proteinuria was defined as uPCR ≥ 0.2 mg/mg, and nephrotic proteinuria as uPCR ≥ 1.0 mg/mg [15,16]. Correlation analysis between CD4 and eGFR was carried out with the Pearson correlation test, while correlation analysis between CD4 and uPCR was performed with the Spearman correlation test. Data analysis was performed using IBM SPSS Statistics for Windows version 20.0 (IBM Corp., Armonk, NY, USA). This study was approved by the Research Ethics Committee of Dr. Hasan Sadikin General Hospital, Bandung (approval number: LB.02.01/X.6.5/163/2019).
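For illustration, the two renal markers used in this study can be computed as below. The paper cites the Schwartz formula [15] without quoting the constant, so the widely used bedside form with k = 0.413 (height in cm, creatinine in mg/dL) is assumed here; the thresholds follow the definitions given above.

def egfr_schwartz(height_cm, creatinine_mg_dl, k=0.413):
    # Bedside Schwartz estimate, in mL/min/1.73 m^2 (k is an assumption)
    return k * height_cm / creatinine_mg_dl

def classify(egfr, upcr):
    findings = []
    if egfr < 90:
        findings.append("impaired renal function (eGFR < 90)")
    if upcr >= 1.0:
        findings.append("nephrotic proteinuria (uPCR >= 1.0 mg/mg)")
    elif upcr >= 0.2:
        findings.append("proteinuria (uPCR >= 0.2 mg/mg)")
    return findings or ["no renal abnormality by these criteria"]

# Hypothetical subject: height 130 cm, creatinine 0.4 mg/dL, uPCR 0.39
egfr = egfr_schwartz(130, 0.4)
print(round(egfr, 1), classify(egfr, 0.39))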
Results During the study period in June 2019, 45 children were assessed for eligibility; three subjects younger than 5 years of age were excluded, leaving 42 subjects. Table 1 shows the demographic and clinical characteristics of the study population. Male and female proportions were almost the same (54.8% vs. 45.2%). The youngest subject was aged 5 years 7 months and the oldest 16 years 11 months. The median age at diagnosis was 3 years, with an average time since diagnosis of 8 years. All patients received ART, one component of which was lamivudine. The six subjects who used tenofovir had previously experienced first-line regimen treatment failure or had started treatment during adolescence. Most subjects (79%) had normal nutritional status. Only 40 of the 42 subjects had an available clinical stage at diagnosis; two patients were initially diagnosed with HIV infection in a district hospital and a private clinic, and we were not able to trace their initial history. Twenty-six (65.0%) of the 40 subjects were in an advanced stage (clinical stage III or IV) at the time of diagnosis. The results of the CD4, eGFR, and uPCR examinations are shown in Table 2 and are classified according to clinical stage in Table 3. No statistically significant difference was observed among clinical stages. Thirty-three subjects (78.6%) were immunocompetent based on CD4 count, 6 (14.3%) had mild immunodeficiency, and 3 (7.1%) had severe immunodeficiency [14]. The distributions of eGFR and uPCR were non-normal, with extreme values in several subjects. One subject, whose uPCR (0.65 mg/mg) far exceeded that of the other subjects, was severely stunted (WHO height-for-age z score, -3.46 standard deviations, SD), with a CD4 count of 60 cells/mm3 (CD4%, 5.15%). Another subject, whose eGFR (118 mL/min/1.73 m2) was far below that of the other subjects, was the oldest subject (aged 16 years 11 months) and was severely undernourished (WHO BMI-for-age z score, -4.4 SD) with proteinuria (uPCR 0.39 mg/mg). All subjects had eGFR > 90 mL/min/1.73 m2. Twenty-one subjects (50.0%) had eGFR above the normal value for age. Twelve subjects (28.6%) had proteinuria, while none had nephrotic proteinuria. Five of the 6 subjects (83.3%) who received tenofovir had proteinuria (P = 0.005), while only 8 (21.1%) of the 38 subjects who received zidovudine had proteinuria (P = 0.004). The five subjects who used tenofovir and had proteinuria had been diagnosed with HIV stage IV at diagnosis (2 subjects), had mild immunodeficiency at examination (2 subjects), or were severely stunted (2 subjects). Correlations between CD4 and eGFR or uPCR are shown in Table 4 and Fig. 1. CD4 count was positively correlated with eGFR (r = 0.473, P = 0.001) and negatively correlated with uPCR (r = -0.284, P = 0.034). We performed multivariate analysis to determine the correlations of CD4 and tenofovir use with eGFR and uPCR, as shown in Tables 5 and 6. The correlation between CD4 and eGFR was 0.415 after controlling for tenofovir use: every 1 cell/mm3 increase in CD4 increased eGFR by 0.023 mL/min/1.73 m2. Meanwhile, the correlation between CD4 and uPCR was -0.390 after controlling for tenofovir use: every 1 cell/mm3 increase in CD4 decreased uPCR by 0.00013 mg/mg. A sketch of this adjustment is given below.
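The paper does not state the exact regression procedure behind Tables 5 and 6, so the following is only a plausible sketch of a multivariable linear model with tenofovir use as a covariate; the statsmodels formulation is an assumption, not the study's SPSS workflow, and all values are hypothetical.

```python
# Hedged sketch of a multivariable adjustment: eGFR ~ CD4 + tenofovir use.
# All values are invented; the study's actual analysis was run in SPSS v20
# and its exact model specification is not stated in the text.
import numpy as np
import statsmodels.api as sm

cd4 = np.array([60, 450, 820, 1100, 390, 760], dtype=float)  # cells/mm3
tenofovir = np.array([1, 1, 0, 0, 1, 0], dtype=float)        # 1 = on tenofovir
egfr = np.array([95, 120, 140, 150, 110, 135], dtype=float)  # mL/min/1.73 m2

X = sm.add_constant(np.column_stack([cd4, tenofovir]))
fit = sm.OLS(egfr, X).fit()

# fit.params = [intercept, CD4 slope, tenofovir coefficient]; the CD4 slope
# is read as in the text: the expected change in eGFR per 1 cell/mm3
# increase in CD4 with tenofovir use held fixed (the paper reports 0.023).
print(fit.params)
print(fit.pvalues)
```

Replacing the outcome with uPCR in the same construction yields the negative CD4 slope (-0.00013 mg/mg per cell/mm3) reported above.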
Discussion This study found that 29% of subjects had proteinuria on uPCR examination. This proportion is greater than that in a study conducted by Gupta et al [17] (11.5%) in 2017; the difference may be due to differences in clinical stage and time from diagnosis among the subjects. In addition, the previous study involved a greater number of subjects (139 subjects) and a longer study period (1 year 4 months). The results of this study are similar to those of studies conducted by Chaparro et al [18] in the United States (33%) in 2008 and Ikpeme et al [19] in Nigeria (31.6%) in 2012. All of these studies used uPCR ≥ 0.2 to define proteinuria. This high prevalence indicates that impaired renal function is indeed an important complication in HIV patients [20]. This study found that CD4 correlated positively with eGFR and negatively with uPCR. Previous studies have reported mixed results on the correlation of CD4 with kidney disorders. Gupta et al [17] found no correlation between CD4 and proteinuria, while Chaparro et al [18] found a correlation between CD4 and nephrotic proteinuria but no correlation with intermediate proteinuria (uPCR ≥ 0.2, < 1.0). These discrepancies might be due to differences in subject characteristics between studies. The majority (86%) of subjects in the Indian study were in the early clinical and immunological stages, and proteinuria was found in only 11.5% [17]. Meanwhile, Chaparro et al involved a greater number of subjects (286 subjects) and a wider age range (the youngest being 0.2 years, the oldest 22 years). Chaparro et al used the same proteinuria examination method and found an incidence of proteinuria of 33%, of which 11.2% were nephrotic (uPCR > 1.0); in the present study, by contrast, no subject had nephrotic proteinuria [18]. Ikpeme et al [19] found results similar to the present study: a correlation between CD4 and proteinuria, with a coefficient of -0.2. That study involved a greater number of subjects (98 subjects) and a longer study period (6 months). Proteinuria was determined by the urine albumin:creatinine ratio (uACR) test, which detects the microalbuminuria that reflects early damage to the glomerular vascular endothelium and decreased tubular ability to reabsorb albumin. Meanwhile, uPCR examination includes all non-albumin proteins of both glomerular and tubular origin. Fisher et al [21] in 2013 compared the uACR and uPCR tests in a chronic kidney disease population (with no HIV infection) and found a very good correlation between the two examinations (r = 0.92). uPCR was chosen in this study because prior research by Antonello et al [16] reported a very good correlation between uPCR and the 24-hour quantitative protein test, the gold standard examination for proteinuria (r = 0.957), in children with HIV. This study found a correlation between CD4 cell count and eGFR (r = 0.473, P = 0.001) and serum creatinine (r = -0.334, P = 0.024). To our knowledge, this is the first study investigating the correlation between CD4 and eGFR in children with HIV. Previous studies were conducted in adult HIV patients, among them studies by Verma and Singh [22] in India (r = -0.26, P = 0.02) and Adedeji et al [23] in Nigeria (r = -0.228, P = 0.025). The difference in results may be due to differences in subject age, the greater number of subjects in the previous studies, and the various mechanisms possibly underlying kidney complications. Kidney complications in HIV can occur due to hemodynamic disorders such as severe dehydration or hypovolemic shock, nephrotoxic ART, direct HIV infection of epithelial cells and the renal mesangium, or deposition of immune complexes in kidney tissue. Nephropathy due to direct infection of kidney cells (HIV-associated nephropathy) is the most common cause of kidney disease in children and adolescents infected with HIV worldwide. These complications may or may not be mediated by CD4 [24,25]. This is reflected in our results: even though the degree of immunodeficiency represented by CD4 can affect the degree of renal complications, the low correlation coefficient indicates the presence of other factors that influence kidney complications. In this study, most of our subjects were in clinical stage III or IV at the time of diagnosis. Due to good adherence to medications, most of the subjects exhibited normal nutritional status and CD4 count.
Nonetheless, clinical stage reflects prognosis and increased susceptibility to HIV-related complications, including kidney complications [26]. This may explain the considerably high number of subjects with proteinuria in this study despite a normal CD4 count in most. Proteinuria was found in all clinical stages, irrespective of sex, time since diagnosis, duration of therapy, and nutritional status. Tenofovir use increased the incidence of proteinuria, but proteinuria also occurred with the use of other drugs, though without statistical significance. This underlines the importance of monitoring kidney function from diagnosis and periodically thereafter as a method of early detection of impaired kidney function in all HIV patients. Urine protein testing is a practical choice for monitoring patients with HIV infection. Though CD4 cell count is the main parameter determining ART initiation, ART is administered to children at clinical stages III and IV regardless of CD4 count. The presence of nephropathy in HIV is one of the criteria for diagnosing stage 4 HIV, which requires renal biopsy to establish a definitive diagnosis; however, this procedure is not performed routinely in children. The definition widely used in many epidemiological studies consists of persistent proteinuria (> 1+ protein on dipstick or uPCR ≥ 0.2 for more than three months) with enlarged echogenic kidneys on ultrasonography and abnormal microscopic examination of urine [6,25]. In Indonesia, examination of kidney function is not routinely performed until there are clinical symptoms [11]. If persistent proteinuria is detected and confirmed by ultrasonography or kidney biopsy, ART initiation can occur earlier, without waiting for the CD4 count to decline [25]. This is particularly important in children, because HIV in children has unique properties compared to HIV in adults. Although early manifestations are often mild and nonspecific, children, with their immature immune systems and higher CD4 counts, are more susceptible to common childhood and opportunistic infections and may experience rapid progression of HIV infection if treatment is delayed [27]. In children, ART is considered the most important management tool for inhibiting progression to end-stage renal disease (ESRD) [28]. One of the limitations of this study is the cross-sectional design, which prevented us from evaluating the persistence of proteinuria. However, of the 27 adolescents involved in this study, only 2 were severely undernourished, limiting the potential for abnormal muscle mass to confound creatinine-based estimates. Furthermore, urine samples were collected in the morning, and the subjects rested for at least half an hour before collecting urine specimens, reducing the likelihood of transient orthostatic proteinuria. In addition, the results of serum creatinine, eGFR, and uPCR were not normally distributed. This may be due to the non-homogeneous characteristics of the subjects, resulting in extreme values in several subjects. Because the exclusion criteria were not strict, confounding factors may have influenced the results. We observed that the eGFRs in this study were very high. Theoretically, this may have been caused by low muscle mass of the subjects. However, we found no evidence of significant muscle wasting in our subjects: most had normal nutritional status, with only three being severely undernourished, and these three showed eGFR values that were not especially high. Therefore, we have reason to believe that serum creatinine remains a reliable measurement of renal function.
Nonetheless, it can still be considered a limitation that we did not measure other indicators of GFR, which are not always available in our country. Moreover, as mentioned above, kidney complications in HIV can occur due to various factors, and kidney biopsy is the gold standard for establishing the underlying cause. However, it is not always feasible to perform biopsy in all settings. Urine protein testing can be an easy alternative for screening and for determining the need for further examination in children with HIV. Further research with a larger sample size, a longer study period, and a cohort design is needed to evaluate whether proteinuria is persistent in children with HIV. In conclusion, the degree of immunodeficiency correlates with the severity of kidney complications. Screening at diagnosis and periodic monitoring of kidney function are crucial in all HIV patients. Conflicts of interest All authors have no conflicts of interest to declare.
2020-03-10T13:16:37.498Z
2020-03-09T00:00:00.000
{ "year": 2020, "sha1": "ec61aa6a9bd99060d0ff0dad83504497c9a9212d", "oa_license": "CCBYNC", "oa_url": "https://www.krcp-ksn.org/upload/pdf/KRCP-39-040.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0e7c6519be24ebdc683fff969def67ed8dd334bd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268788573
pes2o/s2orc
v3-fos-license
The Role and Prospects of Mesenchymal Stem Cells in Skin Repair and Regeneration Mesenchymal stem cells (MSCs) have been recognized as a cell therapy with the potential to promote skin healing. MSCs, with their multipotent differentiation ability, can generate various cells related to wound healing, such as dermal fibroblasts (DFs), endothelial cells, and keratinocytes. In addition, MSCs promote neovascularization, cellular regeneration, and tissue healing through mechanisms including paracrine and autocrine signaling. Due to these characteristics, MSCs have been extensively studied in the context of burn healing and chronic wound repair. Furthermore, during the investigation of MSCs, their unique roles in skin aging and scarless healing have also been discovered. In this review, we summarize the mechanisms by which MSCs promote wound healing and discuss recent findings from preclinical and clinical studies. We also explore strategies to enhance the therapeutic effects of MSCs. Moreover, we discuss the emerging trend of combining MSCs with tissue engineering techniques, leveraging the advantages of MSCs and tissue engineering materials, such as biodegradable scaffolds and hydrogels, to enhance the skin repair capacity of MSCs. Additionally, we highlight the potential of using the paracrine and autocrine characteristics of MSCs to explore cell-free therapies as a future direction in stem cell-based treatments, further demonstrating the clinical and regenerative aesthetic applications of MSCs in skin repair and regeneration. Introduction MSCs are a multipotent type of stem cell that originates from the mesoderm and ectoderm [1] in early development and can differentiate into a wide range of tissue cells, such as bone, cartilage, fat, muscle, and nerve cells. MSCs were first discovered in bone marrow, but have since been found in many other tissues of the body, such as adipose tissue, synovial tissue, bone, muscle, lung, liver, pancreas, amniotic fluid, and umbilical cord blood [2,3]. MSCs from different sources differ in their accessibility, abundance, proliferation ability, immunomodulatory capacity, and the cytokines they secrete, and they have different therapeutic potentials for different diseases.
Since MSCs were first discovered in bone marrow by Cohnheim in 1867 [4], they have been considered to play an important role in skin regeneration and wound healing. In 1991, Professor Caplan first introduced the concept of mesenchymal stem cells and emphasized their potential for multidirectional differentiation [5]. Subsequently, many researchers have isolated MSCs from different tissues and have demonstrated that they can differentiate into a wide range of cell types such as osteoblasts, adipocytes, chondrocytes, tenocytes, cardiomyocytes, keratinocytes, hepatocytes, and neural cells [6,7]. For example, dental-derived MSCs can be obtained from different parts of the teeth, such as the pulp, the ligament, the follicle, and the gingiva [8]. These stem cells have the ability to differentiate into various tissue types, such as bone, cartilage, fat, nerve, and skin [8]. Dental MSCs can be used to repair damaged tissues and organs, and to treat diseases that affect the immune system, the nervous system, the liver, and the skin [9]. In addition, MSCs have been found to have various biological functions such as immunomodulation, anti-inflammation, anti-apoptosis, and pro-angiogenesis [10,11]. In 1995, Professor Caplan extracted and isolated cultured MSCs from the bone marrow of patients with malignant hematological disorders and then infused them back into the patients to observe the clinical effects and to demonstrate the safety of these cells [12]. Since then, the clinical application of MSCs has gradually expanded to a wide range of diseases, such as cardiovascular diseases, neurological diseases, bone and joint diseases, autoimmune diseases, liver diseases, and diabetes [13]. Currently, several MSC-based therapies have received FDA approval for the treatment of acute graft-versus-host disease, bone defects, osteoporotic vertebral compression fractures, and ischemic heart failure [14]. According to ClinicalTrials.gov registry data, there are currently more than 1300 clinical trials related to MSCs around the world, covering more than 300 diseases. Most of these clinical trials address musculoskeletal disorders, central nervous system disorders, immune system disorders, wounds and traumas, rheumatic disorders, joint disorders, arthritis, vascular disorders, respiratory disorders, and digestive and gastrointestinal disorders. These clinical studies have demonstrated that MSCs are a safe and effective therapeutic tool, providing a new treatment strategy for difficult multi-system diseases [15]. The clinical value of MSCs is reflected not only in their therapeutic efficacy, but also in their advantages in terms of high proliferation ability, low immunogenicity, and differentiation capacity [16]. Currently, the most commonly used stem cells are bone marrow-derived mesenchymal stem cells, but they have some limitations, such as a decline in the number and activity of the stem cells with age [17]. Therefore, the search for alternative sources of MSCs is an important research direction. Among them, umbilical cord-derived MSCs have several advantages, such as an abundant number of stem cells, easy collection, no harm to donors, low immunogenicity, high differentiation capacity, and no ethical controversy [18]. Umbilical cord-derived MSCs have been used to treat a wide range of diseases, such as cardiovascular diseases, liver diseases, bone and muscle degenerative diseases, neurological injuries, and autoimmune diseases [19][20][21].
In conclusion, MSCs are stem cells with a wide range of clinical applications, and they can promote tissue and organ regeneration and repair through a variety of mechanisms, providing new possibilities for the treatment of a wide range of diseases [13]. However, the clinical application of MSCs also faces some challenges and problems, such as cell quality control, immunological rejection, carcinogenic risk, and standardized management, which require further research and optimization [22]. The Wound Healing-Promoting Mechanisms of Mesenchymal Stem Cells Wound healing is a complex biological process involving the coordinated action of multiple cells, molecules, and signaling pathways. The process of wound healing can be divided into four phases: hemostasis, inflammation, proliferation, and remodeling (Figure 1) [23,24]. Under normal circumstances, the wound healing process is orderly and can restore the structure and function of the tissue. However, in some cases of disease or injury, the process of wound healing may be disrupted, resulting in difficulty in healing or the formation of pathological scars [25]. Therefore, exploring effective ways to promote wound healing and improve scar quality is an important clinical issue. Stem cells are a class of cells with self-renewal and multidirectional differentiation capabilities, and they play an important role in tissue repair and regeneration [26]. MSCs are adult stem cells found in a wide range of tissues; Figure 2 provides an illustration of the sources of mesenchymal stem cells [27]. The low immunogenicity, ease of isolation and expansion, multiple differentiation potentials, and paracrine functions of MSCs make them ideal candidates for trauma therapy [28]. In recent years, more and more studies have shown that MSCs can promote wound healing through various mechanisms, such as their differentiation potential, paracrine function, and promotion of angiogenesis [29,30]. Among these, the paracrine function of MSCs refers to their involvement in the inflammatory, proliferative, and remodeling processes of wound healing through the secretion of a variety of bioactive molecules, such as growth factors, cytokines, chemokines, and exosomes, which influence the function and state of the surrounding cells and tissues [31]. Neovascularization is the sprouting of new blood vessels from the surrounding normal blood vessels during the wound healing process in response to the inflammatory response and angiogenic factors [32]. The proliferation and migration of endothelial cells in a wound form a new vascular network, thereby improving blood supply and oxygen delivery to the wound and promoting wound healing and tissue regeneration [33]. The newly formed blood vessels provide oxygen and nutrients to the wound tissue, promote cell proliferation and differentiation, and enhance the resistance and repair ability of the wound [32]. The new vessels also release cytokines, such as vascular endothelial growth factor (VEGF), transforming growth factor-β (TGF-β), and IL-6, which regulate the inflammatory response and the fibrotic process, inhibit the activation and secretion of inflammatory cells, and promote the proliferation of fibroblasts and collagen synthesis [31,34]. In addition, neovascularization improves the microcirculation in the wound tissue, reduces tissue hypoxia and edema, reduces the damage to and death of capillary endothelial cells, and maintains capillary patency [35]. The newly formed blood vessels can also influence the remodeling process of the wound tissue, promoting
steps such as epithelial formation, collagen remodeling, fibrosis, and the formation of structures such as granulation tissue and scar tissue [36,37]. In conclusion, neovascularization plays an important role in wound healing as both a stimulating and a regulatory factor. Neovascularization interacts with other factors that, together, determine the efficiency and quality of wound healing. Figure 1. Schematic depiction of wound healing phases and the corresponding cellular responses. In the initial phase of wound healing, when blood clotting occurs, platelets release signaling molecules and chemical messengers that attract inflammatory cells. Inflammation begins with the influx of neutrophils, facilitated by the release of histamine from mast cells. Subsequently, monocytes arrive and differentiate into tissue macrophages, which are responsible for clearing residual cell debris and neutrophils. In the proliferative phase, keratinocytes migrate to bridge the wound, new blood vessels form through the growth of tiny vessels, and specialized cells called fibroblasts replace the initial blood clot with a tissue known as granulation tissue. Macrophages and regulatory T cells play crucial roles during this stage of the healing process. Eventually, the newly formed tissue undergoes further restructuring as fibroblasts reshape the deposited matrix, the blood vessels diminish in size, and specialized cells called myofibroblasts contribute to the overall contraction of the wound. Reproduced with permission from [23].
MSCs can induce the proliferation, migration, and differentiation of peripheral vascular endothelial cells (VECs) by directly contacting or indirectly acting on VECs, thus promoting the formation and maturation of wound neovascularization and improving the wound healing rate and functional recovery [38]. Endothelial cells are the main constituent cells of blood vessels and play a key role in the process of neovascularization in wounds. Direct contact refers to physical or chemical interactions between MSCs and VECs, such as mechanical stretching, shear stress, and pharmacological inhibitors, which can enhance signaling and the silencing of transcription factors between VECs and MSCs [39]. Indirect effects are defined as the effects of MSCs on the microenvironment around VECs through the release of exosomes or other factors, such as vascular endothelial growth factor A (VEGFA), epidermal growth factor (EGF), and fibroblast growth factor 2 (FGF2) [40,41]. The exosomes secreted by MSCs can bind to receptors on VECs, activating downstream signaling pathways such as the EGFR, ERK1/2, and PI3K/Akt pathways and regulating the proliferation, migration, and differentiation of VECs, thereby promoting neovascularization [41]. MSCs are a class of adult stem cells with multiple differentiation potentials; under appropriate conditions they can differentiate into skin tissue cells, such as keratinocytes, fibroblasts, and endothelial cells, and thus participate in the re-epithelialization of wounds, the formation of granulation tissue, and the regeneration of blood vessels [37]. Keratinocytes are the main cell type of the skin's surface layer and play a key role in the re-epithelialization of wounds [42]. Re-epithelialization is the migration and proliferation of epithelial cells from the surface of the wound towards the center of the wound to form a new epithelial layer and restore the barrier function of the skin [43]. It has been found that exosomes isolated from human umbilical cord mesenchymal stem cells can accelerate the re-epithelialization of burn wounds by enhancing the downstream effects of Wnt signaling through the increased nuclear translocation of β-catenin, thereby promoting the proliferation and migration of keratinocytes [44,45]. Exosomes are vesicles enclosed by cell membranes that can carry a wide range of biologically active molecules, such as proteins, nucleic acids, and lipids, thereby transmitting information between cells [46]. The Wnt/β-catenin signaling pathway is a signal transduction pathway involved in cell proliferation and plays an important role in skin development and regeneration [47]. In recent years, it has been demonstrated that human umbilical cord mesenchymal stem cell exosomes can also accelerate the re-epithelialization of burn wounds by increasing the phosphorylation level of AKT and enhancing the downstream effects of AKT signaling, thereby promoting the proliferation and migration of keratinocytes [46]. The AKT signaling pathway is a signal transduction pathway involved in cell survival and proliferation and plays an important role in skin regeneration [38]. Fibroblasts are the main cell type in the dermis of the skin and play an important role in the proliferation and remodeling stages of wound healing [48]. Proliferation and remodeling refer to the synthesis and secretion of large amounts of extracellular matrix, such as collagen, elastic fibers, and fibronectin, by fibroblasts in the wound, thereby forming
granulation tissue that fills in the wound defect and strengthens the wound [48]. It has been demonstrated that exosomes isolated from human adipose MSCs can reduce the proliferation of scar fibroblasts and collagen synthesis by inhibiting the TGF-β/Smad signaling pathway, improving scar formation in burn wounds [49]. Keloid fibroblasts are a specific type of fibroblast that appears during the wound healing process; they are highly proliferative and secretory but lack the ability to degrade and remodel the extracellular matrix, leading to excessive deposition of extracellular matrix and fibrosis of the tissue [50]. The TGF-β/Smad signaling pathway is a signal transduction pathway involved in cell proliferation and differentiation and plays an important role in the regulation of scar formation [51]. Studies have shown that human adipose MSC exosomes can reduce scar formation in burn wounds by decreasing the expression and activity of TGF-β and inhibiting the phosphorylation and nuclear translocation of Smad2/3, thereby reducing scar fibroblast proliferation and collagen synthesis [52]. Additionally, MSCs also crosstalk with tissue cells such as macrophages to promote wound healing. Macrophages play a beneficial role in the wound repair process, and the anti-inflammatory M2 phenotype promotes wound healing in the later stages of healing [53]. Exosomes of MSCs are able to promote macrophage M2 polarization by targeting pknox1, thereby enhancing wound healing [54]. The Potential of Mesenchymal Stem Cells in the Treatment of Clinical Diseases The use of MSCs
in clinical diseases is a cutting-edge research area that promises to provide new strategies and approaches for the treatment of a variety of skin injuries and diseases, such as chronic refractory wounds and burn injuries. The main features of acute and chronic wound healing can be described at the anatomical level. MSC-derived extracellular vesicles (EVs) aid in tissue regeneration by facilitating the regrowth of the epidermis, dermis, hair follicles, nerves, and blood vessels, while also mitigating abnormal pigmentation (Figure 3, reproduced with permission from [55]). The roles of MSCs in wound healing are shown in Table 1. Chronic refractory wounds are wounds that cannot achieve structural and functional integrity in time and eventually settle into a chronic inflammatory state [56]. They usually require long-term care and treatment to promote healing and prevent complications. The difficulty in treating chronic refractory wounds lies in the lack of effective stimulating and growth factors, as well as the lack of precise control and regulation of the wound tissue. Burn injuries involve damage to the skin tissues from friction, cold, heat, radiation, chemical, or electrical sources [57]. They often cause severe complications such as edema, necrosis, and infection [57]. The difficulties in treating burn injuries lie in the lack of effective protective barriers and repair mechanisms, as well as the lack of individualized therapeutic regimens for different types and degrees of burns. In this section, we focus on the potential of MSCs in the treatment of clinical diseases from two perspectives: the treatment of chronic refractory wounds and of burn injuries. Table 1. MSC therapy in animal models to promote wound healing (condition; model; source; results; reference). Severe burn; rat; human umbilical cord blood: a reduction in the infiltration of inflammatory cells and in the levels of the inflammatory factors IL-1, IL-6, and TNF-α at the wound site, along with increased levels of VEGF and IL-10, contributed to the acceleration of wound healing [58]. Excisional wound; mouse; human Wharton's jelly-derived MSCs: promoted the proliferation and migration of fibroblasts [59].
To date, the treatments for chronic refractory wounds have included debridement, topical antibiotics, compression bandages, skin grafts, and cytokines [67][68][69][70]. However, all of these methods have certain limitations and side effects; for example, debridement can cause infection and bleeding [71]. Therefore, there is an urgent need to explore new approaches. MSC therapy is a novel therapy with great potential to restart the normal healing response in stalled wounds by increasing the accumulation of MSCs in the wound. The mechanisms of stem cell effects in chronic refractory wound healing include cell recruitment, cell differentiation, immunomodulation, antimicrobial effects, pro-angiogenic effects, and epidermal replantation [72,73]. Stem cells are applied to chronic refractory wounds by directly injecting MSCs into the wound [74,75]. Direct injection can rapidly increase the number and distribution of MSCs, promoting wound healing and tissue regeneration [75]. Several clinical trials and studies have investigated the efficacy and safety of MSCs in the treatment of chronic refractory wounds [55]. In short, MSCs have great potential and advantages in the treatment of chronic refractory wounds and can be used in a variety of ways to achieve wound repair and tissue regeneration. However, some challenges and problems remain, such as the need for further optimization and validation of stem cell sources, quality, quantity, preservation, transport, and safety. More high-quality clinical trials are needed to assess the efficacy and safety of MSCs in different types and degrees of chronic refractory wounds and to explore the optimal route of administration and dosage. Besides their promise in the treatment of chronic refractory wounds, MSCs also have great potential in the treatment of burn injuries and can promote the healing of burn wounds in several ways. Firstly, MSCs can inhibit the excessive inflammatory response after a burn injury by reducing the infiltration of inflammatory cells and the release of inflammatory factors, thus reducing the severity of the burn injury and the risk of complications [76]. They can also trigger the polarization of macrophages from the pro-inflammatory M1 type to the pro-healing M2 type, thereby promoting wound cleaning and repair [77]. In addition, MSCs can stimulate the formation of new blood vessels by secreting growth factors such as VEGF and FGF, which increase blood perfusion and provide nutrients to burn wounds, thereby accelerating wound healing [78]. They can improve the structure and function of wounds by secreting matrix proteins such as collagen, elastin, and fibronectin, which promote the reconstruction of the dermis and enhance the strength and elasticity of the wound [79]. MSCs can also stimulate the proliferation and differentiation of epidermal cells and promote the regeneration of the epidermal layer through the secretion of growth factors such as transforming growth factor-β (TGF-β), epidermal growth factor (EGF), and keratinocyte growth factor (KGF), thereby restoring the barrier function of the skin [23]. Furthermore, MSCs can regulate the remodeling of the extracellular matrix (ECM) by secreting enzymes such as matrix metalloproteinases (MMPs) and tissue inhibitors of metalloproteinases (TIMPs), which balance matrix synthesis and degradation, thereby reducing scar formation after a burn [80]. They can also inhibit keloid formation by restraining the proliferation and secretory activity of fibroblasts, thus improving the
restoration effects [81]. In summary, MSCs have multifaceted advantages in the treatment of burns and can influence the healing process of burn wounds at multiple levels, including inflammation, vascularity, and scarring, in the dermis and epidermis, improving the quality of life and survival rate of burn patients. Several clinical studies and case reports have confirmed the efficacy and safety of MSCs in the treatment of burn injuries [82], but further exploration of the optimal source, dosage, delivery, timing, and mechanism of MSCs is still needed to provide better treatment plans for burn patients. Applications of Mesenchymal Stem Cells in Skin Regeneration and Rejuvenation The regenerative ability of MSCs is not limited to disease treatment; it is also used in the aesthetic field. MSCs have received attention due to their multilineage differentiation abilities, and some studies have reported that mesenchymal stem cell exosomes can be used in the treatment of skin aging. In addition to chronological aging, human skin also undergoes photoaging, a form of sun-induced skin aging. Over the past 30 years, research on the molecular mechanisms of skin photoaging, such as the production of intracellular reactive oxygen species (ROS), has made substantial progress. The generation of ROS damages the connective tissues in human skin [83], and long-term exposure to ultraviolet rays has been shown to decrease collagen production. In contrast, MSCs, with their antioxidant and anti-apoptotic effects, can reduce the production of matrix metalloproteinases (MMPs) and activate the proliferation of dermal fibroblasts [84]. A study comparing adipose-derived stem cells (ADSCs) with fibroblasts in the improvement of skin wrinkles caused by photoaging found that ADSCs are as effective as fibroblasts in promoting collagen production [85]. Many studies have demonstrated that stem cell-based therapies and medicines can inhibit the signaling cascades that participate in telomere shortening, estrogen depletion, and excessive ROS production [86]. It has been reported that ADSCs have anti-aging effects on senescent cells and in animal models of premature aging: ADSCs can accelerate mitophagy, eliminate intracellular ROS, and ultimately increase the number of healthy mitochondria [87]. Exosomes from the conditioned medium of human induced pluripotent stem cells (iPSCs) have therapeutic potential in the treatment of skin aging caused by photoaging and natural senescence. Pretreated induced pluripotent stem cell exosomes (iPSC-Exo's) can inhibit the overexpression of MMP-1/3, attenuate the human dermal fibroblast (HDF) injury caused by UVB, and ultimately restore the expression of type I collagen [88]. After the injection of autologous adipose-derived mesenchymal stem cells expanded in vitro, completely de novo oxytalan and elaunin fiber production in the subepidermal area and rebuilding of the dermal-epidermal junction were observed, indicating the complete rescue of solar elastosis [89].
Moreover, adipose stem cell-conditioned culture medium (ADSC-CM) can reduce UVB-induced apoptosis and stimulate collagen synthesis by HDFs to reduce the appearance of wrinkles, as confirmed by a shortening of the subG1 phase of HDFs. ADSC-CM can also increase the expression of type I collagen in HDFs and increase the level of metalloproteinase 1 [90]. Conditioned medium from human umbilical cord blood-derived mesenchymal stem cells (USC-CM) contains epidermal growth factor (EGF), basic fibroblast growth factor (bFGF), platelet-derived growth factor (PDGF), hepatocyte growth factor (HGF), collagen type 1, and growth differentiation factor-11 (GDF-11), one of the "youth" growth factors [91]. In one experiment, GDF-11 significantly accelerated the growth and migration of HDFs and also improved the production of extracellular matrix (ECM). USC-CM exosomes (USC-CM Exo's) contain growth factors related to skin rejuvenation that act on HDFs to promote cell migration and collagen synthesis [92]. In recent years, the secretome of mesenchymal stem cells has been explored as a prospective biological therapy. However, such extracts are unstable and non-specific, limitations that could be addressed using artificial intelligence. Because machine learning and artificial intelligence can predict and simulate protein folding and peptide/protein structure interactions, they have been used to screen for promising biomimetic peptide components in mesenchymal stem cell secretions. One peptide virtual screening model identified EQ-9 as a peptide with anti-aging and skin repair functions; EQ-9 potentially inhibits inflammation by increasing the fibroblast survival rate and decreasing intracellular ROS levels [93]. It has been reported that microfragmented adipose tissue containing adipose stem cells, combined with a crosslinked hyaluronic acid scaffold, can improve soft tissue defects such as deep wrinkles [94]. Stem Cell Delivery into Skin Wounds A dermal substitute can promote wound healing by shortening the healing time and improving the impaired function of the injured tissues. As a prospective carrier of stem cells, an sECM-MC hydrogel, composed of soluble ECM (sECM) and methyl cellulose (MC), was inserted into a full-thickness skin wound and led to wound healing through re-epithelialization and neovascularization [95].
Through the combination of biomaterials and living cells, tissue engineering technology can advance regenerative medicine. Tissue engineering scaffolds can transport stem cells to the sites in need of repair, increasing the retention and engraftment rates of stem cell transplants [96,97]. The rise of 3D-printing technology further enriches the structural design and composition of tissue engineering scaffolds and facilitates the loading and delivery of live cells. Transplanting human gingival tissue multipotent mesenchymal stem cells/stromal cells in 3D-printed medical-grade polycaprolactone (mPCL) dressings resulted in wound contracture and significantly improved skin regeneration through granulation and re-epithelialization [98]. One study showed that dermal vascular endothelial cells in wounds treated with chitosan exhibited better immunoreactivity [99]. A new type of chitosan/decellularized dermal matrix (CHS/ADM) stem cell delivery system can overcome the limitation of the traditional collagen delivery system, which lacks a response to high-ROS environments. In a high-ROS microenvironment, the new system acts as a protective screen and effectively clears a portion of the ROS, protecting mesenchymal stem cells (MSCs) from oxidative stress [100]. Mesenchymal stem cells derived from rat adipose tissue, seeded on collagen-chitosan scaffolds and implanted into wounds, can completely heal the wound, restoring the epidermis and dermis to a normal state [101]. Decellularized human amniotic membranes (dAMs) and amniotic matrix (sAM) were used as wound dressing scaffolds; AdMSCs were seeded onto dAMs or sAM, and the results showed that they can promote wound healing by enhancing angiogenesis and collagen remodeling [102]. A scaffold made of polyethylene terephthalate (PET) can be used to load mesenchymal stromal cells (MSCs) and increase the rate of wound re-epithelialization [103]. BM-MSCs delivered by EGF microspheres into an engineered skin model improved skin wound healing and repaired sweat glands [104]. Stem cells and nano-formulated simvastatin, both of which enhance wound healing, can be applied locally in the same tissue scaffold (TS) to provide a more effective option for diabetic wound healing [105]. A DADM/MSC scaffold containing bone marrow mesenchymal stem cells (BM-MSCs) and a degenerated decellularized dermal matrix (DADM) promoted wound healing in deep, extensive burn wounds. It has been reported that, combined with cell therapy, a pre-synthesized novel nanoscaffold made of nanocellulose, type I collagen, and carboxymethyl diethylaminoethyl cellulose has a synergistic effect on wound healing in rats [106]. Researchers have also developed an in situ cell electrospinning system that overcomes the shortcomings of some stem cell delivery methods, such as a lack of targeting and easy cell loss. The system increased collagen deposition to enhance extracellular matrix remodeling without negatively impacting surface marker expression or the differentiation ability of MSCs.
Furthermore, it also increased the expression of vascular endothelial growth factor (VEGF) and the formation of small blood vessels to promote angiogenesis, while significantly reducing the expression of interleukin-6 (IL-6), ultimately promoting skin wound healing [107]. When poly(ε-caprolactone) (PCL) and poly(ε-caprolactone)/type I collagen (PCol) were compared in their ability to promote biological signaling, wound coverage, and tissue repair processes, PCol seeded with human Wharton's jelly mesenchymal stromal cells (hWJ-MSCs) was found to have a better effect on skin tissue repair [108]. The rheological properties of hydrogels are similar to those of the natural extracellular matrix of skin, and they can simulate various of its functions; therefore, hydrogels have good development prospects as stem cell delivery vehicles [109]. Hydrogels derived from porcine myocardial matrix have entered clinical trials (NCT02305602) for the prevention and treatment of heart failure after myocardial infarction. Additionally, natural polymer hydrogels have great advantages in accelerating chronic wound healing. Adipose-derived stem cell (ADSC)- and platelet-rich plasma (PRP)-supported hydrogel systems based on methacrylated gelatin (GelMA) and methacrylated silk fibroin (SFMA) have been developed as cell and growth factor delivery carriers for the treatment of pressure ulcers and have shown good efficacy [110]. In addition, studies have indicated great potential for using poloxamer hydrogel as a cell carrier to support human mesenchymal stromal cells (hMSCs) [111]. Human umbilical cord mesenchymal stem cells (hUC-MSCs) encapsulated in a functional injectable thermosensitive hydrogel (chitosan/sodium glycerophosphate/cellulose nanocrystals, CS/GP/CNC) can be used to repair full-thickness skin wounds and significantly accelerate wound closure, microcirculation, tissue remodeling, re-epithelialization, and hair follicle regeneration [112]. By promoting the transformation of M1-type macrophages into M2-type macrophages and accelerating wound angiogenesis, a carboxyethyl chitosan (CEC)-dialdehyde carboxymethyl cellulose hydrogel loaded with bone marrow mesenchymal stem cell-derived exosomes (MSC-Exos@CEC-DCMC HG) reduced inflammation [113]. Microgels consisting of aligned silk nanofibers were used to load MSCs and regulate paracrine signaling; dispersing the MSCs into these injectable silk nanofiber hydrogels can protect and stabilize these cells in wounds.
At the same time, the system is tunable, which enhanced the effect of the MSCs [114]. Recently, a novel polysaccharide-based hydrogel scaffold was made using alginate to create a suitable microenvironment for the delivery of adipose-derived mesenchymal stem cells (ASCs) and was demonstrated to improve wound healing processes and accelerate wound closure [115]. In situ hydrogel systems composed of hyperbranched polyethylene glycol diacrylate (HB-PEGDA) polymers, sulfhydryl-functionalized hyaluronic acid (HA-SH), and short RGD peptides bound to adipose-derived stem cells (ASCs) significantly enhanced neovascularization and accelerated wound healing [116]. With the aim of reducing scar formation, bone marrow mesenchymal stem cell (BMSC)-derived nanovesicles (NVs) released from hydrogels were enhanced by genipin and BSA, resulting in efficient ROS clearance and good immunomodulatory activity, and promoted the proliferation and migration of fibroblasts and vascular endothelial cells, effectively treating diabetic wounds [117]. Injectable hydrogels made from adipose acellular matrix (hDAT-gel) combined with human adipose stem cells (hASCs) can accelerate the formation of blood vessels at the wound site to a certain extent and accelerate wound healing, showing great potential in the field of wound healing [118]. It has been found that stimulating rat adipose stem cells (rASCs) with a 5 µA electrical stimulation (ES) protocol (5 µA PFS) can enhance their paracrine function, and delivering the 5 µA PFS-treated rASCs with a heparinized PGA host-guest hydrogel (PGA-Hp hydrogel) effectively accelerated the repair process in a rat full-thickness wound model. Amino-functionalized mesoporous silica nanoparticles (MSNs) can enhance the stability of hydrogel beads and significantly improve the proliferative properties of human adipose-derived mesenchymal stem cells (hASCs). In one study, treating circulating monocytes with mesenchymal stem cell (MSC) supernatant to produce activated macrophages in a double-layer scaffold composed of hydrogels and nanofibers resulted in faster wound healing rates [119]. In another similar study, a double-layer scaffold composed of hydrogels and nanofibers was also prepared and seeded with ADSCs; optimal re-epithelialization, collagen production, neovascularization, and reduction in inflammation in the wound area were observed [120]. Currently, synthetic cell carriers can be produced by polymerizing acrylic plasma onto medical-grade silica gel for the purpose of delivering hBM-MSCs into the skin [121]. Silk fibroin (SF) was shown to significantly improve the adhesion of bone marrow mesenchymal stem cells (BMSCs), while Col/TSF hybrid scaffolds have excellent skin affinity, good air and water permeability, and good wound healing potential [122]. The Effect of Stem Cell Sheets Scaffolds have disadvantages such as the requirement for a sufficient supply of cells and for correct cell injection positions. In contrast, cell sheets (cells that are either self-supporting or delivered from a supporting material, where the material plays no long-term role in the therapy [123]) do not need a scaffold and have a higher cell density, which is one of the crucial factors for enhancing the therapeutic function of cell transplantation. Due to their high cell density, cell sheets show a longer retention time at the transplant site as well as more local delivery of growth factors and cytokines.
Cell sheet engineering (CSE) has attracted increasing attention as a competitive alternative to traditional cell-based or scaffold-based approaches due to its inherent advantages of higher cell survival and biocompatibility [124]. Beyond heart tissue, the benefits of ASC sheets for tissue regeneration are mainly reflected in the skin [125]. Early studies found that the combination of ASC sheets and artificial skin grafts accelerates wound healing and blood vessel formation [126]. A ROS-induced cell sheet stacking method was designed in which newly prepared hematoporphyrin was incorporated into a polyketone membrane (Hp-PK membrane); this could improve the delivery efficiency of cell sheets and be effectively applied to wound healing [127]. Human umbilical cord mesenchymal stem cells (hUC-MSCs) were cultivated on Col-T scaffolds to prepare stem cell sheets, which could restore the structure and function of damaged tissues [128]. The injection of disintegrated human amniotic fluid stem cell (hAFSC) sheets can exert anti-fibrotic properties without delaying wound closure, thereby accelerating skin wound healing and reducing fibrotic scarring, similar to fetal wound healing. Compared to dissociated cells, treating wound tissues with ASC sheets has the advantages of faster wound healing and minimal risk of long-term side effects [129]. Compared with healthy individuals, diabetic patients with foot ulcers usually show prolonged wound healing due to diabetic neuropathy and impaired blood flow. However, it has been indicated that the direct injection of human adipose-derived stem cells (hASCs) can effectively accelerate wound healing in diabetic patients, although the same study pointed out that hASCs have the disadvantage of relative instability [130]. Studies of skin pressure sore healing induced by the injection of MSC-based cell sheets (CSs) in C57Bl/6 mice found that, despite a brief retention of the CSs on the ulcer surface (3-7 days), there was an increase in granulation tissue (GT) thickness and increased vascular maturity; at the same time, compared to the mesenchymal stromal cell (MSC) exosome group, the CSs had a unique capacity for skin repair involving skin appendages [131]. Additionally, using the peritoneum as a support for the precise transplantation of ASC sheets onto the backs of SD rats had better effects on gross and histopathological repair than simply injecting the ASC sheets [132]. Compared with fibroblast sheets, intact sheets composed of amniotic mesenchymal stem cells have a higher tendency to disintegrate and have the potential to treat burn wounds [133]. Moreover, adipose-derived stem cell (ADSC) sheets can also promote peripheral nerve regeneration [134] and repair ulcerated oral mucosa [135]. In order to harvest cells from culture with an intact extracellular matrix (ECM) and preserved intercellular connections, many new materials and methods have been developed, among which nanomaterials are a hot topic. For example, TiO2 nanodots are the most commonly used nanomaterials in photoinduced cell sheet technology, and two-dimensional (2D) nanomaterials such as graphene have also been applied in photoinduced cell sheet technology [136]. Effects of Conditioned Media There are some limitations to the clinical use of ASCs. For autologous ASCs, the cells need to be cultured for several weeks to obtain a sufficient number, and the process comes with significant costs and requires staff to maintain the cell culture and cell processing facilities.
At present, there is increasing evidence that mesenchymal stem cells can promote skin repair and regeneration through paracrine actions [137][138][139]. Mesenchymal stem cell-conditioned medium (MSC-CM), as a cell-free therapy, can accelerate the wound healing process while avoiding the risks of live-cell therapy. In addition, conditioned medium (CM) can be easily manufactured, stored, and transported. A study found that Wharton's jelly mesenchymal stem cell (WJ-MSC)-derived conditioned medium contained secreted factors that promoted the proliferation of human umbilical vein endothelial cells (HUVECs), increased the regeneration of sebaceous glands, and enhanced HUVEC-induced angiogenesis. By promoting the expression of α-SMA, MSC-CM significantly increased the number of skin blood vessels in healed wounds. In the same work, MSC-CM was used to treat radiation dermatitis in rats for the first time [140]. Another similar study showed that WJ-MSCs had a greater ability to support sweat gland repair and skin regeneration after skin injury, and the MSC-CM group had the smallest wound area and the highest Col1A2 expression [141]. The conditioned medium of human cord blood mesenchymal stem cells (USC-CM) has an anti-inflammatory effect through the growth factors and cytokines it contains, such as EGF [142]. ASC-CM treatment was shown to promote the anti-inflammatory phenotype of macrophages, partially protecting the skin barrier damaged by PMA exposure, and has broad application prospects in wound healing and skin inflammation [143]. Moreover, USC-CM also includes growth factors associated with skin rejuvenation, such as growth differentiation factor-11 (GDF-11) [144]. It has also been found that, after injecting medium conditioned by mesenchymal stromal cells into a wound, a lower inflammation level and enhanced epithelialization were observed [145]. In a diabetic foot ulcer (DFU) model pretreated with rat bone marrow MSC-conditioned medium, it was demonstrated that MSC-CM injected into DFU rats promotes the wound healing process by accelerating wound closure, cell proliferation, and angiogenesis, without increasing ulcer cell death [146]. Excessive wound repair can lead to hypertrophic scars or keloids, and it is generally believed that skin fibroblasts play an important role in the scarring process. Hypoxia-conditioned media of placenta-derived mesenchymal stem cells (PMSCs) reduced scarring in vivo and inhibited the proliferation and migration of skin fibroblasts in vitro [147], suggesting that PMSCs may be a promising wound treatment. Other Practicable Methods to Deliver Stem Cells Photobiomodulation (PBM) applied to human adipose-derived stromal cell (hASC) globules before transplantation (PBM-globules) can stimulate angiogenesis and tissue regeneration in mouse flaps, improving skin tissue functional recovery. When wound tissue is treated with cell sheets composed of adipose-derived stem cells (ASCs), more transplanted ASCs can be observed in the wound tissue, and the new skin formed has a thickness similar to normal skin and a highly organized collagen structure, which can ultimately improve skin wound healing and reduce scarring [129]. The injection of cell sheets made from disintegrated human amniotic fluid stem cells (hAFSCs) can exert anti-fibrotic properties without delaying wound closure [148].
Studies have found that three-dimensional graphene foam (GF) scaffolds have good biocompatibility, and their combination with bone marrow mesenchymal stem cells (MSCs) promotes the growth and proliferation of MSCs, which can improve skin wound healing [149]. Platelet-rich plasma (PRP) products are believed to have a pro-angiogenic effect and are currently recommended for the treatment of chronic wounds. In one study, the researchers added PRP to irradiated human dermal microvascular endothelial cell (HDMEC) and hASC cultures to prevent a large radiation-induced drop in cell numbers, rescuing the proliferation defects caused by external radiation. This method may be beneficial for treating chronic wounds with defective healing processes [150]. Denatured acellular dermal matrix (DADM), used as a skin substitute combined with bone marrow mesenchymal stem cells (BM-MSCs) and implanted on mouse skin wounds, has good survival characteristics and represents a promising alternative therapy for deep, extensive burn wound healing [151]. To verify that preconditioning with hypoxia enhances the functions of adipose-derived stem cells (ASCs), porcine adipose stem cells cultured under hypoxia (pASCs-HYP) were transplanted into a mouse model of full-thickness skin wound excision and were shown to promote the expression of the angiogenesis marker VegfA and to decrease the level of the proliferation-promoting Tgfβ1 [152]. Low-level laser therapy (LLLT) can increase the survival rate of ASCs, thereby stimulating the secretion of growth factors [153]. Gene-activated scaffolds (GASs) carrying therapeutic genes were loaded with bone marrow-derived mesenchymal stem cells (BM-MSCs) to form GAS/BM-MSC constructs; these constructs were shown to accelerate the wound healing process and induce in situ regeneration of full-thickness skin with sweat glands [154]. The use of micronized amniotic membrane (mAM) as a microcarrier can improve the in vitro expansion efficiency of mesenchymal stem cells [155]. Studies have shown that the pre-treatment of MSCs with bioactive compounds can improve their survival rate and regenerative potential. Human umbilical cord mesenchymal stem cells (hUC-MSCs) can significantly improve healing after treatment with quercetin [156]. Rosuvastatin calcium-loaded scaffolds were prepared, combined with MSCs, and implanted into mouse wounds, and the mesenchymal stem cell-drug scaffolds achieved complete skin healing after 30 days [157].

In one study, a 3D radially and vertically aligned nanofiber scaffold was developed that perfectly matched the size, depth, and shape of diabetic wounds and was used to transplant bone marrow mesenchymal stem cells (BMSCs), promoting granulation tissue formation, angiogenesis, and collagen deposition, and shifting the immune response in a pro-regenerative direction [158]. The cord blood platelet gel (CBPG) developed by the Cord Blood Unit (CBU) has been successfully applied to induce wound closure and tissue regeneration [159].

Summary and Outlook

In recent years, owing to the development of tissue engineering technology, the use of mesenchymal stem cells as a new approach to wound healing has garnered extensive research, in combination with techniques such as nanotherapy, stem cell therapy, and 3D bioprinting. Mesenchymal stem cells also raise relatively little ethical controversy and are available from many tissue sources: bone marrow, adipose tissue, umbilical cord blood, Wharton's jelly, and amniotic fluid can all be collected with relative ease.
In general, the mechanisms through which mesenchymal stem cells promote wound healing have been thoroughly discussed at the molecular and cellular levels. Preclinical and clinical studies on the application of mesenchymal stem cells in the treatment of burns and chronic wounds have fully demonstrated their potential in the field of wound healing. Moreover, mesenchymal stem cells, combined with other tissue engineering techniques, can play a greater role in skin tissue repair. For example, through 3D-printing technology, medical-grade materials such as polycaprolactone (mPCL) and polyethylene terephthalate (PET) can be used to effectively create scaffolds that transport stem cells to the desired repair site. Scaffolds made by combining chitosan and collagen can overcome the limitations of traditional collagen scaffolds and even achieve complete wound healing. In addition, hydrogels can be used as the main raw material for delivery carriers because their rheological properties are similar to those of skin. When combined with these emerging materials, mesenchymal stem cells have also shown good performance in promoting wound healing in many studies.

However, many relevant studies are still in the preclinical stage, and further exploration is needed before they can truly be translated into clinical wound healing treatments. Nevertheless, mesenchymal stem cells have demonstrated their capability, and we are confident that they will become a mainstay of wound healing in the future.

Figure 1. Schematic depiction of wound healing phases and the corresponding cellular responses. In the initial phase of wound healing, when blood clotting occurs, platelets release signaling molecules and chemical messengers that attract inflammatory cells. Inflammation begins with the influx of neutrophils, facilitated by the release of histamine from mast cells. Subsequently, monocytes arrive and differentiate into tissue macrophages, which are responsible for clearing residual cell debris and neutrophils. In the proliferative phase, keratinocytes migrate to bridge the wound, new blood vessels form through the growth of tiny vessels, and specialized cells called fibroblasts replace the initial blood clot with a tissue known as granulation tissue. Macrophages and regulatory T cells play crucial roles during this stage of the healing process. Eventually, the newly formed tissue undergoes further restructuring as fibroblasts reshape the deposited matrix, the blood vessels diminish in size, and specialized cells called myofibroblasts contribute to the overall contraction of the wound. Reproduced with permission from [23].

Figure 2. Schematic diagram illustrating the two primary sources of mesenchymal stem cells: adult-derived and perinatal-derived sources. Reproduced with permission from [27].
Figure 3. Illustration depicting the impact of MSC-derived EVs on the process of wound healing. Reproduced with permission from [55].

Table 1. MSC therapy in animal models to promote wound healing.
2024-03-31T15:17:03.352Z
2024-03-27T00:00:00.000
{ "year": 2024, "sha1": "903e23dee0f8e873014620fc39a6ecd96a9bd126", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9059/12/4/743/pdf?version=1711534585", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "75c922dd3b02cdb76bd34e64323832f3142de686", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
203601891
pes2o/s2orc
v3-fos-license
Resectability of Rectal Neuroendocrine Tumors Using Endoscopic Mucosal Resection with a Ligation Band Device and Endoscopic Submucosal Dissection

Background Rectal neuroendocrine tumors (NETs) < 10 mm in diameter, limited to the submucosa without local or distant metastasis, can be treated endoscopically. Endoscopic mucosal resection with a ligation band device (EMR-L) and endoscopic submucosal dissection (ESD) have been employed to resect rectal NETs. We evaluated and compared the clinical outcomes of EMR-L and ESD for endoscopic resection of rectal NETs G1 < 10 mm in diameter. Methods We conducted a retrospective study of 82 rectal NETs in 82 patients who underwent either EMR-L or ESD. Therapeutic outcomes (en bloc resection and complete resection rates), procedure time, and procedure-related adverse events were evaluated. Additionally, we measured the distance of the lateral and vertical margins from the border of the tumor in pathologic specimens and compared the resectability between EMR-L and ESD. Results Sixty-six lesions were treated using EMR-L and 16 using ESD. En bloc resection was achieved in all patients. The complete resection rate with EMR-L was significantly higher than that with ESD (95.5% vs. 75.0%, p = 0.025). The prevalence of vertical margin involvement was significantly higher in the ESD group than in the EMR-L group (12.5% vs. 0%, p = 0.036), and ESD was more time-consuming than EMR-L (24.21 ± 12.18 vs. 7.05 ± 4.53 min, p < 0.001). The lateral and vertical margins were more distant in the EMR-L group than in the ESD group (lateral margin distance, 1661 ± 849 vs. 1514 ± 948 μm; vertical margin distance, 277 ± 308 vs. 202 ± 171 μm). Conclusions EMR-L is more favorable for small rectal NETs with respect to therapeutic outcomes, procedure time, and technical difficulties. Additionally, EMR-L enables achievement of sufficient vertical margin distances.

Background

Rectal neuroendocrine tumors (NETs) occur in the enterochromaffin cells of Lieberkühn's crypts [1]. These tumors were believed to possess biologically indolent behavior and were formerly called carcinoid tumors. Recently, there has been a gradual shift from the term carcinoid tumor to neuroendocrine tumor, which is further classified according to the site of origin and grade based on the proliferation indices of tumor cells, such as mitotic figures and Ki-67 labeling index [2]. Although rectal NETs are uncommon, representing only 1.1%-1.8% of all anorectal neoplasms, their incidence has considerably increased in recent decades [3,4]. The rectum is the third most common site for NETs reported in western countries, following the small bowel and colon (including the appendix); however, in Asia, including Korea, the rectum is the most common site for all cases of gastrointestinal NETs and accounts for 48%-61% of cases [5,6]. In rectal NETs, the risk of metastasis depends on tumor size, histologic differentiation, proliferative index, and lymphatic, vascular, or neural invasion [7][8][9][10]. Of these, tumor size is the most important factor for predicting the risk of metastasis [11]. For lesions > 20 mm, radical surgery including lymph node dissection should be performed [9]. However, for tumors < 1 cm in diameter without infiltration of the muscularis propria, or lymph node and distant metastasis, endoscopic resection is recommended.
Additionally, tumors of 1-2 cm can also be removed endoscopically, provided there are no features of metastatic potential such as a high mitotic rate, muscularis propria invasion, and lymph node and distant metastasis [10,12,13]. To date, various endoscopic techniques have been developed to resect rectal NETs. Endoscopic mucosal resection with a ligation band device (EMR-L) and endoscopic submucosal dissection (ESD) have been reported to achieve complete resection of rectal NETs [14][15][16][17]. However, ESD is a time-consuming procedure and requires advanced endoscopic skills, compared to EMR-L. EMR-L ensures achievement of a sufficient safety margin, as compared to conventional EMR, because it creates a pseudopedicle before resection via submucosal injection below the lesion and a ligation band device that can remove a deeper part of the submucosal layer [18,19]. According to a previous report, EMR-L achieves a complete resection rate as high as that with ESD [15,20]. Additionally, this procedure is easy, simple, less time-consuming, and carries a low risk of adverse events such as bleeding and perforation [15,16]. Complete resection is an important indicator of a curative treatment for rectal NETs. Therefore, this retrospective study is aimed at evaluating and comparing the clinical outcomes of EMR-L and ESD for endoscopic resection of rectal NETs G1 < 10 mm in diameter in terms of complete resection and recurrence rate. Additionally, we hypothesized that the longer the lateral and vertical margin distances from the borders of the tumor in a pathologic specimen, the higher the possibility of complete resection and the better the endoscopic resection method. Thus, we measured the lateral and vertical margin distances from the borders of the tumors in pathologic specimens and compared them between EMR-L and ESD.

Material and Methods

2.1. Patients. Between January 2011 and December 2012, a total of 82 rectal NETs in 82 consecutive patients (45 men, 37 women; median age, 51.8 years; range: 29-71 years) were resected using either EMR-L or ESD at the Pusan National University Hospital in Korea. Clinical data from these 82 cases including age, sex, tumor size, tumor location, endoscopic procedure, procedure time, procedure-related adverse events, and follow-up outcomes were collected. All patients were informed of the benefits and risks of the procedure. Written informed consent to perform EMR-L or ESD was obtained from all enrolled patients. This study was approved by the Institutional Review Board of Pusan National University Hospital, Busan, Korea (approval number: 1708032058). Rectal NETs were defined as NETs located within 15 cm of the anal verge. We divided the rectum into the following three parts: lower, middle, and upper rectum. From the anal verge, the three parts were defined as follows: the lower rectum extended from the anal verge to 6 cm; the middle rectum, from 7 to 12 cm; and the upper rectum, from 12 to 15 cm [21]. For the evaluation of tumor size and depth of invasion, endoscopic ultrasonography (EUS, GF-UC240P-AL5, Olympus Optical Co., Tokyo, Japan) was performed in all patients before endoscopic resection. Abdominal computed tomography (CT) and chest radiography were performed to exclude the presence of local and distant metastasis.
The indications for endoscopic treatment of rectal NETs were as follows: histopathologically proven rectal NETs before endoscopic resection; typical rectal NET appearance (small, sessile, and submucosal tumors covered with yellow discolored mucosa) observed endoscopically but not diagnosed histopathologically [22]; tumors located within the submucosal layer as noted with EUS; and no evidence of local or distant metastasis on chest radiography and abdominal CT [9,13].

2.2. EMR-L and ESD Procedures. These procedures were performed by two highly experienced endoscopists (G.A.S. and D.Y.R.) with >5 years of experience in performing therapeutic endoscopy (extensive experience in >3000 colorectal EMR cases and >300 colorectal ESD cases). The decision to perform EMR-L or ESD was made at the discretion and individual preference of the attending endoscopists. Bowel preparation with a polyethylene-glycol solution and ascorbic acid (Coolprep; Taejoon Pharmaceuticals, Seoul, Korea) was performed before endoscopic resection. EMR-L and ESD were performed using a single channel scope (GIF-H260; Olympus Co., Ltd., Tokyo, Japan).

2.2.1. EMR-L Procedure. The endoscope was inserted into the rectum. Saline solution containing epinephrine diluted to 1:100,000 and a small amount of indigo carmine was injected into the submucosal layer beneath the lesion. After lifting the tumor off the muscularis propria, an endoscope with a band ligation device attached to its tip was reinserted into the rectum. Subsequently, the lesion was aspirated into the transparent cap, followed by deployment of the elastic band. Snare resection was performed below the elastic band, using blend electrosurgical current (Figure 1).

2.2.2. ESD Procedure. The endoscope with a transparent hood attached to its tip was inserted into the rectum. Saline solution containing epinephrine diluted to 1:100,000 and a small amount of indigo carmine was injected into the submucosal layer around the lesion. A circumferential mucosal incision was made at 3-5 mm from the lesion. Subsequently, additional saline was injected beneath the lesion to lift the lesion apart from the muscularis propria. Finally, the submucosal layer was directly dissected using a dual knife (KD-650L; Olympus, Tokyo, Japan) (Figure 2).

2.3. Histopathological Evaluations and Follow-Up. The tumor size was determined by measuring the resected specimen prior to tissue fixation in formalin. The maximum diameter was used as the measure for tumor size. Resected specimens were evaluated histopathologically in slices at 2 mm intervals, using light microscopy at low-power and high-power magnifications by an experienced pathologist (D.Y.P.). The specimens were carefully examined for histopathological type, differentiation, depth of invasion, lateral and vertical resection margins, and lymphovascular invasion. Complete resection refers to en bloc resection with no tumor cells identified at the lateral and vertical margins. Based on the 2010 classification criteria proposed by the World Health Organization (WHO) [2], the proliferation of tumors was evaluated by using the Ki-67 index and calculating the mitotic count. In particular, we measured the lateral and vertical margin distances from the borders of tumors. The lateral margin distance was defined as the horizontal distance from the border of the tumor in the resected specimen, whereas the vertical margin distance was defined as the vertical distance from the border of the tumor in the resected specimen (Figure 3).

2.4. Outcome Parameters.
The primary outcomes were en bloc and complete resection rates. The secondary outcomes were procedure time, procedure-related adverse events, and recurrence rate. Additionally, we measured the lateral and vertical margin distances from the borders of tumors in pathologic specimens and compared the lateral and vertical margin distances between EMR-L and ESD. En bloc resection was endoscopically defined as resection of the entire lesion in a single piece. Complete resection was histopathologically defined according to the following criteria: en bloc resection, no tumor cells on the lateral and vertical resection margins of the resected tumor, well-differentiated NET, and no lymphovascular invasion according to the 2010 WHO classification [2]. Procedure time was defined as the time from identification of the lesion to complete resection of the tumor. Procedure-related adverse events included bleeding and perforation. Procedure-related bleeding was defined as hematochezia after completion of EMR-L or ESD, which required endoscopic or radiologic hemostasis or blood transfusion. Bleeding that occurred during the EMR-L or ESD procedure and was treated endoscopically was not regarded as procedure-related bleeding. Procedure-related perforation was defined as a visible hole in the rectal wall recognized during the endoscopic procedure or the presence of air in the peritoneum or retroperitoneum demonstrated by radiologic examinations. After endoscopic treatment, the follow-up interval for endoscopic examination and CT was at least 12 months. We recommended that patients whose lesions were detected to have lateral and/or vertical margin involvement undergo additional surgery with regional lymph node dissection. For patients who refused to undergo additional surgery, follow-up with rectoscopy, chest radiography, and abdominal CT was performed annually. If residual tumors on the scar were suspected, we performed endoscopic biopsies.

Of 82 lesions, 66 were resected using EMR-L and 16 were resected using ESD. The EMR-L group included 66 patients (37 men, 29 women; mean age, 51.61 ± 9.81 years), whereas the ESD group included 16 patients (8 men, 8 women; mean age, 52.69 ± 9.83 years). Endoscopic biopsy prior to the procedure was performed in 74.2% of patients in the EMR-L group and 75.0% of patients in the ESD group. The mean diameters of tumors in the EMR-L and ESD groups were 5.02 ± 1.69 and 7.08 ± 2.15 mm, respectively (p = 0.002).

Endoscopic and Histopathological Outcomes of EMR-L and ESD. Table 2 shows the therapeutic outcomes of EMR-L and ESD. En bloc resection was endoscopically achieved in all patients. However, the complete resection rate in the EMR-L group was 95.5% (63/66), which was significantly higher than that in the ESD group (75% (12/16), p = 0.025). Lateral margin involvement was observed in 3 cases in the EMR-L group (4.5%) and 3 cases in the ESD group (18.8%; p = 0.085). The rate of vertical resection margin involvement was significantly lower in the EMR-L group (0 of 66 lesions, 0%) than in the ESD group (2 of 16 lesions, 12.5%; p = 0.025). The mean procedure duration for EMR-L vs. ESD was 7.05 ± 4.53 vs. 24.21 ± 12.18 min (p < 0.001). ESD was a more time-consuming procedure than EMR-L. Procedure-related adverse events such as bleeding and perforation did not occur in either group. No local or metastatic recurrence was observed in either group during the follow-up period (mean, 41.9 months; range: 18-66 months).
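The paper does not name the statistical test behind these p-values, but for 2x2 tables with cell counts this small, Fisher's exact test is the usual choice, and it closely reproduces the reported figures. A minimal sketch in Python, assuming SciPy is available (the contingency-table layout below is our reconstruction, not the paper's):

    from scipy.stats import fisher_exact

    # Complete resection: EMR-L 63/66 vs. ESD 12/16 (reported p = 0.025)
    odds_ratio, p = fisher_exact([[63, 3], [12, 4]])
    print(f"complete resection: OR = {odds_ratio:.1f}, p = {p:.3f}")  # OR = 7.0, p ~ 0.025

    # Vertical margin involvement: EMR-L 0/66 vs. ESD 2/16 (abstract reports p = 0.036)
    _, p = fisher_exact([[0, 66], [2, 14]])
    print(f"vertical margin involvement: p = {p:.3f}")  # p ~ 0.036

Interestingly, under this assumption the two-sided exact p-value for vertical margin involvement comes out near 0.036, matching the abstract rather than the 0.025 stated in this paragraph.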
The lateral margin distance was longer in the EMR-L group than in the ESD group (lateral margin distance, 1661 ± 849 μm vs. 1514 ± 948 μm, respectively) (Figure 3). Furthermore, the vertical margin distance was longer in the EMR-L group than in the ESD group (vertical margin distance, 277 ± 308 μm vs. 202 ± 171 μm, respectively). However, none of these differences were statistically significant (p = 0.546 and p = 0.350, respectively).

Clinicopathological Characteristics and Follow-Up Outcomes of Patients with Incomplete Resection. Among patients with complete resection in both the EMR-L and ESD groups, no local recurrence occurred during the mean follow-up period of 41.9 months (range: 18-66 months). Incomplete resection occurred in 7 patients. Their clinicopathological characteristics and follow-up outcomes are summarized in Table 3. In the ESD group, 4 lesions showed margin involvement: 2 had lateral margin involvement, 1 had vertical margin involvement, and 1 had both lateral and vertical margin involvement. In the EMR-L group, 3 lesions showed lateral margin involvement; there was no vertical margin involvement. We recommended additional endoscopic treatment or surgery; however, the patients did not want to undergo further treatment. Hence, close follow-up examinations were performed for these patients. We did not observe local recurrence or distant metastasis in any of the 7 patients.

Discussion

Herein, we reviewed our institutional experience on resection of rectal NETs by two endoscopic methods, namely, EMR-L and ESD. Our results showed that the rate of en bloc resection for EMR-L and ESD did not differ. However, EMR-L yielded a higher complete resection rate, which was superior to that of ESD (75.0% for ESD vs. 95.5% for EMR-L, p = 0.025), along with an acceptable procedure time. The rate of en bloc resection was the same for both EMR-L and ESD. No complications were reported for either technique. These results are consistent with those of a previous study evaluating the treatment outcomes of ESD and modified EMR for rectal NETs. In the previous study, the complete resection rate achieved with modified EMR was higher than that achieved with ESD (91.09% vs. 88.71%, respectively) [13]. Additionally, to the best of our knowledge, this is the first report to evaluate the resectability of rectal NETs with EMR-L and ESD by measuring the lateral and vertical margin distances from the borders of the tumors in pathologic specimens. In the current study, we observed that EMR-L is superior to ESD in terms of lateral and vertical margin distances from the borders of tumors in pathologic specimens. The incidence of gastrointestinal NETs has considerably increased in recent decades [23]. However, the distribution of tumors in the digestive system reported in Asia differs from that in reports from western countries. In Korea, the rectum is the most common site for gastrointestinal NETs, which showed the most significant increase in cases reported in the last decade [23]. Conversely, western reports describe small intestinal NETs as being the most common. However, whether there is a true increased prevalence of tumors or whether the rate of detection simply increased because of the widespread use of screening colonoscopy is unclear [4,7]. Most rectal NETs are well-differentiated, are WHO grade 1 and 2, and are located within 10 cm from the dentate line, and 80% of tumors invade no deeper than the submucosa [7-9, 14, 24].
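The margin-distance comparison above is reported only as group means ± SD. Assuming a pooled-variance two-sample Student's t-test was used (again, the test is not named in the paper), the reported p-values can be recovered from the summary statistics alone, for example with SciPy's ttest_ind_from_stats:

    from scipy.stats import ttest_ind_from_stats

    # Margin distances in micrometers (mean, SD); group sizes: EMR-L n = 66, ESD n = 16
    comparisons = {
        "lateral margin":  (1661, 849, 1514, 948),  # reported p = 0.546
        "vertical margin": (277, 308, 202, 171),    # reported p = 0.350
    }
    for label, (m1, s1, m2, s2) in comparisons.items():
        # equal_var=True selects the classical pooled-variance t-test (our assumption)
        t_stat, p = ttest_ind_from_stats(m1, s1, 66, m2, s2, 16, equal_var=True)
        print(f"{label}: t = {t_stat:.2f}, p = {p:.3f}")

Run as written, this yields approximately p = 0.545 and p = 0.352 for the lateral and vertical margins, in line with the reported 0.546 and 0.350.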
The selection of a safe and effective endoscopic resection method is required to achieve complete resection because most rectal NETs arise from the deeper layers of the mucosa and frequently infiltrate the submucosal layer. According to recent reports and meta-analyses, rectal NETs G1 that are estimated endoscopically as <16 mm in diameter without atypical endoscopic features (central depression, ulcerofungating appearance, semipedunculated shape, erosion, ulceration, and hyperemia) [25] and are confined to the submucosal layer without lymphovascular invasion demonstrate a high complete resection rate and excellent long-term prognosis. Therefore, they are suitable for endoscopic treatment, which offers improved quality of life compared with surgery [7,22,[26][27][28]. To date, various endoscopic resection techniques have evolved and have been used for resection of rectal NETs, such as endoscopic polypectomy, endoscopic mucosal resection (EMR), and endoscopic submucosal dissection (ESD). Furthermore, new techniques derived from conventional EMR procedures have been developed, including EMR with a ligation band (EMR-L), EMR using a transparent cap (EMR-C), EMR using a dual-channel endoscope (EMR-D), and endoscopic submucosal resection with a ligation device (ESMR-L). However, there remains a debate regarding the best endoscopic technique. The ESD technique is suitable for complete resection of a relatively large lesion. ESD has been approved for en bloc and complete resection of early gastric cancer, especially in Korea and Japan. In addition, ESD can more effectively resect subepithelial tumors, including rectal NETs. However, rectal NETs treated with ESD are reported to have a vertical resection margin involvement of 6.5% to 19.4% due to difficulties with submucosal dissection because of the tumor's proximity to the muscularis propria [15,17,29]. Similar to previous reports, vertical margin involvement was also observed in 12.5% of cases in this study. The procedure time for performing ESD is long, and an advanced and experienced endoscopist is needed [14,30]. It takes more time to learn ESD than EMR [31,32]. Furthermore, there is a risk of perforation during ESD. Although perforations can be managed by an endoscopic method, the reported perforation rates for colorectal lesions are higher than those for stomach lesions (10.4% vs. 1.4%, respectively) [31,33,34]. A large number of colorectal perforations have been reported during the learning curve of ESD. Therefore, the application of ESD for small rectal NETs may be limited, and it is not yet widely accepted. Conventional EMR is simpler, less expensive, and associated with fewer adverse events than ESD. However, it can sometimes cause incomplete resection and crush injury to the resected specimen of rectal NETs, which are mainly located in the submucosal layer, leading to difficulty in pathologic evaluation [35,36]. Conventional EMR shows unsatisfactory complete resection rates, ranging between 52.2% and 84.6%. This could be partly due to the nature of rectal NETs, which originate from the lower crypts and infiltrate the submucosal layer, demonstrating a subepithelial tumor-like growth pattern. EMR-L was designed to overcome these shortcomings of conventional EMR [18,19]. This endoscopic resection technique involves suctioning the submucosal layer sufficiently into a transparent cap, followed by resection of the pseudopolyp that is formed by a ligation band device.
Therefore, EMR-L can obtain undamaged round specimens and provides deeper and wider resection margins [19,37], consistent with our findings. In the present study, we attempted to determine which method achieves longer lateral and vertical margin distances from the borders of tumors in pathologic specimens. The lateral and vertical margin distances achieved were longer with EMR-L than with ESD. Additionally, there was no vertical resection margin involvement in the EMR-L group. However, vertical resection margin involvement was identified in 2 of 16 lesions (12.5%) in the ESD group. EMR-L has the advantages of a shorter procedure time and a simpler technique compared with ESD. In this study, the mean procedure time for the ESD group (24.21 ± 12.18 min) was longer than that for the EMR-L group (7.05 ± 4.53 min). However, the rate of adverse events, such as bleeding and perforation, was comparable between the groups. Therefore, EMR-L is thought to be more effective, safer, and more feasible than ESD in clinical practice [15,38,39]. One of the most intriguing findings of our study is that EMR-L is superior to ESD with respect to lateral and vertical margin distances from the borders of tumors in pathologic specimens. Theoretically, if an endoscopic resection method for rectal NETs can secure a longer distance of the tumor from the resection margin, that method can achieve a more complete resection. Therefore, the significance of a longer distance of the tumor from the resection margin after endoscopic resection lies in the fact that it can decrease local recurrence, which may reduce the surveillance burden, morbidity, and mortality due to recurrence of rectal NETs. However, the horizontal margins in ESD are purely a matter of choice: the endoscopist can establish the horizontal margin freely, and at a distance, when performing ESD. The choice of too small a lateral margin could adversely affect both the lateral and deep margins, particularly when a pocket-creation technique is not used, as was the case here with an initial complete circumferential incision. Both circumferential incision and a small lateral margin make cap insertion under the mucosa difficult or impossible, with poor exposure of the submucosa and more blind dissection under the mucosal flap, which obscures the division between the submucosa and the muscularis propria. This would make deep submucosal dissection difficult or impossible to perform. For this reason, as shown in our results, the vertical margin with ESD would be shorter than that with EMR-L, and the complete resection rate might be lower. Considering this point, when performing ESD for resection of rectal NETs, a sufficient lateral margin should be ensured and an advanced endoscopic technique such as submucosal tunneling should be used. In the future, many investigations on advanced endoscopic techniques such as submucosal tunneling for resection of rectal NETs will be necessary. Our research group is planning to conduct a study evaluating the feasibility of the submucosal tunneling method for the removal of subepithelial tumors of the colon and rectum, including rectal NETs.

Table 3. Clinical characteristics and follow-up outcomes of cases of incomplete resection (columns: no., sex/age, tumor location, tumor size (mm), endoscopic method, ER, tumor margin, LVI).

A previous study showed that endoscopic biopsy of rectal NETs before endoscopic resection can flatten the lesions and blur the margins; therefore, the complete resection rate of rectal NETs decreases because of fibrosis due to a previous biopsy [40].
However, in this study, the proportion of patients who underwent biopsy before the endoscopic procedure was similar in both groups, and previous endoscopic biopsies did not affect the complete resection rate. As shown by our results, regardless of whether an endoscopic biopsy was performed before the procedure, EMR-L and ESD could be used for endoscopic treatment of rectal NETs. Furthermore, contrary to popular belief, a preceding biopsy does not increase the incomplete tumor resection rate or the incidence of adverse events. Histopathologically, positive lateral and vertical margin involvement is a potential risk factor for local recurrence. Three lesions treated by EMR-L showed lateral margin involvement. In these cases, some lateral margins may have shifted at the band interface when the lesion was aspirated into the ligation band device and the elastic band was then deployed. Four cases of ESD resulted in incomplete resection. Of these, 2 had lateral margin involvement, 1 had vertical margin involvement, and 1 had both lateral and vertical margin involvement. In the cases with margin involvement in the ESD group, the resected specimens were so small that they were difficult to fix, and there was a possibility of overdiagnosis in the course of tissue processing. We considered these cases as clinically complete resections because of the cautery effect on the resected plane and pseudocapsule formation around the tumor mass [25]. Further, the 7 patients with incomplete resection underwent careful observation with repeat rectoscopy, chest radiography, and abdominal CT. No local or metastatic recurrence was observed in either group during the follow-up period (mean, 41.9 months; range: 18-66 months). There might be 2 possible reasons for the absence of recurrences. One is that well-differentiated rectal NETs may have an indolent behavior; nonetheless, there is a risk of recurrence even in the long term, as a previous report showed recurrence 16 years after the initial polypectomy [41]. Another reason is uncertainty in determining the cut margin, because of the burning effect of the electrosurgical unit on residual tumor cells in cases with a positive margin. Therefore, in such cases with clinically observed complete resection and in the absence of risk factors such as poor differentiation, an elevated proliferative index, lymphovascular or neural invasion, and nodal or distant metastasis, close follow-up may be a better option than surgery. This study has a few limitations. First, since this was a nonrandomized study conducted in a single center, it is subject to the biases inherent in retrospective studies. Although most data were prospectively collected, en bloc resection and complete resection rates, procedure time, and procedure-related adverse events were retrospectively determined by review of endoscopic images and readings. However, precise data on endoscopic en bloc resection rates, procedure times, and procedure-related adverse events were available. Therefore, we believe that any errors due to the assessment of en bloc resection rate, complete resection rate, procedure time, and procedure-related adverse events would be small and unlikely to affect our results. Second, technical differences were present between the two endoscopists who performed the procedures, as their expertise and experience may have been different.
However, all procedures were performed by two highly experienced endoscopists with >5 years of experience in performing therapeutic endoscopy (extensive experience in >3000 colorectal EMR cases and >300 colorectal ESD cases). Therefore, the endoscopists' ability would not have affected the outcome. Third, the number of cases for ESD was smaller than that for EMR-L, and tumor sizes were different between the EMR-L and ESD groups. However, selection bias was likely not very significant because selection of the endoscopic resection method was not based on any predefined absolute criteria. As shown here, modified EMR techniques are comparable to ESD in terms of complete resection rate and adverse events, with ESD being more time-consuming. Accordingly, the optimal method for resection of small rectal carcinoid tumors should be chosen based on the endoscopic expertise available at a given facility. Therefore, we propose that ESD should be applied to certain NETs that are not an indication for EMR-L, such as NETs larger than 8 mm (if the tumor is larger than 8 mm, it is difficult to aspirate using the ligation device). To establish therapeutic strategies for rectal NETs, the optimal resection method and long-term outcomes after endoscopic treatment should be studied in a large series of patients.

Conclusions

EMR-L achieves a higher complete resection rate, longer vertical and lateral margin distances, and a shorter procedure time than ESD in treating small rectal NETs. Additionally, EMR-L has a low incidence of procedure-related adverse events. Therefore, EMR-L is more favorable for small rectal NETs that can be treated endoscopically. Further prospective large-scale multicenter studies are required to provide additional information on the use of ESD and EMR-L for small rectal NETs.

Data Availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
2019-09-19T09:13:02.396Z
2019-09-16T00:00:00.000
{ "year": 2019, "sha1": "6c650f5413999343fbfe2d84994b9e1eae6b2223", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/grp/2019/8425157.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "589d7f62ed740599c6485584b839ddbf82957d73", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
210874658
pes2o/s2orc
v3-fos-license
Devotion, Paintings, and the House: The Collections of Ercole and Giuseppe Branciforti, Princes of Scordia

This paper interrogates familial devotion and its relationship with parts of the house other than the chapel. In detail, it aims to problematize the issue of the devotional/non-devotional use of paintings inside the house by moving the focus from this dual opposition to the active role of canvases, broadly defined. Informed by Jacques Derrida's and Pierre Bourdieu's writings, this paper argues for the structural nature of the paintings inside the house and their meaningful correlation with both the arrangement of the domestic interior and the practices of people experiencing those spaces. To do this, the paper challenges the overwhelming attention paid by early-modern scholars to Northern and central Italy and investigates a precise case study, i.e., Palazzo Scordia in Palermo (Sicily). The research draws upon primary sources and, amongst these, upon two detailed inventories of furniture referring to two subsequent generations of an aristocratic clan residing in Palermo between the seventeenth and the eighteenth century, i.e., Ercole and Giuseppe Branciforti, princes of Scordia.

Introduction

The theme of domestic devotion in the early modern age has received increasingly wide-ranging attention in the last few years. 1 Scholars have explored both domestic spaces exclusively designated for devotion, such as chapels and oratories (Hirschboeck 2011; Lillie 1998; Mattox 1996), and devotional items located in other parts of the dwelling (Campbell et al. 2013; Musacchio 2008; Anderson 2007). Particularly, the role of the house as a site for devotional practices of the family has been argued through an extensive examination of coeval treatises, religious literature, depictions of interior environments, and domestic objects listed in inventories. Indeed, a high number of religious paintings, crucifixes, and reliquaries emerge from inventories of Italian early-modern families, listed in different rooms of the house. The large quantity of items with possible devotional association led some scholars to presume that houses were entirely imbued with a sort of Christian spirituality, in continuity, albeit with the proper differences, with religious institutional environments outside the domestic, such as churches, monasteries, and convents. This perspective has been significantly fuelled by the writings of Margaret Ann Morse (Morse 2018, 2013a, 2013b, 2007). As a result, the domestic has been finally freed from its long relegation to the secular sphere. In this sense, the interdisciplinary project Domestic Devotions: The Place of Piety in the Italian Renaissance Home (2013-2017), conducted by the University of Cambridge, undertook significant research on domestic devotion by producing a 3-day conference in July 2015, an exhibition in 2017, and four volumes (Brundin et al. 2018; Corry et al. 2018; Faini and Meneghin 2018; Corry et al. 2017).

1 This paper results from the thesis, which I wrote during my PhD programme at the Art History Department of the University of York, under the supervision of professor Helen Hills. The research gave priority to Italian, English, and Spanish-speaking scholarship, including translations from other languages into these three.
However, another issue arose from this result, that is, the problematic idea that the presence of religious images could bestow, per se, a devotional use on domestic objects and on any domestic space that housed them. Conversely, apart from rosaries or prayer books, the use of which is rather clear, the devotional use of a biblical scene engraved on a knife or on a hairbrush is not equally straightforward. This indeterminacy results from the fact that inventories report more quantitative than qualitative data (Nalle 2008, p. 256). Notwithstanding, this paper does not aim to solve the question of the devotional/non-devotional use of images, but to problematize it by moving the focus from this dual opposition to the active role of canvases, broadly defined, inside the house. Informed by Jacques Derrida's and Pierre Bourdieu's writings, this paper argues for the structural nature of the paintings inside the house and their meaningful correlation with both the arrangement of the domestic interior and the practices of people experiencing those spaces. To do this, a precise case study has been investigated, i.e., Palazzo Scordia in Palermo (Sicily), thus challenging the overwhelming attention paid by early-modern scholars to northern and central Italy, an imbalance decried by the results of the project Domestic Devotions (Faini and Meneghin 2018, pp. 1-2; Corry et al. 2017, p. 7). Despite the lack of extensive research on early-modern Sicily, collections of paintings seem to have been quite frequent in the greatest aristocratic abodes in Palermo since the end of the sixteenth century (Piazza 2018, p. 118). Amongst these, the collections examined here cannot be considered outstanding or particularly influential. This paper resists both the search for exceptionality and the generalization of the results. Rather, it aims to investigate the specificities of the selected case study, as it offers the occasion to relate with relative certainty the palace articulation, two inventories of paintings, and two precise historical figures, i.e., Ercole Branciforti Campolo Russo et Spatafora (ruled 1658-1687) and his son Giuseppe Branciforti et Morra (ruled 1697-1720), respectively the second and third princes of Scordia. More than this, the collections of the two nobles marked an important period in the ascent of their family and in its establishment in the city, thus allowing us to explore intersections between architecture and issues of devotion, gender, and rank.

The Route towards the Chapel

Palazzo Scordia results from the connection of two different buildings, set in the contrata Seralcadi, a district of Palermo (Figure 1). 2 The original nucleus on the vanella del fico (the current via Trabia) in the mid-sixteenth century was a simple compound of houses, most probably set around the current courtyard (Nobile et al. 2000, pp. 29-38). The addition on via Maqueda was begun between 1600 and 1602, when this new road was created (Fagiolo and Madonna 1981, p. 45). Ercole Branciforti's post-mortem inventory, dated 24 October 1687 and grouping his possessions in apartments and rooms, reveals that the piano nobile, the first and primary floor of the palazzo, was then divided into three apartments: the quarto della Galleria (the apartment of the gallery), the quarto grande (the big apartment), and the casa piccola (the small house). 3 The quarto grande, located in the oldest nucleus of the palace, was where the prince primarily dwelt with his wife. Whilst the main anterooms remained within or close to the quarto della Galleria, the prince's bedroom was in the northern extreme of the quarto grande, preceded by two anterooms and followed by the chapel. Arguably, the location chosen for the chapel was significant. In Palazzo Scordia, the main chapel was easily accessible from the bedroom of the prince. Conversely, the route from the anterooms to the chapel was through several narrow corridors (Viola 2019, pp. 286-309).

2 In 1683, the words palazzo grande (the big palace) and palazzo picciolo (the small palace) were used to describe the family abode. ASP, Notai defunti, Not. Sardo Fontana Honofrio, vol. 2048. 3 In 1702, the palace was still indicated as "duo palatia magna simul coniunta" ("two palaces joined together").
Whilst the main anterooms remained within or close to the quarto della Galleria, the prince's bedroom was in the northern extreme of the quarto grande preceded by two anterooms and followed by the chapel. Arguably, the location for a chapel 2 In 1683, the words palazzo grande (the big palace) and palazzo picciolo (the small palace) were used to describe the family abode. ASP, Notai defunti, Not. Sardo Fontana Honofrio, vol. 2048 In 1702, the palace was still indicated as "duo palatia magna simul coniunta" ("two palaces joined together" In Palazzo Scordia, the main chapel was easily accessible from the bedroom of the prince. Conversely, the route from the anterooms to the chapel was through several narrow corridors (Viola 2019, pp. 286-309). The location was in line with the suggestions about building given by the Sicilian theologian and architect Giovanni Biagio Amico (Trapani, 1684(Trapani, -1754 in his 1750 treatise L'architetto pratico (Amico [1750(Amico [ ] 1997). In the second volume, dedicated to palaces and secular dwellings, Amico listed the main rooms on the piano nobile and located the chapel close to the bedroom (Amico [1750] 1997, p. 64). The statement "[o]ra gli appartamenti giusta il costume di Sicilia si dipongono così" ("now, the apartments according to Sicilian custom are arranged, as follows." Amico [1750] 1997, p. 66) suggests that the author benefitted from direct knowledge of most residences already built in the area between Palermo and Trapani, where he lived and worked. This arrangement is anything but new: for instance, it is close to the sequence anteroom-bedroom-private oratory-private chapel that Sabina de Cavi argues to be repeatedly proposed in Spanish royal apartments (including the viceroy's apartments in Naples), after Juan de Herrera's articulation of Philip II's Escorial (De Cavi 2008, p. 168). 5 What is more striking is that the bedroom was deemed by Amico as a sort of watershed between anterooms and backrooms, constituted by a single room or divided in two rooms, the camera di parata and the real bedroom behind it. One could also add a further passage through the narrow spaces (gabinetti) flanking the bed-alcove, pushing the backrooms even further away. This complexity makes the boundary between anterooms and backrooms blurred and fuzzy. At any rate, the bedroom area was never thought to be off-limits, as family members, servants, and the closest friends continually crossed it. Amico himself reports the presence of servants who could be asked to sleep in the backrooms, or to cook there. He also mentions the existence of nannies, which implies the presence of children (Amico [1750(Amico [ ] 1997. To sum up, from Amico's description a difference between anterooms and backrooms emerges as realized not through a clear-cut separation but a nuanced gradation implying the crossing of numerous subsequent thresholds. This crossing enabled 4 In aristocratic palaces of early-modern Italy, this frequently excluded the ground floor, which was mostly used for services such as stables and warehouses, but could easily include upper floors, if inhabited by members of the family, in addition to the piano nobile. 5 In this case, oratory and chapel are not synonyms, the former being a non-consecrated religious space from which to watch the services celebrated in the latter. In Palazzo Scordia, the main chapel was easily accessible from the bedroom of the prince. Conversely, the route from the anterooms to the chapel was through several narrow corridors (Viola 2019, pp. 
The location was in line with the suggestions about building given by the Sicilian theologian and architect Giovanni Biagio Amico (Trapani, 1684-1754) in his 1750 treatise L'architetto pratico (Amico [1750] 1997, p. 63). In the second volume, dedicated to palaces and secular dwellings, Amico listed the main rooms on the piano nobile and located the chapel close to the bedroom (Amico [1750] 1997, p. 64). The statement "[o]ra gli appartamenti giusta il costume di Sicilia si dipongono così" ("now, the apartments according to Sicilian custom are arranged, as follows." Amico [1750] 1997, p. 66) suggests that the author benefitted from direct knowledge of most residences already built in the area between Palermo and Trapani, where he lived and worked. This arrangement is anything but new: for instance, it is close to the sequence anteroom-bedroom-private oratory-private chapel that Sabina de Cavi argues to be repeatedly proposed in Spanish royal apartments (including the viceroy's apartments in Naples), after Juan de Herrera's articulation of Philip II's Escorial (De Cavi 2008, p. 168). 5 What is more striking is that the bedroom was deemed by Amico as a sort of watershed between anterooms and backrooms, constituted by a single room or divided in two rooms, the camera di parata and the real bedroom behind it. One could also add a further passage through the narrow spaces (gabinetti) flanking the bed-alcove, pushing the backrooms even further away. This complexity makes the boundary between anterooms and backrooms blurred and fuzzy. At any rate, the bedroom area was never thought to be off-limits, as family members, servants, and the closest friends continually crossed it. Amico himself reports the presence of servants who could be asked to sleep in the backrooms, or to cook there. He also mentions the existence of nannies, which implies the presence of children (Amico [1750] 1997, p. 67). To sum up, from Amico's description a difference between anterooms and backrooms emerges as realized not through a clear-cut separation but a nuanced gradation implying the crossing of numerous subsequent thresholds. This crossing enabled the palace inhabitants to distance the outside world, even though they did not isolate themselves completely from it.

In the house, the position of the paintings with a religious theme seems to follow the gradualness of this transition from the anterooms to the backrooms. The 1687 inventory, compiled after the death of Ercole Branciforti, allows us to relate the articulation of the house with the location of the paintings, thus revealing that in the main dwelling, the number of religious depictions increases when approaching the bedroom area and the chapel. At the time of Ercole's death, there was only one depiction of a biblical scene in the Sala out of 225 canvases hanging on its walls (0.44%) and 1 out of 31 in the first anteroom (3.22%), whilst the second anteroom accommodated 9 depictions of Christ, the Madonna, and other Saints out of 14 paintings with various subjects (64.28%), and 3 out of 4 paintings had a religious subject in the bedroom (75%). 6 Unsurprisingly, in the chapel there were only religious images (100%). This suggests a connection between these paintings and the location of the chapel. According to Caroline Anderson, along with the subject depicted, "another way in which an artwork's meaning was elicited was through its placement" (Anderson 2007, p. 79).
Anderson's doctoral thesis focuses on the importance of belongings and domestic spaces in forming confessional identities in Florence between 1480 and 1650. Examining a large number of inventories, she compared the paintings' locations to what was prescribed by Silvio Antoniano's Dell'Educazione Cristiana e politica de' figlioli (1609) and Giulio Mancini's Alcune considerazioni appartenenti alla pittura come di diletto di un gentiluomo (1617-'21). For Mancini, paintings "should be distributed, in an orderly manner, and in specific principal places that took into account their subject matter" (Anderson 2007, p. 80). Particularly, he prescribes the use of the bedroom area for religious depictions (Mancini 2005, p. 49). Anderson's findings confirm that Mancini's advice more or less mirrored contemporary practice in Florence and that sacred images were frequently found in camere, the generic term camera denoting both the bedroom and the rooms close to it (Anderson 2007, pp. 79-80). This offers an interesting insight into the association between religious paintings and what Mancini calls "i luoghi ritirati" ("withdrawn places"), even though not a univocal interpretation. Mancini himself muddles the rules he has just proposed by suggesting for the bedroom, besides religious subjects, also lascivious depictions that could foster the excitement of the couple (Mancini 2005, p. 48).

Devotional or Non-Devotional?

At this point, the question resulting from these data would be whether the increase in religious paintings along the path to the chapel was also marked by an increase in their devotional use. The fact that Mancini prescribes the bedroom and the rooms close to it as the most suitable place for "le cose di devotione" ("items for devotion") in every type of house leads us to dwell further on the meaning of the word "devotion." To do this, however, it is necessary to consider both theoretical indications and common practices. As to the former, the work of the theologian Lodovico Antonio Muratori (1672-1750), although published in a later period, is particularly informative as well as connected to the context of the case study under investigation. 7 In 1747, Muratori stressed that the moment a person is christened they enter into a pact of love with God, a sort of contract between two parties (Muratori 1747, p. 3). The mortal party "obliges and devotes himself to a regular and affectionate homage to his Creator, and to a total obedience to His will and laws" (Muratori 1747, pp. 3-4). This commitment is called "devotion" and is usually put into effect through acts of mercy and worship, which can be facilitated by objects.

6 This seems to be in line with the results of other investigations. For Brundin, Howard and Laven, "the bedchamber was the room most likely to contain religious objects and images" (Brundin et al. 2018, p. 64). 7 Muratori's ideas about devotion were summed up and spread by a Sicilian treatise (Di Maria 1772). Yet, his relationship with the city started earlier, as Muratori's outspoken opposition to the city's cult of the Immaculate Conception provoked debates in Palermo since the 1714 publication of his De ingeniorum moderatione in religionis negotio (Turco 2006, p. 433).
Muratori argues that even if love, respect, and obedience should be obvious consequences of the agreement, material objects are needed in order to excite the devotion of the uneducated, as the sepulchers of Saints and their relics did, because "[b]ooks are not for them; to move them they need material objects, that should catch their eyes and ears" (Muratori 1747, p. 331). He then explains the role of images by way of examples, at the same time warning against idolatry. So far, his contribution is anything but new, as Muratori himself recalls Gregory the Great's (540-604) definition of pictures as "books for the ignorant" (Muratori 1747, p. 331). 8 However, he adds later that "even people eminent for their intelligence or holiness, praying in front of the sacred image of crucified Jesus, feel their imagination helped by that most pious object and their mind moved to holy thoughts and affection" (Muratori 1747, p. 332). In this way, Muratori includes people of various social and cultural levels in the same discourse, nevertheless warning all of them against drifting from devotion to superstition. 9 To sum up, for Muratori, devotion is articulated by an action, typically a prayer or an act of mercy, that makes the relationship between God and the devotee effective. In other words, the painting of the Madonna della Grazia, placed on the chapel altar in Giuseppe Branciforti's time, is more likely to be a devotional image than the depiction of David beheading Goliath, hung in his father's gallery among dozens of other paintings with different topics, simply because a gallery was unlikely to be used for praying. Despite the warnings of theorists like Muratori against superstition and idolatry, it has been argued that images were also associated with a role independent of any voluntary act of the faithful and were considered to act by themselves. According to Corry, vision was perceived as capable of affecting people's lives. That is why, for instance, "the fifteenth century Lombard humanist Maffeo Vegio suggested that parents keep an appropriate religious picture in view during conception to ensure the birth of a well-formed child and advised parents not to let children see images of the devil" (Corry 2017, pp. 67-68). Muratori himself describes the role of images as active, when he admits that images, as well as processions, acts of mercy, and pilgrimages, could "move devotion" ("muovere la divozione"). This vague expression implies a broad range of uses: images can act as a focus for either group or individual worship, and they might also provoke responses, thus encouraging the practice of devotional acts, e.g., when they showed edifying episodes or figures to be imitated, such as the Good Samaritan or Saint John preaching in the desert. 10 These uses can imply both the conscious and the unconscious participation of the faithful. Extending Muratori's thought further, it could be asserted that images might be reminders of the ongoing pact with God, in the sense that they could remind the faithful that God protects their family in return for prayers and acts of mercy. The encouragement towards these devotional practices could also occur independently of the devotee's awareness. Yet, the reasoning can go further, as images themselves might guarantee God's protection even in the absence of a response from the faithful. Although Muratori would hardly have agreed with this, his contemporaries may have thought in this way.
Caroline Walker Bynum's work on late medieval art posits that, for all the emphasis theorists put on seeing beyond the images, people kept considering them a "locus of the divine" per se (Bynum 2011, p. 65). Bynum's oeuvre, focusing on the pervasive Christian concern with materiality, argues that devotional images often drew the viewer to themselves as holy matter, instead of guiding him or her beyond them, regardless of the viewer's education or social status (Bynum 2011, p. 267). Reflecting on the active role of images, David Freedberg argues for the effectiveness of religious images, which could become objects of devotion spontaneously, thus motivating different responses, that is, different actions and behaviours in the beholder (Freedberg 1989, p. 96). Following Freedberg's thought, Brundin, Howard and Laven argue that the reading of religious images had an unstable subjective dimension, since an image could act on the faithful on several levels and with different results (Brundin et al. 2018, pp. 175-90). I do not want to dispute these arguments but only to observe that, precisely because of the relativism resulting from this view, it is appropriate to consider the cautious warning of those, like Silvia Evangelisti, who argue that the capacity of objects to act as devotional tools depended closely on their specific use and on the meaning they had for their users, which can rarely be fully grasped from inventories (Evangelisti 2013, p. 395). This is particularly problematic in the case of paintings, since in inventories the depicted subject mattered less than the gilded frame of a painting or its relative size (Musacchio 2008, p. 211). In other words, this perspective runs the risk of fuelling an unconstructive and endless discussion on what was devotional and what was not. As Elizabeth Carroll Consavari realized about Venetian collections (Carroll Consavari 2013, p. 153), the second risk is to isolate devotion from other decisive factors, an approach that can lead to an underestimation of the weight that the images had in their active interaction with the beholder. Elements such as the taste, political views, and social position of those who purchased, positioned, and observed the canvases were tightly intertwined with their religious beliefs and could influence their response. Likewise, isolating a painting from the others hanging in the same room because they are not religious in theme prevents us from perceiving their collective action on the observer. In order to explore the relationship between the paintings and the choices of their owner, Pierre Bourdieu's concept of habitus can be useful. For Bourdieu, the habitus is "the system of structured structures predisposed to function as structuring structures" (Bourdieu 1995, p. 72). In other words, even if apparently determined by the achievement of a future aim, practices are determined by the interiorization of antecedent models (Bourdieu 1995, p. 73).

8 A brilliant summary of the centuries-long debate about images in the Christian world can be found in Bynum (2011), pp. 44-52. 9 Muratori expressed his fears mainly in De superstitione evitanda (1727). This book supports a Christocentric devotion, exclusively founded on the Bible and on tradition. 10 On the assumed supportive role of images for spiritual improvement, see Corry (2018), pp. 320-321.
Consequently, the acts of purchasing, displaying, or contemplating specific paintings were, consciously or unconsciously, produced by the habitus of a ruling class which invested its economic and cultural capital to perpetuate its distinction from "the other" through a luxury lifestyle (Bourdieu 1996, pp. 283-95) and the appropriation of cultural goods, like paintings (Bourdieu 1986, p. 246). 11 Bourdieu's habitus does not imply a mechanical repetition of models (Bourdieu 1995, p. 95). Rather, it ceaselessly adjusts to the demands inscribed as potentialities in any situation by engendering thoughts and actions consistent with different conditions. Regarding the case study, this means that, although the general objective remained that of aristocratic distinction, the purchase of the canvases and their location in the house could vary according to conditions that were susceptible to change, such as personal preferences and family strategies. The complexity and instability of the relationship between the paintings and those who bought, positioned, and observed them emerges significantly when the owners and users of the house changed. The two inventories, which were compiled after the deaths of Ercole Branciforti and of his son, Giuseppe Branciforti et Morra, in a span of little more than thirty years (1687 and 1720), allow us to analyse this relationship.

Paintings as Ornament of the House (Ercole)

Having inherited titles and fiefs from his father Antonio in 1658, Ercole Branciforti showed a particular dedication to the family residence in Palermo, which his father had long abandoned. This dedication was part of a plan to establish the lineage in the city after the family's power had been secured by a stable feudal authority in the countryside. After the revolt of Messina against the Spanish authority (1673-1678), its repression, and the deliberate degradation of that city by the Spaniards, Palermo became the stable location of the Spanish viceregal court and, therefore, the most attractive place for the Sicilian aristocracy (Ligresti 1992, pp. 84-85). Analogously to what occurred in the greater European courts, proximity to the viceroy, even if it put the aristocrats under his control, allowed them to take advantage of his manifestations of favour and to maintain, through the complex court etiquette, the power ensuing from them (Elias 1980, pp. 97-100). Additionally, according to Ronald G. Asch, the court also worked temporarily as a bulwark against the ascent of the new urban nobility that struggled to make its way in a world of conspicuous consumption (Asch 1991, p. 4). Consequently, it is understandable that Ercole paid all the debts related to the house in Palermo and started to renovate it. 12 In his 1683 will, Ercole reported that he had done "a lot of good works" ("molti benfatti") on the larger of the two buildings, even if it had not been completed by that time. 13 The realization of a sort of familial iconographic programme was part of Ercole's renovation, since the Sala and the Galleria on the piano nobile were already full of paintings by 1687. 14 The huge Sala, which was the first room seen by visitors to the piano nobile, housed 225 paintings of different sizes and with various subjects. 15 Among the canvases, there were many iconographies related to classical mythology, some scenes from Dante's Inferno, several landscapes, portraits of famous philosophers and humanists such as Aristotle and Petrarch, and one painting depicting a scene from the Old Testament, Susanna and the Elders.
Featuring subjects drawn from the recent and distant past might have been a way to bestow a cultured aura on the house. Considering the openness of the anterooms and the Sala, "dov'è lecito venir ad ognuno" ("where everyone is permitted to come"), even Mancini suggests hanging there images like portraits or historical depictions (Mancini 2005, p. 49). For Anderson, the reason for this suggestion was to impress and influence visitors waiting in the Sala or negotiating business with the Master in the anterooms (Anderson 2007, p. 80). The importance attributed by Ercole to this operation emerges from his own words when, in his last will, he left his heir the choice of selling home furnishings "with the exception, however, of the paintings, which must always remain as ornament to the house." 16 Even in the case of relatively mobile elements, like paintings, the word "ornament" cannot refer solely to the embellishment of the house. Jacques Derrida's reading of Kant's Critique of Judgement (1790) warns against the reiterated attempts to draw a line between the work of art (ergon) and its accessories (parergon), between the internal meaning of architecture and its external circumstances (Derrida and Owens 1979, p. 26). The philosopher highlights how ornament disturbs this attempt, because it is neither simply inside nor simply outside the work. Rather, the parergon "has traditionally been determined not by distinguishing itself, but by disappearing, sinking in, obliterating itself, dissolving just as it expends its greatest energy" (Derrida and Owens 1979, p. 26). Paintings articulated the family identity in every room of the palace, as the owner perceived and fashioned it. 17 The "Casa" that Ercole mentioned was the family abode but also, in the broader sense, the Scordia casato, a relatively young lineage in comparison with the other powerful branches of the Branciforti. Ercole's purchase of the paintings was intrinsically connected to the structural works he conducted on the palace, with the aim of decorously settling the family in Palermo, which had just become the most important city in Sicily. The entire environment of the Sala, including an impressive frescoed vault, was a sort of introduction to the family and to its relatively recent claims to high nobility. A painting depicting Sicily probably made the identification of the family's fiefs possible for any visitor. 18 More than this, Ercole fashioned a cultural identity supporting the title purchased by his father Antonio, claiming a noble spirit beside the riches of his properties. In this respect, the only biblical depiction present in the Sala, Susanna and the Elders, is more likely to be just another item contributing to Ercole's image than a strictly devotional object. Yet the inventory reveals something else going on in the anterooms. Pictures like Susanna and the Elders in the Sala or the Carità Romana in the second anteroom could exhort female members of the family to virtuous behaviour, but also allude to the recurrent phenomenon of couples in which husbands were much older than their wives. It is worth noticing that a depiction of Susanna was also in the Sala of the casa piccola, so that it welcomed the visitor at both entrances of the first floor. These suggestions were part of a patriarchal cultural system, so familiar as to be embedded in everyday life. It is hard to determine how conscious the acceptance of these cultural tropes was. Yet, given the prominent positions of these paintings in the house, it is possible to argue that these images were deliberately displayed to an outside audience to stress the Master's belonging to this cultural system. As a male counterpart, three depictions of King David were distributed in the rooms of the quarto della Galleria, together with five portraits of the kings of Spain hanging in the first anteroom, i.e., at the entrance of the apartment, which was most probably used for public events and had to display the family's loyalty. 19 Instead, the portraits of the family's ancestors were hung beside Ercole's own portrait in the first anteroom of the quarto grande, where Ercole himself dwelt. As already seen, in the three apartments, from the first anteroom to the backrooms, the number of religious paintings slightly increases. Among these, there were many depictions of Jesus Christ and male Saints, such as Saint John and Saint Sebastian. In comparison, the depictions of the Virgin Mary are scarce, just eight, and, of these, only five in the main dwelling. Even fewer are the depicted female characters: the casa grande had only one painting with a female subject, a depiction of a "female saint martyrized by a tyrant," in the main bedroom. The violence of the theme seems heightened by its proximity to a depiction of a battle and to the scene of Christ expelling the merchants from the temple (the only episode of the Gospel showing Christ angry), hung in the same room. Although the chapel was already located close to the bedroom and furnished with eleven religious images, it does not seem to have affected the choice of the paintings for the bedroom. Nor were the lascivious depictions suggested by Mancini located there. Rather, an atmosphere of male domination must have transpired from its pictures. Obviously, daily practices do not always correspond to theoretical indications, such as Mancini's, but these indications often reflect a widespread practice. At issue here is the effect that a divergence from this practice could provoke.

12 The effort must have been considerable, since the amount of Antonio's debts (ca. 21,000 onze) was more than the whole annual income of the Scordia family (ca. 15,000 onze) estimated at the beginning of the eighteenth century. ASP, Corte Pretoriana, n. 5874, fol. 81r. Candela (1996), p. 38. 13 ASP, Notai defunti, Not. Honofrio Sardo Fontana, vol. 2048, fols. 376r-389r. 14 ASP, Notai defunti, Not. Sardo Fontana Honofrio, vol. 2049. 15 Ibid., fols. 1479v-1481r. 16 "[E]sclusi però li quadri, li quali sempre debbiano restare per ornamento della Casa." Ann Matchette wrote that "testamentary mandates of barring the sale of furnishing or collections by heirs have been deployed by scholars in order to advance an argument that objects, identity, and family honour were inextricably linked." Matchette (2006), p. 705. 17 Renata Ago argues that inalienable items were supposed to be capable of prolonging the memory of the testator among the living. Yet to talk about the deceased, they had to belong to his/her social sphere to be socially acknowledgeable, like the family palace or the collections inside it. By passing from one generation to another, these goods implemented the continuity and transmission of the family's identity/identities. Ago (2013), pp. XVII-XIX. Amongst these kinds of items, the scholar also lists paintings. Ibid., pp. 137-56.
If all these paintings were reflections of an individual attitude towards life and people, in the case of the main apartment they would relate mostly to the chief of the household, Ercole, who lived there and most probably bought the paintings, since the palace had been almost abandoned before his time. This is certainly not a fully reliable method for understanding an individual's personality, but it gives some clue about the atmosphere of the interior and about the relations between the two sexes in a space that would have framed their intimacy. Surely, the sight of the only depiction of the Virgin Mary present in the bedroom must have been a source of relief for the wife.

Roots in the City's Devotion (Giuseppe)

A change occurred in the Scordia palace under Ercole's heir. Giuseppe did not alter the arrangement of the Sala and left many paintings in the anterooms where his father had located them, but he variously changed the other rooms, moving existing paintings and purchasing new canvases. A large number of paintings were set in what the inventory compiler called "camerini" (small rooms), which may be identified with the backrooms between the courtyard and the additional structure on via Maqueda. 20 Excluding 16 geographic maps, 24 out of the 55 paintings set there had a religious subject. Amongst these, depictions of the Virgin and of female Saints, like Agatha, Ann, Catherine, and Mary Magdalene, outweighed those of male Saints. It is remarkable that a small depiction of Saint Rosalia, patroness of Palermo since 1624, finally appears. Lacking family payment registers, the purchase of new paintings by Giuseppe's mother, Giovanna Morra, cannot be excluded, as she shared with her stepson the tutelage of the legitimate heir and the administration of the family assets for ten years after Ercole's death. However, the list refers to Giuseppe's possessions at the time of his death, which means that in any case he approved this arrangement. Collecting paintings was a widespread custom among aristocrats to enhance the palace's magnificence and the casato's glory. As Edward L. Goldberg argues in relation to Leopoldo de' Medici's collection, a specific selection of items could also be used to distinguish the contribution of one member of the family from those of his predecessors (Goldberg 1983, p. 23). According to Goldberg, Leopoldo de' Medici tried to isolate his voice from previous patrons of his family through a marked specialization of his interests. In this case, the diversification established by Giuseppe's contribution apparently aimed to root the family further in the cultural and religious life of the city. In this sense, a significant parallel emerges from the comparison of the princes' personal attitudes towards religious matters as they can be inferred from Ercole's and Giuseppe's last wills. Dated to 1683 and 1715, respectively, and arguably the richest sources about these two figures, these documents are highly conditioned by the perspective of the paterfamilias and consequently focused primarily on the issues of the preservation of wealth and the perpetuation of the family (Matchette 2006, p. 705). Nevertheless, some spiritual concerns of the two princes emerge between the lines. Above all, they kept alive the connection with the main fief of Scordia through acts of mercy, namely annual donations to the Franciscan convent founded by Antonio, dowries given to two needy female orphans from Scordia on Saint Roch's day, and the custom of being buried in the church of the above-mentioned convent.
It is not unusual to find within wills pious bequests to religious institutions "for which the testator wished to display a devotional preference" (Rusconi 1992, p. 310). Yet this act also suggests how much their devotion was connected to the establishment of the family's authority in the fiefs. However, a slight difference in attitude emerges between Ercole and Giuseppe. Although Giuseppe followed his father's example with charitable acts in Scordia, he also included in his will the possibility for his vassals to file a complaint, within two months of his death, for any wrongs suffered, in order to receive compensation in cash. 21 Leaving aside any doubts about the efficacy of this arrangement, it shows in any case how Giuseppe apparently distanced himself from the despotic attitude towards the peasants that had been typical of the heads of the family since his grandfather's rule (De Mauro 1868, pp. 186 and 191). Most probably, he needed to adapt to the new conditions of the family in the polished environment of the city. Similarly, Giuseppe showed a different attitude towards Palermo. The general recommendation of Ercole's soul to the Saints of the Heavenly Court was replaced by Giuseppe's specific invocations of particular Saints in his will. Among these, Saint Francis of Paola, Saint Rosalia, and the Immaculate Conception of Mary are noteworthy, since these cults were tightly connected to the city of Palermo more than to the town of Scordia.

Conclusions

In line with investigations of other contexts, this case study has illustrated that the path approaching the chapel from the rest of the house was characterized by the crossing of numerous consecutive thresholds marked by an increase in religious images hanging on the walls. However, not only does the devotional use of these paintings remain mostly obscure and unverifiable, but its investigation also seems to limit the exploration of the relationship between the depictions and the people in the house. After years in which the early-modern house was assessed as a secular place alien to devotional practices, the risk is now to go to the opposite extreme, that is, the univocal interpretation of the house as a "devotional place," thus neglecting the complexity and the instability of the meanings that spaces, images, and objects sustained in the domestic environment. To avoid this, I have proposed to interpret the purchase and location of the paintings in the house by considering Bourdieu's habitus as the source of these moves. This allows us to investigate the arrangement of religious paintings beyond their strictly devotional use, including questions of power, lineage, gender, taste, and so on. Furthermore, the comparison between the two different collections of canvases showed that the relationship between the paintings and the inhabitants of the house could change. The differences emerging from Ercole's and Giuseppe's collections illuminate two different attitudes towards the family and the city: although produced by the same attempt at distinction, their actions adapted to different conditions.
Changes of the Bacterial Abundance and Communities in Shallow Ice Cores from Dunde and Muztagata Glaciers, Western China

In this study, six bacterial community structures were analyzed from the Dunde ice core (9.5-m-long) using 16S rRNA gene clone library technology. Compared to the Muztagata mountain ice core (37-m-long), the Dunde ice core has different dominant community structures, with five genus-related groups, Blastococcus sp./Propionibacterium, Cryobacterium-related, Flavobacterium sp., Pedobacter sp., and Polaromonas sp., frequently found in the six tested ice layers from 1990 to 2000. Live and total microbial density patterns were examined and related to the dynamics of physical-chemical parameters, mineral particle concentrations, and stable isotopic ratios in the precipitation recorded in both the Muztagata and Dunde ice cores. The Muztagata ice core revealed seasonal response patterns for both live and total cell density, with high cell density occurring in the warming spring and summer months, as indicated by the proxy value of the stable isotopic ratios. Seasonal analysis of live cell density for the Dunde ice core was not successful due to the limitations of sampling resolution. Both ice cores showed that the cell density peaks were frequently associated with high concentrations of particles. A comparison of microbial communities in the Dunde and Muztagata glaciers showed that similar taxonomic members exist in the related ice cores, but the composition of the prevalent genus-related groups is largely different between the two geographically distinct glaciers. This indicates that the micro-biogeography associated with geographic differences was mainly influenced by a few dominant taxonomic groups.

INTRODUCTION

A variety of microorganisms, including bacteria, archaea, fungi, protozoa, algae, and viruses, and even invertebrates, have been found in glaciers and ice sheets in the Arctic, the Antarctic, Greenland, and other mountain regions across the world (Skidmore et al., 2005; Nkem et al., 2006; Miteva et al., 2009; Zhang et al., 2009; Branda et al., 2010; Anesio and Laybourn-Parry, 2012; Price and Bay, 2012; Møller et al., 2013; Stibal et al., 2015; Zawierucha et al., 2015; Kaczmarek et al., 2016). Microorganisms can travel long distances, successfully colonize cryoconite and snow, and eventually become buried in ice (Prospero et al., 2005; Takeuchi et al., 2006; Miteva et al., 2009; Anesio and Laybourn-Parry, 2012; Yallop et al., 2012; Boetius et al., 2015; Bagshaw et al., 2016). Bacteria are the most dominant life forms in extremely cold, oligotrophic, frozen water environments. Some glacier bacteria have been found to be phylogenetically distinct from those found in temperate environments, demonstrating the biogeography of individual microorganisms in glacier ice (Christner et al., 2003; Xiang et al., 2010; Anesio and Laybourn-Parry, 2012; Franzetti et al., 2013; Knowlton et al., 2013).
Previous studies have also shown apparent geographic patterns of microbial communities across the snow slope surfaces of the mountain glaciers Kuytun 51, Qiangyong, and Rongbuk, among the mountain ice cores Dunde (140-m-long, drilled in 1987), Malan (102-m-long, drilled in 1999), Muztagata (37-m-long, drilled in 2003), and Puruogangri (89-m-long, drilled in 2000), and among the deep ice cores Greenland GISP2D and Antarctic Vostok 5G and Byrd, which illustrates the various microbial responses to climatic and environmental changes of glaciers and ice sheets (Xiang et al., 2009; An et al., 2010; Knowlton et al., 2013). The micro-biogeography of whole communities may be influenced by the dynamics of taxonomic groups. However, it is still not clear why specific microorganisms live in certain geographical glaciers, namely the geographic difference of the microbial taxonomic groups, which may behave as ecologically coherent units and environmental predictors in glacier systems. Only a few taxonomic groups are able to colonize and dominate in the snow, although numerous microorganisms are trapped in the surface snow (Zhang et al., 2008; Xiang et al., 2009, 2010; An et al., 2010). Previous limited data on glacier surface snow have shown that the bacteria Comamonadaceae and Flavisolibacter sp. are common in both the Kuytun 51 and Qiangyong glaciers, but only Rhodoferax (Betaproteobacteria) is dominant in the Kuytun 51 glacier. The changes of the dominant bacteria in glaciers are mainly influenced by processes such as wind deposition (airborne or aerosol-associated microorganisms carried by prevailing winds and dust-associated microorganisms carried by dust storm events), precipitation deposition (microbial deposition with snow, i.e., wet deposition), and post-deposition through microbial growth in the warming seasons on the glacier surface snow (Xiang et al., 2009; Price and Bay, 2012; Bottos et al., 2014; Peter et al., 2014; Meola et al., 2015; Miteva et al., 2015; Pearce et al., 2016). Among these processes, post-deposition has an important role in the transition of microbial communities in glaciers. Recent studies have shown influences of post-deposition on the transition of communities from light-sensitive cyanobacteria dominating the surface snow to non-light-sensitive bacteria buried in the subsurface snow (Xiang et al., 2009). The geographic differences in microbial communities across the mountain glaciers could be attributed to the mountain barriers, which might control microbial deposition by changing the prevailing wind directions and moisture sources, while the geographic patterns of the dominant microbial colonizers in glaciers might also be influenced by local climatic and environmental conditions (Nkem et al., 2006; Xiang et al., 2009, 2010; Demetras et al., 2010; Meola et al., 2015). The primary goal of this study was to evaluate how the geographic difference of bacterial communities at the taxonomic group level is controlled by the prevailing wind patterns across the mountain glaciers in western China. We investigated two different glaciers, the Muztagata glacier (38°17′ N, 75°04′ E) and the Dunde ice cap (38°06′ N, 96°24′ E). Six structures of bacterial communities were established from the Dunde ice core columns (at field depths of 0.8-5.3 m) using bacterial 16S rRNA gene clone library technology. Additionally, live bacteria were examined and related to the physical-chemical parameters from the Muztagata and Dunde ice cores.
STUDY AREA, DATA COLLECTION, AND METHODOLOGY

In this study, data were collected from the Muztagata Glacier (38°17′ N, 75°04′ E), the Dunde ice cap (38°06′ N, 96°24′ E), and the Puruogangri ice cap (33°54′ N, 89°10′ E), where precipitation patterns are mainly controlled by two different circulations, westerly and monsoon (as indicated by the highlighted arrows in Figure 1; Table 1). The Muztagata Glacier is located at the most western margin of the Tibetan Plateau, where precipitation is mainly controlled by westerly circulation originating in the arid and semiarid regions, including the deserts Sary-Ishykotrau, Muyun Kum, Kyzyl Kum, Kara Kum, Taklimakan, and Gurbantunggut (Wake et al., 1990). The Dunde ice cap is located on the northern margin of the Qaidam Basin, in the Qilian mountain region on the northeastern Tibetan Plateau, where the winter precipitation results from the incursion of westerly depressions along the southern slopes of the Himalaya (Murakami, 1987; Davis et al., 2005), while the summer precipitation is derived from the monsoon circulation from the Bay of Bengal to the central Himalaya, and further to the Qaidam Basin and the large depressions in the Taklimakan Desert and Qaidam Basin (Chen and Bowler, 1986; Davis et al., 2005). The Puruogangri ice caps are located in the center of the Tibetan Plateau, where precipitation is derived from a westerly direction during winter and from the Indian monsoon in summer (Wake et al., 1993; Shi and Liu, 2000). The Muztagata ice core (37-m-long) was extracted at 7010 m ASL (above sea level) from the Muztagata Glacier in the summer of 2003. The Dunde ice core (9.5-m-long) was extracted at 5325 m ASL from the Dunde ice cap summit in October 2002 (Wu et al., 2009). The visible stratigraphic features were recorded immediately after ice core drilling. All ice cores were returned frozen to the freezer room (air temperature between −18°C and −24°C) at the Key Laboratory of the Ice Core and Cold Regions Environment of the Chinese Academy of Sciences. The ice core sections were split lengthwise into four portions and stored in a refrigerated room at a temperature of −18°C to −24°C. A 10 ml aliquot of melt-water from the Muztagata and Dunde ice cores was used for the analysis of mineral particles. Total micro-particle concentrations were measured using a Coulter counter Multisizer 3 (Beckman). A total of 44 ice samples were analyzed from the Muztagata ice core taken at a depth of 2.5-12.5 m, and 74 samples were analyzed from the Dunde ice core taken at a depth of 0.50-9.8 m. A 10 ml aliquot of melt-water from the Dunde ice core was used for analysis of the stable isotopic ratios, 18O/16O (δ18O), in the precipitation. A Finnigan MAT-252 mass spectrometer was used to determine δ18O values to within ±0.05‰. The Dunde ice core was dated using seasonal δ18O variations and annual visible dust layers, and the dating was confirmed by previously published data (Takeuchi et al., 2009). The Muztagata ice core dating and δ18O data were previously described by Tian et al. (2006). The ice core sections were cut into small ice columns at intervals of 12-30 cm using a band saw within the walk-in freezers (−18°C to −24°C). Microbial analyses were carried out on 156 and 37 samples from Muztagata and Dunde, respectively. The ice samples were cut between the visible dust layers, and ice layers were collected separately.
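For reference, the δ18O values reported here follow the standard delta notation, in which a sample's isotopic ratio is expressed as a per mil (‰) deviation from a reference standard; VSMOW is the usual standard for precipitation, though the text does not name the standard used:

```latex
% Delta notation for the oxygen isotope ratio, in per mil.
% The choice of VSMOW as the reference standard is an assumption;
% the source text does not specify the standard used.
\delta^{18}\mathrm{O} =
\left(
  \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}
       {\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{VSMOW}}} - 1
\right) \times 1000
```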
The improved procedures were used for the decontamination of the outer surfaces of the ice core samples. The snow and firn-ice columns (length approximately 15 cm, diameter 5 cm) were decontaminated by cutting away a 10-mm annulus with an autoclaved sterile sawtooth knife. The knife was sterilized over an alcohol flame after each ice slice was cut, and a total of three sterile sawtooth knives were used for each ice sample. The decontaminated samples were then completely melted in clean, sterile glass beakers at 4°C. These handling procedures were undertaken at temperatures below 20°C within a sterile, positive pressure laminar flow hood, as described before. The freshly melted water (10 ml) from the Muztagata and Dunde ice cores was 10-fold diluted with sterile filtered water. A total of 100 µl of diluted sample was added to a known concentration of the fluorescent-dyed bead solution Trucount (Becton Dickinson), mixed with the cell sorting markers carboxyfluorescein diacetate (cFDA) and propidium iodide (PI). Three groups of bacteria could be identified based on the difference in bound probes: the cFDA-stained, cFDA/PI-double-stained, and PI-stained groups, indicating viable, injured, and dead cells, respectively (Xiang et al., 2009). The cFDA and PI stains were prepared separately following the method of Amor et al. (2002), except that the cell suspensions were incubated for 15 min in the dark at room temperature (25°C) for cell staining. A 100 µl aliquot of sterile filtered water served as a reagent blank. The live and total cell numbers in the melt-water were determined with a precision of ±0.05% using a FACSCalibur flow cytometer (Becton Dickinson Immunocytometry Systems, San Jose, CA, USA), following the manufacturer's instructions. For DNA analysis, six clone libraries of bacterial 16S rRNA genes were constructed from the Dunde ice cap. Approximately 400 ml of ice core melt-water was used for the DNA extraction. DNA extraction and subsequent clone library construction were conducted following the same protocols as previously used in the microbial analysis of the Kuytun 51 Glacier samples (Xiang et al., 2009). All reagent transfers for DNA analysis were performed within a sterile, positive pressure laminar flow hood. All reaction tubes and micropipette tips were autoclaved, and all solutions except the Taq DNA polymerase (2.5 U, TaKaRa) were passed through sterile 0.2 µm filters (Xiang et al., 2004). The 16S rRNA gene amplicons used for the establishment of clone libraries from the Dunde ice core were generated by PCR amplification with the bacterial universal primer pair 8f (5′-AGAGTTTGATCATGGCTCAG) and 1492R (5′-CGGTTACCTTGTTACGACTT; Lane, 1991; Weisburg et al., 1991). To avoid possible bias, three PCR products were pooled and used to establish a clone library from each ice column. A total of 137 clones were selected for sequencing by HaeIII-based ARDRA (amplified rRNA restriction analysis) out of the 406 clones from the Dunde ice core. Each sequence was named using the initials of the Dunde ice cap (DD1, denoting one of the five ice cores drilled in October 2002; Wu et al., 2009), along with the ice depth (D84, D107, D238, D324, D386, and D466: 84, 107, 238, 324, 386, and 466 cm below the snow surface), followed by the clone number (1-163). For example, clones DD1D84-9, DD1D107-55, and DD1D466-123 were clone representatives of ice core DD1 taken at depths of 84, 107, and 466 cm below the snow surface.
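The bead-calibrated counting described above reduces to a simple calculation: the ratio of cell events to bead events, scaled by the known number of beads per tube, divided by the stained volume, and corrected for the 10-fold dilution. The sketch below illustrates this arithmetic; the event counts, bead number, and volumes are illustrative assumptions, not values from this study.

```python
# Hedged sketch of bead-calibrated absolute counting with Trucount-style
# reference beads. All numeric values below are hypothetical.

def cells_per_ml(cell_events, bead_events, beads_per_tube,
                 stained_volume_ml, dilution_factor):
    """Absolute cell density from bead-calibrated flow cytometry events."""
    if bead_events == 0:
        raise ValueError("no bead events recorded; cannot calibrate")
    cells_in_stained_volume = cell_events * beads_per_tube / bead_events
    return cells_in_stained_volume / stained_volume_ml * dilution_factor

# Example: 52 cFDA-positive (live) events against 48,000 bead events,
# assuming 50,000 beads per tube, a 0.1 ml stained aliquot, and the
# 10-fold dilution of the melt-water described above.
live_density = cells_per_ml(52, 48_000, 50_000, 0.1, 10)
print(f"live cells: {live_density:.2e} cells/ml")  # ~5.4e+03 cells/ml
```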
The GenBank accession numbers of the cloned sequences obtained from the Dunde ice core are KU060881-KU061017. All 137 sequences from the Dunde ice cap were checked with DECIPHER (Wright et al., 2012; sequence chimera check tool) and aligned with the BLAST references (Altschul et al., 1990) using ClustalX (Thompson et al., 1997). A Neighbor-Joining phylogeny for the aligned sequences was constructed using MEGA 6.0 (Tamura et al., 2013) with the pairwise deletion mode for gaps (bootstrap analysis, 100 replicates) and the Maximum Composite Likelihood (MCL) model for substitutions. The archaeal 16S rDNA sequences from Methanosaeta harundinacea strain 8Ac (accession no. AY817738) and Methanosaeta concilii strain GP6 (accession no. NR102903) were used as outgroup references on all trees. All the sequences obtained from the glaciers were identified against recognized species and related to ecological clusters (e.g., Variovorax sp. and Herbaspirillum sp. in the Betaproteobacteria subphylum). Obtained sequences displaying similarities of >97% with known species were identified as the reported species. Most of the obtained clones were related to known cultivated genera or genus clones (e.g., Ketogulonicigenium sp., Cyanobacterium sp., and Sphingobacterium sp.). A few clones had <97% similarity with reported species and were thus designated separately.

Seasonal Changes in Physical-Chemical and Biological Parameters in the Muztagata Ice Core

There was an obvious seasonal effect on temperature and biological parameters along the ice core extracted at 7010 m ASL on the Muztagata Glacier (Figure 2). An apparent seasonal temperature change was indicated by the proxy value of the stable isotopic ratios, 18O/16O (δ18O), with low values in winter and high values in summer (Figure 2B). The live cell density was highly variable and ranged from 6.5 × 10² to 2.1 × 10⁴ cells/ml between 1964 and 2000 (Figure 2A). The total cell density varied from 4.4 × 10⁴ to 8.7 × 10⁵ cells/ml (Figure 2C). Several live cell density peaks formed during the summer seasons of 1969, 1970, 1973, 1979, 1982, 1983, 1988, 1990, and 1993, for a total of nine events, a1 to a9 (open triangles in Figure 2A), while other cell density peaks were found in spring (filled triangles in Figure 2A).

[Figure 2 caption (recovered in part): The ice core was annually dated using seasonal δ18O variations and annual visible dust layers, and the beta radioactivity peak from the 1963 nuclear weapon tests was identified at a depth of 37.89 m (Tian et al., 2006). (C) Total bacterial cell density estimated by flow cytometry with cFDA/PI staining (see "Study Area, Data Collection, and Methodology"). The annual ice layers ranged from 50 to 136 cm, and the years are indicated by the dashed lines in (A-C). Data are presented only for the ice core section in the depth range from 2.21 to 37 m, since the annual layers become thinner below 35 m and approach the bottom of the glacier (glacier depth 52.6 m; Tian et al., 2006).]

[Figure 3 caption: Correlation between mineral particle concentrations and total cell density in the Muztagata ice core. (A) Correlation between total cell density and mineral particle concentrations at ice core depth 2.5-9.3 m. (B) Total bacterial cell density and mineral particle concentrations. Total micro-particle concentrations were measured using a Coulter counter Multisizer 3 (Beckman).]
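Returning to the assignment rule stated in the methods above, clones sharing >97% sequence similarity with a known species were named after that species. The sketch below shows the underlying computation on pre-aligned sequences; the reference names and toy sequences are hypothetical stand-ins for the ClustalX alignments actually used.

```python
# Minimal sketch of the >97% identity rule for naming clones.
# Sequences are assumed to be pre-aligned (equal length, '-' for gaps).

def percent_identity(aligned_a, aligned_b):
    """Identity over aligned columns, skipping columns gapped in both."""
    assert len(aligned_a) == len(aligned_b)
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b)
             if not (a == "-" and b == "-")]
    return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

def assign(clone_name, aligned_clone, references):
    """Name the clone after its best-matching reference if identity > 97%."""
    best_name, best_id = max(
        ((name, percent_identity(aligned_clone, seq))
         for name, seq in references.items()),
        key=lambda item: item[1])
    if best_id > 97.0:
        return best_name, best_id
    return f"{clone_name} (designated separately)", best_id

refs = {  # hypothetical aligned reference fragments
    "Polaromonas sp.": "ACGTACGTACAGTTGCAGTC",
    "Pedobacter sp.":  "ACGTTCGAACAGTAGCAGGA",
}
print(assign("DD1D84-9", "ACGTACGTACAGTTGCAGTC", refs))
```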
This ice core also had an increased density of the total number of microorganisms in the summers of 1978, 1988, and 1993 (open triangles c1, c2, and c3 in Figure 2C) and in the springs of 1995 and 2000 (c4 and c5 in Figure 2C), which was consistent with the live cell density patterns (Figure 2A). The microbial cell density correlated with the concentrations of mineral particles, with a high R² value of 0.68, but only from 1994 to 2000 (ice core depth 2.5 to 9.3 m; Figures 3A,B); it did not correlate with mineral particle concentrations from 1990 to 1993 (ice core depth 9.3 to 12.5 m, R² < 0.1; Figure 3B).

Changes in Physical-Chemical and Biological Parameters in the Dunde Ice Core

Seasonal analysis of the Dunde ice core was not successful due to the limitations of sample resolution (Figure 4). Oxygen isotope ratios of the melt-water samples from the Dunde ice core ranged from −10.78‰ to −8.24‰ (temperature proxy 18O/16O; Figure 4D), while microbial cell density varied from 1.2 × 10³ to 9.1 × 10⁴ cells/ml (Figure 4B) and from 1.3 × 10⁵ to 1.9 × 10⁶ cells/ml (Figure 4C) for live and total cell density, respectively. Three peaks of the total cell density, c2, c3, and c4, were found in the springs of 1988/1989, 1992, and 2000, while only one peak, c1, was found in the summer of 1985 (Figure 4C). The live cell density response pattern was consistent with the total cell density tendency (the dashed lines in Figures 4B,C). An abundance of microbial cells frequently occurred at the dirty ice layers (cell density peaks c1, c3, and c4 at the dust layers labeled a1, a3, and a4 at the dashed lines in Figures 4A,C) but was rarely found at the clean ice layer (small density peak c2 at the a2 ice layer in Figure 4).

Changes in Proportion of the Main Bacterial Genera along the Dunde Ice Core Profile

There was a large difference in the proportion of the main phylogenetic groups along the Dunde glacier depth profile, which indicated the seasonal changes of microbial communities in the glacier (Figures 8A1-A6). The bacterial clones comprised five dominant genus groups, Polaromonas sp., Pedobacter sp., Flavobacterium sp., Propionibacterium/Blastococcus sp., and Cryobacterium-related, which accounted for more than 55% of the total 406 clones and frequently appeared in the six tested ice layers from 1990 to 2000 (dashed lines in Figures 8A1-A6). Nine genus groups, such as Rhodoferax sp., Variovorax sp., Burkholderiales, Flectobacillus sp., Cytophagales, Sphingobacteriaceae, Knoellia sp., and Cyanobacteria, rarely occurred in the ice. Other opportunistic bacterial clones occasionally appeared in the ice.

[Figure 5 caption: Phylogenetic analysis of the 16S rRNA genes for Alphaproteobacteria, Betaproteobacteria, Gammaproteobacteria, and Deltaproteobacteria clones from the Dunde ice core and the closest relatives. The tree was generated by the Neighbor-Joining method after sequence alignment and rooted with two Methanosaeta strains (accession no. AY817738 and NR102903). Bootstrap values (100 replications) are specified for each node; the cut-off value for the condensed tree was 55%. Numbers of the obtained snow-ice clones (with the same ARDRA pattern as the sequenced representatives listed on the tree) and the sequence affiliations with their GenBank accession numbers are provided in parentheses. The sequences discussed in this study are noted in bold. See "Study Area, Data Collection, and Methodology" for a detailed description of the assigned sequence references and numbers.]
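The correlation reported above (R² = 0.68 between mineral particle concentration and total cell density for 1994-2000) is an ordinary least-squares statistic that can be reproduced on any paired series. The arrays below are made-up placeholders, not the measured Muztagata values.

```python
# Sketch of the R^2 computation between particle concentration and total
# cell density along an ice core section. Data arrays are illustrative.
import numpy as np

particles = np.array([0.8, 1.5, 2.3, 3.1, 4.0, 5.2])          # arbitrary units
cells = np.array([1.2e5, 2.0e5, 2.9e5, 3.3e5, 4.6e5, 5.5e5])  # cells/ml

slope, intercept = np.polyfit(particles, cells, 1)    # least-squares line
predicted = slope * particles + intercept
ss_res = np.sum((cells - predicted) ** 2)             # residual sum of squares
ss_tot = np.sum((cells - cells.mean()) ** 2)          # total sum of squares
r_squared = 1.0 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```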
[Figure 6 caption: Phylogenetic analysis of the 16S rRNA genes for the Actinobacteria, Cyanobacteria, Verrucomicrobia, and Firmicutes clones from the Dunde ice core and the closest relatives. The tree was constructed following the protocol described in Figure 5.]

DISCUSSION

Previous studies have shown the prevalence of specific bacteria in certain local glaciers (Zhang et al., 2008; Xiang et al., 2009; An et al., 2010; Franzetti et al., 2013; Miteva et al., 2015). However, our findings demonstrate that the members of bacterial genus-related groups are highly similar within the related ice cores at a historical scale, whereas the composition of the prevalent genus-related groups differs largely across geographically different glaciers. This indicates that the micro-biogeography associated with geographic differences was mainly influenced by a few dominant taxonomic groups.

Methodological Considerations

Contamination of the DNA samples (from the inner core columns) used in this study is unlikely, because the outer surfaces of the ice cores and the reagents for DNA analysis were cautiously decontaminated, and all procedures were performed within a sterile, positive pressure laminar flow hood. Only small DNA fragments (<100 bp) were detected from the ice column control (autoclaved sterile water), and these were not considered for further sequence analysis in this study. It should be noted that Herbaspirillum sequences, also found in this study, have previously been identified as potential contaminants in glacier debris and ice samples (Cameron et al., 2016). However, the experimental procedures used by Cameron were completely different from ours: their protocols were applied to glacier cryoconite debris and surface ice samples. The Herbaspirillum sp. found in this study are well-known plant root-associated nitrogen-fixing (Baldani et al., 1986) and non-nitrogen-fixing environmental species (Ding and Yokota, 2004; Dobritsa et al., 2010). They have also been reported in the Alaska Gulkana glacier, an Antarctic glacier forefield, and the Antarctic Lake Vida brine (Segawa et al., 2011; Bajerski et al., 2013; Kuhn et al., 2014). Various molecular techniques, including flow cytometry (FCM) with the cell stains cFDA, PI, and SYTOX, have been used to investigate viable bacteria (Amor et al., 2002; Schumann et al., 2003; Xiang et al., 2009). These tools helped us to examine the abundance of live cells and the potential metabolic activities of microorganisms in an environment. However, the FCM approach has certain limitations because of interference from dust particles or spurious abiotic autofluorescence and the underestimation of accurate cell counts under typical FCM parameters (Stibal et al., 2015). Despite these limitations, the background noise can be counterweighed by data series from the ice core profiles. In this study, the apparent seasonal tendency suggests that our analyses were based on a substantial fraction of the bacteria. For the phylogenetic analysis of bacteria, more than 600 clones were picked and sequenced. A total of 406 valid bacterial clones were obtained from the Dunde ice core after vector and chimera checking. The rarefaction curves of the six clone libraries from the ice core were approaching asymptotes (data not shown). The data also showed the prevalence of a few dominant genus-related groups at the different ice core depths (Figures 8A1-A6). This indicates that the identified clones were based on the dominant bacterial taxa.
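The rarefaction curves mentioned above, used to judge whether the six clone libraries were approaching asymptotes, can be computed analytically with Hurlbert's formula for the expected number of groups observed in a random subsample of n clones. The clone counts below are hypothetical, not the study's libraries.

```python
# Analytical rarefaction for a single clone library. counts[i] is the
# number of clones falling in group i; the values below are made up.
from math import comb

def expected_groups(counts, n):
    """Hurlbert's expected number of groups in a subsample of n clones."""
    N = sum(counts)
    # comb(N - Ni, n) is 0 when n > N - Ni, i.e. group i is always sampled.
    return sum(1.0 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

library = [25, 14, 9, 6, 4, 2, 1, 1, 1]  # hypothetical clone counts per group
for n in (10, 30, 50, sum(library)):
    # A flattening curve as n grows indicates the library nears its asymptote.
    print(n, round(expected_groups(library, n), 2))
```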
Dust Deposition and Microbial Distribution along the Glacial Depth Profiles

The present data sets from the Muztagata glacier at 7010 m ASL (38°17′ N, 75°04′ E) revealed a high correlation between dust and microbial abundance from 1994 to 2000, which indicated a strong influence of aeolian activities on microbial deposition in the glacier snow (Figures 3A,B). This was also consistent with another independent microbial investigation on the Muztagata glacier at 6300 m above sea level (Liu et al., 2013). The Dunde ice core also presented a frequent association of microbial cell density peaks with high concentrations of mineral particles (a1, a3, and a4 versus c1, c3, and c4 in Figures 4A,C). A strong association of microorganisms with dust was also found in previous data from the Antarctic glacier (Abyzov et al., 1998; Priscu et al., 2008), the Malan glacier, and the Guoqu glacier on the Tibetan Plateau. Analyses of trace and rare earth elements extracted from the same series of Dunde ice core sections showed that the fine fractions in the Dunde dust were more similar to those in the western Qaidam Basin and Tarim Taklimakan Desert than to those in the Badain Jaran and Tengger Deserts (Wu et al., 2009). The Nd-Sr isotopic composition of mineral particles in the Dunde ice core is also similar to that of desert sand from Qaidam and Tarim Taklimakan. All results revealed that the Qaidam Basin and the Tarim Taklimakan Desert were the main sources of dust in the Dunde glacier, implying the transportation of dust-borne microorganisms from the western Taklimakan Desert and the adjacent Qaidam Basin to the Dunde glacier. However, the Muztagata ice core data showed independence of microbial load from dust deposition at ice core depths of 9.3 to 12.5 m (Figure 3B). The Dunde ice core data also showed one small cell density peak, c2, appearing at the clean ice layer a2 (Figure 4). These results indicate that microbial deposition in the glacier snow is not always associated with dust deposits or "dirty" wind and may in fact be transported by "clean" wind or snow, which implies influences of processes such as aerosol and precipitation deposition, along with other factors (Bottos et al., 2014; Pearce et al., 2016).

Seasonal Fluctuation of Bacterial Density at Variable Temperatures

The present data sets from the Muztagata glacier at 7010 m ASL (38°17′ N, 75°04′ E) revealed clear seasonal patterns, with high microbial cell density occurring in the warming summer months (open triangles in Figure 2), which indicated positive temperature effects on the microbial density patterns. This was consistent with another independent microbial investigation on the Muztagata glacier at 6300 m ASL (38°17′ N, 75°06′ E; Liu et al., 2013). The high repeatability between the two ice cores from the Muztagata glacier confirms the reliability of the data sets discussed here. Evidence for a positive temperature effect includes the growth of the red snow alga Chlamydomonas at the snow surface in New Zealand, on the Alaska Harding Icefield, and on Greenland's and Iceland's glaciers in late spring and summer (Thomas and Broady, 1997; Takeuchi et al., 2006; Yallop et al., 2012; Lutz et al., 2015). Further temperature effects on bacterial growth, colonization, and community transition were reported on the Kuytun 51 Glacier, where Cyanobacteria were dominant across the surface snow slope in the warming spring-summer but rare in the subsurface winter snow layers (Xiang et al., 2009).
As expected, the live cell density during the summer was high as a result of microbial growth in the surface snow. Other groups, Uetake et al. (2006), Yao et al. (2008), and Price and Bay (2012), also found high microbial abundance in the warming spring-summer seasons in the Sofiyskiy glacier in the south Chuyskiy range of the Russian Altai, the Guoqu glacier in the Geladaindong mountain region, and deep Arctic and Antarctic ice cores, respectively. The obvious seasonal patterns of bacterial populations, with high cell density in summer, strengthen the case for a post-deposition effect on microbial populations in glaciers. In addition to the cell density peaks during the summer (open triangles in Figures 2A,C), there were also many density peaks in the spring from 1963 to 2000 (filled triangles in Figures 2A,C). The seasonal pattern of bacterial density was generally consistent with the dynamics of mineral particle deposition, with frequent dust outbreaks in spring and summer (Figures 3 and 4 in this study; Wu et al., 2008; Liu et al., 2015). This indicates an important influence of dust deposition on the microbial communities in glaciers. All of these results suggest a fundamental contribution of dust-microbe deposition to the basic population pool size and an intensifying effect of post-deposition microbial growth in the warming seasons.

Geographic Difference of Microorganisms in the Glacier Ice

The present data showed that the Polaromonas sp. from the Dunde ice core clustered together more closely than those from other environments (Figure 5). The phenomenon of Polaromonas sp. from the same location readily grouping together was also found in the Muztagata and Puruogangri glaciers (Xiang et al., 2010). Although Polaromonas sp. are widely distributed across geographically different glaciers, statistical analyses demonstrated a large genetic distance among 43 unique glacier Polaromonas sequences, which was positively associated with geographic distance (Franzetti et al., 2013). A similar geographic phenomenon of individual microorganisms was also found in deep ice cores. Alternaria sp. (a fungal genus) were common in the deep ice cores of Greenland GISP2D and Antarctic Vostok 5G and Byrd, but their DNA sequences were phylogenetically different between the two polar regions (Knowlton et al., 2013). The geographic differences of Polaromonas sp. and Alternaria sp. across isolated glaciers suggest that the mountain "barriers" to microbial transportation can be surmounted by suitable adaptations, which leads to the geographic patterns of individual microorganisms. Geographic differences are evident not only for Polaromonas sp. and Alternaria sp. but also for taxonomic groups. There is an obvious geographic distinction of taxonomic groups in the cryoconite habitats of three High-Arctic glaciers relative to the associated moraines and adjacent tundra on the Brøggerhalvøya peninsula, Svalbard (Edwards et al., 2013, 2014). Significant differences in the composition of dominant taxonomic groups are also found between alpine and Arctic cryoconite habitats (Edwards et al., 2014). The present data from the Dunde ice core showed that similar taxonomic groups frequently appeared along the ice core profiles as historical events (Figures 8A1-A6). The bacterial genus groups Cryobacterium-related, Flavobacterium sp., Pedobacter sp., Polaromonas sp., and Propionibacterium/Blastococcus sp.
were frequently found in the six tested ice layers of the Dunde glacier from 1990 to 2000 (Figures 8A1-A6). Another example of similar group members sharing related ice core layers can be found in the recently reported Dunde ice: the genera Polaromonas sp. and Flavobacterium sp., commonly found between 1990 and 2000, were also identified in the Dunde ice column from AD 1780-1830. Although the dominant genus-related groups are similar within the related ice cores, the composition of the main genus-related groups differs largely across geographically different glaciers. The Cryobacterium-related bacteria were more abundant in the Dunde ice cap than in the Muztagata glacier, while Enterobacter sp. appeared throughout the four tested ice layers of the Muztagata glacier but rarely in the Dunde ice cap (Figures 8A1-A6,B1-B4). Seven genus groups, Polaromonas sp., Enterobacter sp., Acinetobacter sp., Flexibacter sp., Thermus sp., Propionibacteria/Luteococcus sp., and Flavisolibacter sp., were frequently identified in the four tested ice layers of the Muztagata glacier from 1970 to 1988 (labeled with dashed lines in Figures 8B1-B4), while Polaromonas sp. and Flexibacter sp. were found in all three tested ice columns of the Puruogangri glacier from 1600 to 1920 (An et al., 2010). All results clearly show that a few genus-related groups are dominant in the mountain ice cores and constitute the main taxonomic groups endemic to the local glacier regions. The difference in taxonomic group members across geographically different glaciers suggests that the intermingling of bacterial taxonomic groups is limited by geographic separation. More data on microorganisms in deep ice are necessary for a better understanding of the biogeography of microorganisms in glaciers. The geographic pattern of bacterial taxonomic groups could be attributed to the influence of the moisture and dust source areas, which vary across the mountain glaciers on the Tibetan Plateau (Figure 1; Table 1). Precipitation over the Muztagata glacier is mostly influenced by westerly depressions, while precipitation over the Dunde ice cap and Puruogangri ice cap is mainly driven by westerly depressions in winter and the Indian monsoon in summer (Murakami, 1987; Wake et al., 1990; Davis et al., 2005). Dust in the Muztagata mountain glacier is mainly derived from deserts including Sary-Ishykotrau, Muyun Kum, Kyzyl Kum, Kara Kum, Taklimakan, and Gurbantunggut (Figure 1; Wake et al., 1993), while the Dunde ice cap is very close to the Gobi Desert and Qaidam Basin (Figure 1), and thus its dust components are more likely strongly affected by local dust storms and dominated by mineral particles from the Qaidam Basin and Tarim Taklimakan Desert (Wu et al., 2009). The dramatic changes in moisture sources and dust pathways across the mountain glaciers may lead to differences in the microbial communities deposited in the glacier snow. Moreover, the heterogeneity of local conditions such as temperature, light intensity, melt-water availability, and nutrient concentrations in the snow may drive the spatial patterning of the microbial community by influencing the colonization of the dominant endemic species in the snow. How surface communities are incorporated into the cores, how much they change after burial, and how post-depositional processes contribute to the geographic differences of microbial communities are still open questions.
More data on the microbiological, meteorological, and physical and chemical characteristics of the glacier surface and subsurface snow and of ice cores will be helpful for a better understanding of the biogeography of microorganisms in glaciers.

CONCLUSION

The members of bacterial genus-related groups were found to be similar within the related ice cores at a historical scale but largely different between the two glaciers, Muztagata and Dunde, even though microbial communities fluctuated along the two ice core depth profiles. Compared to the Muztagata glacier, the Dunde ice core presented distinct members of the taxonomic groups. The five bacterial genus groups Polaromonas sp., Pedobacter sp., Flavobacterium sp., Propionibacterium/Blastococcus sp., and Cryobacterium-related frequently appeared in the six tested ice layers, constituting the dominant species endemic to the Dunde ice cap, while the seven genus groups Polaromonas sp., Enterobacter sp., Acinetobacter sp., Flexibacter sp., Thermus sp., Propionibacteria/Luteococcus sp., and Flavisolibacter sp. were frequently found at the four tested ice depths of the Muztagata glacier. The results demonstrate that the spatial differences in microbial communities between the two ice cores are more significant than the temporal differences. This study also showed a seasonal pattern of microbial cell density, with high cell density occurring in the warming spring-summer.

AUTHOR CONTRIBUTIONS

YC: design of the laboratory experiment outline; data collection, analysis, and interpretation; and drafting of the manuscript. X-KL: sequence data analysis and interpretation. JS: sequence data collection, analysis, and interpretation. G-JW: mineral particle concentration measurements of the ice cores, data analysis, and interpretation. L-DT: oxygen isotope ratio analysis and interpretation. S-RX: design of the research outline, data analysis and interpretation, and revision of the manuscript.

FUNDING

This work was supported by the NSF projects of China (Grants 31400430, 40471025, and 40871046).
Deep Metric Multi-View Hashing for Multimedia Retrieval Learning the hash representation of multi-view heterogeneous data is an important task in multimedia retrieval. However, existing methods fail to effectively fuse the multi-view features and to utilize the metric information provided by dissimilar samples, leading to limited retrieval precision. Current methods utilize weighted sum or concatenation to fuse the multi-view features. We argue that these fusion methods cannot capture the interaction among different views. Furthermore, these methods ignore the information provided by dissimilar samples. We propose a novel deep metric multi-view hashing (DMMVH) method to address these problems. Extensive empirical evidence is presented to show that gate-based fusion is better than typical methods. We introduce deep metric learning to the multi-view hashing problem, which can utilize the metric information of dissimilar samples. On MIR-Flickr25K, MS COCO, and NUS-WIDE, our method outperforms the current state-of-the-art methods by a large margin (up to 15.28% mean Average Precision (mAP) improvement). (* Lingfang Zeng is the corresponding author.) I. INTRODUCTION Multi-view hashing is utilized to solve multimedia retrieval problems. A well-designed multi-view hashing algorithm can dramatically improve the precision of multimedia retrieval tasks. Different from single-view hashing, which only searches in a single-view way, multi-view hashing can utilize data from different sources (e.g., image, text, audio, and video). Multi-view hashing representation learning first extracts heterogeneous features from different views, then fuses the multi-view features to capture a global representation of the different views. Current multi-view hashing algorithms suffer from low retrieval precision, mainly for two reasons. First, the fusion of multi-view features is insufficient in current multi-view hashing algorithms. To obtain a global representation, typical multi-view hashing methods (e.g., Deep Collaborative Multi-View Hashing (DCMVH) [1] and Flexible Multi-modal Hashing (FMH) [2]) utilize weighted sum or concatenation to fuse the multi-view features. The relationship between the texts and images is ignored during the fusing process, which results in weak expressiveness of the obtained global representation. Second, current methods are confined to the information provided by similar samples; the importance of measuring the distance between dissimilar samples is underrated. For instance, Flexible Graph Convolutional Multimodal Hashing (FGCMH) [3] is a GCN-based [4] multi-view hashing method, which constructs the edges of a graph based on similarity and aggregates features of adjacent nodes. Hence, dissimilar samples do not play a role during this procedure. We propose a Deep Metric Multi-View Hashing method termed DMMVH. It takes advantage of Context Gating [5] to learn the interaction and dependency between the image and text features. Unlike typical methods, DMMVH fuses multi-view features into a global representation without losing the dependency among these features. Moreover, deep metric learning is introduced to DMMVH (Fig. 1), together with a hyper-parameter to reduce the complexity of the designed loss function. The optimal embedding space is obtained through deep metric learning, which follows the semantics-preservation principle of hash representation learning.
We evaluate our method on the MIR-Flickr25K, MS COCO, and NUS-WIDE datasets in multi-view hash representation learning benchmarks. The proposed method provides up to 15.28% mAP improvement in these benchmarks. Our main contributions are as follows: • We propose a novel multi-view hashing method, which achieves state-of-the-art results in multimedia retrieval. • We take advantage of Context Gating to learn a better global representation of different views to address the insufficient fusion problem. • Deep metric learning is introduced to multi-view hashing for the first time. A deep metric loss with linear complexity is designed and optimized. II. THE PROPOSED METHODOLOGY DMMVH aims to utilize a newly designed deep metric loss to train a deep multi-view hashing network. We first present the deep multi-view hashing network, which deeply fuses the multi-view features into a global representation. Then the new deep metric loss is illustrated. Eventually, a hyper-parameter λ is introduced to reduce the complexity. A. Deep Multi-View Hashing Network The deep multi-view hashing network is designed to convert multi-view data into hash codes. As shown in Fig. 2, DMMVH consists of a vision backbone, a text backbone, normalization modules, a multi-view fusion module, and a hash layer. These modules are described in detail below. 1) Vision Backbone: Deep ResNet [6] is employed to produce visual features. 2) Text Backbone: BERT-base [7] is utilized to extract text features. 3) Normalization Modules: the extracted features of each view are normalized before fusion. 4) Multi-View Fusion Module: Context Gating is applied to the concatenated multi-view feature: X_global = σ(w_fusion X_concat + b_fusion) ◦ X_concat, (1) where X_concat ∈ R^n is the multi-view feature vector, σ is the element-wise sigmoid activation, and ◦ is the element-wise multiplication. w_fusion ∈ R^(n×n) and b_fusion ∈ R^n are trainable parameters. The vector of weights σ(w_fusion X_concat + b_fusion) ∈ [0, 1]^n represents a set of learned gates applied to the individual dimensions of the input feature X_concat. 5) Hash Layer: A linear layer with a tanh activation serves as the hash layer, which can be represented as h = sgn(tanh(w_hash X_global + b_hash)), where sgn represents the signum function and w_hash and b_hash are trainable parameters. The output has the same number of dimensions as the hash code.
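As a concrete illustration of the fusion and hash modules just described, the following PyTorch sketch implements gate-based fusion and a tanh hash layer. It is a minimal reconstruction under the definitions above, not the authors' released code; the feature dimensions, module names, and the mapping to the code length are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContextGatingFusion(nn.Module):
    """Gate-based fusion: learned sigmoid gates re-weight each dimension
    of the concatenated multi-view feature (a sketch of Eq. (1))."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)  # plays the role of w_fusion, b_fusion

    def forward(self, x_concat):
        # sigma(w x + b) * x : element-wise gating of the input feature
        return torch.sigmoid(self.gate(x_concat)) * x_concat

class HashLayer(nn.Module):
    """Linear layer with tanh activation; sgn yields the binary code at
    retrieval time, while the relaxed tanh output is used for training."""
    def __init__(self, dim, code_len):
        super().__init__()
        self.fc = nn.Linear(dim, code_len)

    def forward(self, x):
        h = torch.tanh(self.fc(x))
        return h, torch.sign(h)  # relaxed code, binary code

# usage: fuse pre-extracted ResNet-50 (2048-d) and BERT-base (768-d) features
img, txt = torch.randn(4, 2048), torch.randn(4, 768)
x = torch.cat([img, txt], dim=1)          # X_concat, n = 2816
fused = ContextGatingFusion(2816)(x)      # X_global
h, b = HashLayer(2816, 128)(fused)
print(h.shape, b.shape)                   # torch.Size([4, 128]) twice
```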
B. Deep Metric Loss Assume that the training dataset is {(x_i, y_i)}, i = 1, ..., N, where x_i ∈ R^D is a multi-view instance and y_i denotes the category information of x_i. Furthermore, F: x → h denotes the deep multi-view hashing network, which maps the input space to the Hamming space; let h_i = F(x_i) ∈ {−1, 1}^K be the hash code of x_i. Then we have an elegant linear relationship between the Hamming distance dist_H(·, ·) and the inner product ⟨·, ·⟩: dist_H(h_i, h_j) = (K − φ_ij)/2, (2) where φ_ij = ⟨h_i, h_j⟩. For x_i, its label is y_i ∈ {0, 1}^C, where C is the number of categories; notice that one sample may belong to multiple categories. Given the semantic label information, the pairwise similarity matrix S = {s_ij} can be defined as follows: if x_i and x_j are semantically similar then s_ij = 1; otherwise s_ij = 0. Provided the matrices Φ = (φ_ij) and S = (s_ij), combining the cross-entropy loss and deep metric learning yields the loss function L_m = Σ_ij s_ij (log(1 + e^φ_ij) − φ_ij). (3) Since s_ij can only be 0 or 1, when s_ij = 0 the loss L_m vanishes, which means the dissimilar samples do not play any role in the training. Notice that the first part of the metric loss is log(1 + e^φ_ij). Considering the elegant linear relationship between Hamming distance and the inner product, i.e., Eq. (2), as the inner product φ_ij decreases, the Hamming distance increases. Therefore, this part is a proper metric loss: it punishes dissimilar samples having a closer distance in the embedding space while rewarding a larger distance between them. Due to the above analysis, we revise Eq. (3) as L_m = Σ_ij [ s_ij (log(1 + e^φ_ij) − φ_ij) + w_d (1 − s_ij) log(1 + e^φ_ij) ], (4) where w_d represents the loss weight of dissimilar sample pairs. With this revision, the dissimilar samples can also help the training. The derivation of the metric loss can be found in the appendix. C. Hyper-parameter λ Notice that calculating the matrix Φ or S has O(N²) complexity. By introducing a hyper-parameter λ, calculating either of them takes only O(λ²bN) complexity, where b is the batch size. We randomly choose a portion of the samples to calculate the similarity matrix, instead of calculating a global similarity matrix S for every sample. Assume the samples are already shuffled. Let b be the batch size and λ be a hyper-parameter. We take the first λb and last λb samples of each batch to calculate the loss. Specifically, let H_prec = {h_1, h_2, ..., h_λb}, H_rest = {h_(1−λ)b+1, h_(1−λ)b+2, ..., h_b}, Y_prec = {y_1, y_2, ..., y_λb}, and Y_rest = {y_(1−λ)b+1, y_(1−λ)b+2, ..., y_b}. Then we have two matrices, Φ_batch = H_prec × H_rest^T and S_batch (obtained analogously from Y_prec × Y_rest^T), where × represents the matrix multiplication operation. With this design, Eq. (4) reduces to the same expression evaluated over the entries of Φ_batch and S_batch. Eventually, a quantization loss is introduced to refine the generated hash codes, which can be represented as L_q = Σ_i ||h_i − sgn(h_i)||², where sgn(h_i) denotes the binarized code of h_i. Combining the metric loss and the quantization loss by weighted sum yields the total loss function of our method: L = L_m + μ L_q, where μ is a hyper-parameter obtained through grid search in our work.
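The following sketch implements the loss as reconstructed in Eqs. (2)-(4) above, including the λ-sliced batch computation and the quantization term. The exact pairing and weighting of terms follows that reconstruction and should be treated as an assumption rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def dmmvh_loss(h, y, lam=0.5, w_d=1.5, mu=0.5):
    """Deep metric + quantization loss on the first/last lam*b batch slices.
    h: (b, K) relaxed hash codes (tanh outputs); y: (b, C) multi-hot labels."""
    b = h.size(0)
    k = max(1, int(lam * b))
    h_prec, h_rest = h[:k], h[-k:]                  # H_prec, H_rest
    y_prec, y_rest = y[:k].float(), y[-k:].float()  # Y_prec, Y_rest
    phi = h_prec @ h_rest.t()                       # Phi_batch: inner products
    s = (y_prec @ y_rest.t() > 0).float()           # s_ij = 1 if labels overlap
    soft = F.softplus(phi)                          # stable log(1 + e^phi)
    # similar pairs: log(1+e^phi) - phi; dissimilar pairs: w_d * log(1+e^phi)
    l_m = (s * (soft - phi) + w_d * (1 - s) * soft).mean()
    l_q = ((h.abs() - 1) ** 2).mean()               # push codes toward +/-1
    return l_m + mu * l_q

# usage with the hyper-parameters quoted in the experiments section
h, y = torch.tanh(torch.randn(32, 128)), (torch.rand(32, 24) > 0.9).float()
print(dmmvh_loss(h, y, lam=0.5, w_d=1.5, mu=0.5))
```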
III. EXPERIMENTS Extensive experiments are conducted to evaluate the proposed DMMVH method against eleven state-of-the-art multi-view hashing methods on three public benchmark datasets. Datasets: Three widely used datasets are adopted: MIR-Flickr25K [8], NUS-WIDE [9], and MS COCO [10]. These datasets have been widely used for evaluating multimedia retrieval performance, and their statistics are summarized in Table I. Evaluation Metric: We utilize the mean Average Precision (mAP) as the evaluation metric. Implementation Details: Our implementation is on the PyTorch platform. For the feature extraction backbones, we use pre-trained models, specifically ResNet-50 and BERT-base. The dropout probability is set to 0.1 to improve the generalization capability. We employ the AdamW optimizer with an initial learning rate of 1 × 10^-5 and set β1 = 0.9, β2 = 0.999. The hyper-parameter λ of the deep metric loss is 0.5, the combination coefficient μ of the total loss function is set to 0.5, and the loss weight w_d of dissimilar sample pairs is 1.5. mAP: The results are presented in Table II, which show that DMMVH is overall better than all the compared multi-view hashing methods by a large margin. For example, compared with the current state-of-the-art multi-view hashing method FGCMH, the average mAP score of our approach increases by 3.51%, 9.58%, and 13.85% on MIR-Flickr25K, NUS-WIDE, and MS COCO, respectively. That is, deep metric learning can indeed enhance the discriminative capability of hash codes. Hash Code Length: Intuitively, a longer hash code should preserve more semantic information and achieve better precision. We therefore study the effect of hash code length on multimedia retrieval mAP, learning hash codes of the same length for the different methods. From Table II, we notice that the mAP of our method increases as the hash code length grows. On the MS COCO dataset, our method obtains a performance improvement of 5.25% when the hash code length grows from 16 bits to 128 bits; the experiments on the other datasets lead to the same conclusion. However, some previous methods show a precision degradation when more hash bits are added, which indicates that these methods cannot scale well to hashing tasks with longer hash codes. On the contrary, our results demonstrate that the proposed method shows a noticeable improvement in mAP as the length increases. Finally, the experiments on the hyper-parameters are detailed in the appendix. B. Ablation Study Experiment Settings: To evaluate the effectiveness of our method, we perform an ablation study with different settings and report the performance: • DMMVH-metric: the quantization loss is removed. • DMMVH-quant: the metric loss is removed. • DMMVH-image: only the image view is used. • DMMVH-text: only the text view is used. The results are reported in Table III. Starting with the loss function, the quantization loss alone cannot perform any optimization on the embeddings: the method retrieves data randomly, leading to terrible mAP across all the tasks. The deep metric loss, on the contrary, helps the method learn the embedding well. We notice that DMMVH-metric is slightly worse than the full method due to the lack of the binarization constraint. From the view aspect, DMMVH-text is barely better than DMMVH-quant, and DMMVH-image outperforms DMMVH-text in all tasks by a large margin, indicating that the image features contain more information than the text features. With concatenated multi-view features, our method already outperforms the state-of-the-art methods, but Context Gating further improves mAP. In addition, the comparison experiment with the old backbone network is detailed in the appendix. C. Convergence Analysis We conduct experiments to validate the convergence and generalization capability of DMMVH, running hash benchmarks on the MIR-Flickr25K dataset at different code lengths. The results are shown in Fig. 3, which presents the training loss and test mAP. As the training goes on, the loss gradually decreases; after 500 epochs, the loss becomes stable, which implies a local minimum is reached. For the test performance, the mAP goes up rapidly at the beginning of training and stays stable after 100 epochs. With further training, no degradation is observed in the test mAP, which indicates good generalization capability. Similar convergence results are observed on the other datasets. D. mAP@K and Recall@K Fig. 4 shows the mAP@K and Recall@K curves with an increasing number of retrieval results on the MIR-Flickr25K dataset at different code lengths. The mAP of the four cases slightly decreases as K increases, while the recall curve shows rapid linear growth. This tendency suggests that our method performs well in retrieval tasks. Typical users only pay attention to the first few retrieval results, and our method has even higher precision in this scenario. Experts tend to go through more results than typical users; our approach provides linearly growing recall as the number of retrieval results grows, so experts can expect consistent, high-quality results during their searches. To recap, DMMVH can deliver satisfying retrieval results for different user groups.
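Since mAP@K and Recall@K drive the discussion above, a short sketch of how these metrics can be computed from Hamming-ranked retrieval lists may be useful. The helper below is illustrative, not taken from the paper's evaluation code; it assumes a precomputed relevance matrix.

```python
import numpy as np

def map_recall_at_k(query_codes, db_codes, rel, k):
    """query_codes, db_codes: +/-1 arrays of shape (nq, K) and (nd, K).
    rel[i, j] = 1 if database item j is relevant to query i (shared label)."""
    # Hamming distance via the inner-product identity: d = (K - <q, d>) / 2
    dist = (query_codes.shape[1] - query_codes @ db_codes.T) / 2
    order = np.argsort(dist, axis=1)[:, :k]       # top-K ranked lists
    aps, recalls = [], []
    for i, idx in enumerate(order):
        hits = rel[i, idx].astype(float)
        if hits.sum() == 0:
            aps.append(0.0)
        else:
            prec = np.cumsum(hits) / np.arange(1, k + 1)
            aps.append((prec * hits).sum() / hits.sum())  # AP@K
        recalls.append(hits.sum() / max(rel[i].sum(), 1))  # Recall@K
    return np.mean(aps), np.mean(recalls)

# toy usage with random codes and labels
rng = np.random.default_rng(0)
q, d = np.sign(rng.normal(size=(5, 32))), np.sign(rng.normal(size=(100, 32)))
rel = rng.integers(0, 2, size=(5, 100))
print(map_recall_at_k(q, d, rel, k=10))
```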
Under multiple experiment settings, it delivers up to 15.28% performance gain over the state-of-the-art methods. In the experiments we noticed some remaining issues; for example, the performance gain becomes less significant as the length of the hash code increases. We will work on these issues to improve the proposed method further.
Conceptual design of high performance Unmanned Aerial Vehicle Nowadays, Unmanned Aerial Vehicles (UAVs) are widely used in almost every field, from military to commercial applications. The use of UAVs has decreased the burden on humans, reducing the manpower needed and the risks faced during critical conditions (e.g., in war fields). Therefore, the demand for the development of unmanned aerial vehicles is high. Interpreting the conceptual design data of UAVs is difficult because of the lack of availability of their data sheets. This paper develops a conceptual design process for a high-performance UAV that carries a maximum payload of about 300 kg, can travel a maximum range of 900 km, and has a maximum endurance of about 50 hours; these three parameters are considered the main requirements. By drawing a constraint diagram, the feasible design space for the aircraft was frozen, after which initial sizing of the wing, wing airfoil selection, and a suitable power-plant selection were carried out. Fuselage design was carried out based on the available literature on existing aircraft. Propeller design was done to match the thrust requirements obtained from the constraint diagram. Empennage design was done to achieve the desired static margin of the aircraft. This process completes the conceptual design of the UAV, and the designed aircraft meets the requirements. The outcome of this paper enhances the understanding of the conceptual design process for academicians as well as researchers. INTRODUCTION Nature has been the design driver for the majority of man-made inventions; one such invention is the aircraft. Watching birds fly in the sky invoked the desire of flight in man. From the days of the Wright brothers' first flight at Kitty Hawk in 1903 [1] up to the current day, design and development in the aviation industry have improved tremendously. Nowadays the unmanned aerial vehicle (UAV) is a fascinating concept in the aeronautical field and is used in almost all fields. Ultra-light UAVs are meant to perform complex manoeuvres in the combat field, as they give better performance compared to others. Due to this fact, the usage of UAVs is getting more important in both combat and commercial fields, with the ability to perform tasks at a feasible cost. Designing a high-performance UAV is a challenging process, as it involves meeting all the design constraints or customer requirements while keeping the lowest possible take-off mass. Moreover, the maximum take-off mass of UAV-category aircraft should not be large, which calls for the use of advanced lightweight engineering materials and makes the design process more interesting and challenging. The manoeuvring capability targeted for the current UAV design is +5g to -4g turns; hence the current plane will be a trend-setting design with such high manoeuvring capabilities. This paper provides a conceptual design of a fixed-wing unmanned aerial vehicle. METHODOLOGY The requirements for the UAV were stated after thoroughly reviewing the literature before starting the design. Several steps are carried out to design the UAV, starting with plotting the constraint diagram based on the posed requirements in order to select the design space. Initial sizing of the aircraft parts such as the fuselage and empennage was carried out based on thumb rules and empirical formulae available in the literature. Based on the W/S value from the constraint diagram, the wing airfoil selection was made in order to achieve the maximum Cl, and the wing design was carried out.
Propeller selection is made based on the T/W value obtained in the design space of the constraint diagram, and wing airfoil analysis was carried out using low-fidelity inviscid codes to ensure that the wing designed from this airfoil produces the desired lift throughout the mission profile. DESIGN PROCESS Design specification/requirements - In order to design an aircraft there should be requirements that the designed UAV must meet. During the design process a few parameters are assumed, and a few are chosen from the literature survey. For the current design, the RQ-1 Predator was taken as the reference aircraft and a few specifications were adopted from its design [2]. Generation of constraint diagram Constraint diagram analysis was carried out by expressing each of the design constraints as a function of thrust loading (T/W) and wing loading (W/S). Constraint boundaries were then set up on a graph of thrust-to-weight ratio (T/W) against wing loading (W/S), and the space bounded by the constraint boundaries was identified as the design space. After four iterations, the most feasible combination of T/W and W/S values was chosen for power-plant selection and wing design, respectively [3]. Take-off performance The take-off runway length (Sg) considered is 500 m; for this, W/S values and the corresponding T/W values were calculated using the take-off ground-run relation of [4]. To be on the safer side, a take-off distance of 250 m was used to estimate the T/W values. Since the information regarding the wing airfoil and the type of high lift device(s) used on the plane was not available at this point of the design stage, the maximum lift coefficient of the plane during take-off (Cl max) was assumed to be 1.6 [4]. Landing performance For the given landing runway distance, the W/S value was calculated using the landing-distance relation of [4]. To be on the safer side, a landing distance of 100 m, half of the actual design constraint specified (200 m), was used to estimate the W/S value. Since the information regarding the wing airfoil and the type of high lift device(s) was not available at this point of the design stage, the maximum lift coefficient of the plane during landing (Cl max, landing) was assumed to be 1.6. Cruise and ceiling performance Since there was no design constraint specified on the cruise Mach number, the plane was assumed to cruise at an altitude of 7000 m above sea level at 55 m/s. The service ceiling of the plane was assumed to be 5 km above sea level; this choice was made based on the cruise altitude. Based on the manipulation of the energy state (kinetic and potential) and the available excess power (Ps), the cruise curve of the plane was computed [5]: Ps = dh/dt + (V/g)(dV/dt). Based on knowledge of a similar class of existing UAVs, the wing aspect ratio for the plane was assumed to be 8 [2] and the mass fraction of the plane at cruise was assumed to be 0.993. Since CFD results were not available at this point of the design, the zero-lift drag coefficient (CD0) of the plane was assumed to be 0.03. To ensure that the lift distribution over the wing is close to that of an elliptic wing (ideal), the span efficiency factor (e) of the wing was assumed to be 0.9. Considering constant-altitude, constant-speed cruise, the T/W values for various W/S values were then calculated.
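Because the paper's take-off and landing formulas were lost in extraction, the following sketch evaluates representative constraint curves using standard textbook forms (the exact expressions in refs [4] and [5] may differ); the parameter values follow those quoted in the text, and the ISA densities are assumptions.

```python
import numpy as np

# Sketch of a T/W vs W/S constraint diagram with assumed textbook relations.
rho0, rho_cr = 1.225, 0.59        # sea-level and ~7 km air density (assumed ISA)
CD0, AR, e = 0.03, 8.0, 0.9       # zero-lift drag, aspect ratio, span efficiency
V, Clmax, g, Sg = 55.0, 1.6, 9.81, 250.0
ws = np.linspace(50, 600, 12)     # candidate wing loadings, N/m^2

q = 0.5 * rho_cr * V**2           # cruise dynamic pressure
tw_cruise = q * CD0 / ws + ws / (q * np.pi * AR * e)   # steady level flight
tw_takeoff = 1.21 * ws / (g * rho0 * Clmax * Sg)       # simplified ground-run form
# (Landing typically caps W/S directly rather than producing a T/W curve.)

for wsi, twc, twt in zip(ws, tw_cruise, tw_takeoff):
    print(f"W/S={wsi:6.1f} N/m^2  T/W cruise={twc:.3f}  T/W take-off={twt:.3f}")
```

Any (W/S, T/W) point lying above all such curves satisfies the corresponding constraints, which is how the design space is bounded in the constraint diagram.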
Carpet plots - design trade-off The number of design variables to be considered during the design of an aircraft is at least 10 and sometimes more than 50 [3]. As a designer it is a difficult task to sort through all possible combinations in a systematic fashion to find the most feasible one. In this regard, trade-off studies were carried out as part of the design in order to determine the range of feasible T/W and W/S combinations to be incorporated in the current design. Several design parameters such as the wing aspect ratio, flying speed, flying altitude, wing area, and wing loading were considered, and multiple combinations of these values were presented on trade-study charts called carpet plots. From the constraint diagram (Figure 4), the design point chosen is T/W = 0.23 and W/S = 300 N/m2. Design of wing The W/S value corresponding to the design point chosen from the constraint diagram was used to perform the initial sizing of the wing. The wing loading (W/S) obtained from the constraint diagram was 300 N/m2. Although the maximum take-off mass of the plane is 500 kg, a value of 550 kg was considered to make sure the designed plane would be able to accommodate an increase in MTOW; the wing area (S) was then calculated as S = W/(W/S) = (550 × 9.81)/300 ≈ 17.99 m2. The aspect ratio (AR) of the wing was assumed to be 8 after the analysis of the carpet plots. Using the aspect ratio and wing area values, the wing span (b) was calculated as b = √(AR × S) ≈ 12 m. The plane is expected to cruise at 7 km altitude above mean sea level with a cruise speed of 55 m/s. The taper ratio (λ) for a UAV ranges between 0.2-0. A mid-wing configuration was chosen, since the current design is a high-performance plane; the wing spar can be cut in half in order to save space inside the fuselage. The aerodynamic advantages of a mid-wing configuration are that it is aerodynamically streamlined compared to the other configurations and that it has less interference drag than low-wing or high-wing configurations [6]. A dihedral angle for the wing was considered keeping in mind the lateral stability of the aircraft: lateral stability is mainly the tendency of an aircraft to return to its original trimmed level-wing flight condition if disturbed by a gust that rolls it around its longitudinal axis. By looking at the surveyed data of existing aircraft, a dihedral angle of 2° was considered. The cruise mass of the plane was estimated considering the take-off mass and the cruise mass fraction [7]: Wcruise = WTO × β × 9.81 = 5034 N. At the cruise condition, the design lift coefficient of the plane was calculated as (Cl)plane = Wcruise/(wing area × q) = 0.6745. The design lift coefficient that the wing has to achieve at cruise was approximated as (Cl)wing = (Cl)plane/0.95 = 0.7101, and the design lift coefficient that the wing airfoil has to achieve at cruise was further approximated as (Cl)airfoil = (Cl)wing/0.9 = 0.7889. Details of the high lift devices were not decided at this point, and hence the maximum lift coefficient that the airfoil should yield was unknown. So, NACA airfoil sections that yield an ideal lift coefficient of at least 0.8 with a maximum Cl max value were looked for. Though the calculated ideal lift coefficient value was 0.78, once the entire wing is built the effective ideal lift coefficient reduces due to 3D effects; hence an airfoil with a higher ideal lift coefficient than the estimated value was chosen. NACA 631-308, which yields an ideal lift coefficient of 0.854 at an 8° wing setting angle, was chosen. The reasons for the choice of this airfoil are that it has the lowest Cd min value of 0.00592, a (Cl/Cd) max value of 140, and moderate stall quality. The surface model of the top view of the full wing was generated in the Catia V5 environment and is represented in Figure 1.
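A short sketch reproducing the wing-sizing arithmetic above from the stated design point; all inputs are the values quoted in the text.

```python
import math

# Wing sizing from the chosen design point (W/S = 300 N/m^2, MTOW taken
# as 550 kg, AR = 8), reproducing the numbers in the text.
mtow_kg, g = 550.0, 9.81
ws, ar = 300.0, 8.0

W = mtow_kg * g                 # take-off weight, N
S = W / ws                      # wing area  ~= 17.99 m^2
b = math.sqrt(ar * S)           # wing span  ~= 12 m

# Design lift coefficient chain (3D wing and airfoil corrections as in text)
cl_plane = 0.6745               # value quoted at cruise
cl_wing = cl_plane / 0.95       # ~= 0.710
cl_airfoil = cl_wing / 0.9      # ~= 0.789 -> choose airfoil with Cl_i >= 0.8

print(f"S = {S:.2f} m^2, b = {b:.2f} m")
print(f"Cl wing = {cl_wing:.3f}, Cl airfoil = {cl_airfoil:.3f}")
```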
Selection of power plant The most important criterion in selecting the engine type relates to the aircraft performance. The plane is designed to cruise at 2.5 km altitude above sea level with a cruise speed of 55 m/s (Mach 0.1664). With a constraint on the absolute ceiling of the plane of 5 km above mean sea level, turbo-prop and piston-prop engines are the most suitable means of propulsion [8], although there is a weight penalty in using a turbo-prop due to the additional reduction gear unit. Furthermore, piston engines have better propulsive efficiency at lower altitudes than turboprop engines [9]. From the constraint diagram, the thrust-to-weight ratio (T/W) corresponding to the most feasible design point was identified as 0.23. The total take-off thrust the engine has to generate was calculated as T = (T/W) × WTO = 0.23 × (550 × 9.81 N) ≈ 1241 N. Design of propeller The constraint diagram shows the thrust required for the aircraft to operate at the given conditions, and a suitable propeller that can deliver the required thrust has to be selected from the available manufacturers. The details of propeller geometry were sought from manufacturers such as Hartzell®, Airmaster®, etc.; however, specifications such as blade angles and thrust generated were not provided by the manufacturers, so it was decided to design a propeller. The propeller was designed based on certain constraints. The RPM of the propeller at full power was obtained from the engine manufacturer. Another constraint was the tip Mach number, which was not to exceed 0.8, as there would be huge shock losses if the flow reaches the transonic regime [11]. The power available from the engine was also fixed. The input data for the design of the propeller are listed below: Rotational speed = 2000-2600 (max) rpm; Power available from engine = 260 kW; Relative tip Mach no. not to exceed 0.8; Blade loading = 16-24 kW/m2. The design of a propeller usually starts with the selection of its diameter. After an initial survey of existing propellers on UAVs, it was decided that a three-bladed propeller of 2.4 m diameter would be optimum. The blade loading factor, which gives an estimate of the power absorbed by the propeller per unit blade area, should be around 16 to 24 kW/m2: Blade loading × n = (Power available from engine)/(Propeller disc area), where n = number of blades. However, it was found that for a propeller diameter of 2.4 m and three blades, the blade loading factor was 24. In order to be on the safer side the blade loading had to be reduced. To reduce the blade loading, either the power absorbed from the engine by the propeller has to be reduced or the propeller disc area has to be increased. Since the propeller and the engine output shaft are directly coupled by a common shaft, reducing the power (lowering the RPM) hinders the thrust required for the airplane. Increasing the area of the propeller is an option, but doing so increases the diameter, and an increase in diameter in turn raises the tip Mach number, which is not to exceed 0.8 [12]. Therefore decreasing the diameter while increasing the chord length is preferred. Thus, a four-bladed propeller of 2 m diameter was chosen.
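The blade-loading and tip-Mach checks described above can be scripted as below. The speed of sound is an assumed sea-level value, and the blade loading is computed exactly from the formula as written, which yields a slightly different value than the 24 kW/m2 quoted in the text for the 2.4 m, three-blade case.

```python
import math

P_kw, V, a = 260.0, 55.0, 340.0   # engine power (kW), flight speed, speed of sound (assumed)

def blade_loading(p_kw, d, n_blades):
    """Power absorbed per blade per unit disc area (kW/m^2), per the relation above."""
    disc_area = math.pi * (d / 2) ** 2
    return p_kw / (disc_area * n_blades)

def tip_mach(rpm, d, v=V, a_snd=a):
    """Helical tip Mach number: rotational tip speed combined with flight speed."""
    vt = math.pi * d * rpm / 60.0
    return math.sqrt(vt**2 + v**2) / a_snd

for d, n in [(2.4, 3), (2.0, 4)]:  # the two candidates discussed in the text
    print(f"D={d} m, {n} blades: loading={blade_loading(P_kw, d, n):.1f} kW/m^2, "
          f"tip Mach @2600 rpm={tip_mach(2600, d):.2f}")
```

As the printout shows, the tip Mach limit is the binding constraint at the maximum RPM, which is why the text trades diameter against chord length rather than simply enlarging the disc.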
Balance is another factor for smooth operation of the propeller: three-bladed propellers are statically imbalanced, which tends to bend the shaft to one side. Hence a four-bladed propeller becomes the right choice. From the literature [2], the Eppler 193 and a combination of NACA series sections near the hub have been used for micro air vehicles; since those operate at low Reynolds numbers, such aerofoils cannot be used here. With further investigation from [6] into studies carried out on larger-diameter propellers, RAF 6 (Royal Air Force) and Clark-Y aerofoils have been used from hub to tip. Considering these data, the Clark-Y was selected for this design [13]. Design of fuselage The primary design objective of the fuselage of the current design is to accommodate the payload, which may include weapons or defence materials, along with a sufficient amount of fuel/batteries for the required flight. The fuselage configuration considered for the current design was chosen by looking at the fuselage configurations of existing aircraft of the same class. Sizing of fuselage The current design is a UAV aircraft. If a circular cross section were incorporated, the structure at the tail end of the fuselage would become weak for empennage attachment. As a design choice, the fuselage cross section was selected to be non-circular in shape as per a NASA report [14]. A rectangular cross section with rounded corners was employed in order to avoid sharp corners in the geometry and hence reduce the probability of flow separation at moderate angles of attack or sideslip. The fuselage geometry was determined based on the position of the cockpit and the length of the moment arm required to generate the tail moment. The overall fuselage length (Lf) depends on two parameters, namely the fuselage width (W) and fuselage height (H); the rear portion provides attachment for the tail surfaces. The slenderness ratio for the present design was estimated as: Slenderness ratio = Lf/Df = 7200/1300 = 5.54. Design of horizontal tail The horizontal tail is the stabilising surface that provides longitudinal (pitch) stability for the plane. The range of values of the non-dimensional cg limit (Δh), which is the difference between the most forward and the most aft position of the aircraft cg, is 0.1 to 0.3 for an aerobatic plane. Therefore, at cruise, a reasonable assumption for the value of h (Xcg/mac) at the early stage of the horizontal tail design would be about 0.2 [2]. Since the aircraft cg moves during cruising flight, the horizontal tail airfoil must be able to create sometimes a positive and sometimes a negative lift. This requirement necessitates that the horizontal tail behave similarly for both positive and negative angles of attack; for this reason, a symmetric airfoil section is a suitable candidate for a horizontal tail. Also, knowing that the t/c ratio of horizontal tail airfoils used in aerobatic planes should be between 8-10%, from the survey of existing aircraft of the same class the NACA 0009 airfoil was chosen for the horizontal tail. The wing-fuselage combination pitching moment coefficient at the zero-lift condition (Cm0_wf) was calculated following [15]. The horizontal tail volume coefficient (VH) and the tail efficiency factor (ɳh) were assumed to be 0.6 and 1, respectively, at this starting point of the tail design [4]; these values can be refined in successive iterations of the tail design process.
The optimum tail arm length (lopt) was estimated using the relation lopt = Kc √((4 × c̄ × S × VH)/(π × Df)), where Kc is a correction factor varying between 1 and 1.4 depending on the aircraft configuration. Kc = 1 is used when the aft portion of the fuselage has a conical shape; as the shape of the aft portion of the fuselage departs further from a conical shape, the Kc factor is increased up to 1.4. For the current aircraft, a tadpole-configuration fuselage was used, so a Kc value of 1.1 was assumed. Thus, lopt was calculated as: lopt = 1.1√((4*17.985*12.72*0.6)/(π*1.3)) = 4.049 m. The horizontal tail area (Sh) was then calculated from the tail volume coefficient as Sh = VH × S × c̄/lopt. The horizontal tail lift coefficient (CLh) required at cruise was calculated using the trim equation Cm0_wf + CLw(hcg − hac) − ɳh VH CLh = 0, giving CLh = -0.0226. As a design choice, the tail aspect ratio was assumed to be two-thirds of the wing aspect ratio [6]: ARh = 2/3 × ARw = 2/3 × 9 = 6. Design of vertical tail The primary purpose of the vertical tail is to maintain the aircraft's directional stability and directional trim. The current aircraft has symmetry about the xz plane, so directional trim is naturally maintained, but there is always a slight asymmetry in the aircraft's xy plane. The vertical tail volume coefficient (Vv) and the tail efficiency factor (ɳv) were assumed to be 0.03 and 1, respectively, at this starting point of the tail design [4]; these values can be refined in successive iterations of the tail design process. The vertical tail arm length (lvt) was estimated using the corresponding tail-volume relation. The vertical tail aspect ratio was assumed to be 1 [6]. Using the vertical tail aspect ratio and area values, the tail span (bv) was calculated as bv = √(ARv × Sv). The design goal of the high lift devices (HLD) is to maximize the lifting capability of the wing. Design of landing gear Looking at the literature and as a design decision, a tail-dragger type landing gear system was incorporated in the present design [5, 16].
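A sketch of the empennage sizing chain using the tail-volume relations above. The mean chord is derived from S and b as an assumption, and the resulting tail arm differs slightly from the paper's quoted 4.049 m, whose intermediate numbers appear garbled in extraction; the vertical tail is assumed to share the horizontal tail arm.

```python
import math

# Empennage sizing from tail-volume coefficients (Sadraey-style relations).
S, b, Df = 17.985, 12.0, 1.3      # wing area, span, fuselage diameter
VH, Vv, Kc = 0.6, 0.03, 1.1       # tail volume coefficients, tadpole correction
c_bar = S / b                     # mean chord ~ 1.5 m (assumed from S and b)

l_opt = Kc * math.sqrt(4 * c_bar * S * VH / (math.pi * Df))  # optimum tail arm
Sh = VH * S * c_bar / l_opt       # horizontal tail area
Sv = Vv * S * b / l_opt           # vertical tail area (same arm assumed)
bv = math.sqrt(1.0 * Sv)          # ARv = 1 -> vertical tail span

print(f"l_opt = {l_opt:.2f} m, Sh = {Sh:.2f} m^2, Sv = {Sv:.2f} m^2, bv = {bv:.2f} m")
```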
RESULTS AND DISCUSSION Constraint diagram: The combination of constraints graphically identifies the feasible design space. In the current design, the positive-g manoeuvre, maximum velocity, and landing performance curves were observed to be the design drivers. Any combination of T/W and W/S values chosen within this region of the design space will meet all the performance requirements. The design point chosen for the current UAV design has been highlighted with black lines. This particular point was chosen because it corresponds to the minimum value of T/W (to keep the power-plant size as small as possible) and the minimum value of W/S (to ensure the manoeuvring capability of the wing). At the design point chosen, T/W = 0.23 and W/S = 300 N/m2; these values were used for power-plant selection and initial sizing of the wing, respectively. Wing design Based on the outputs of the constraint diagram, the wing design was carried out as an iterative process. NACA 631-308 was chosen as the wing airfoil; after initial sizing of the wing, the geometry details of the wing for the last iteration were tabulated. Airfoil analysis Initially, panel code analysis was done using the JavaFoil applet with 101 panels at the Reynolds number corresponding to cruise (527909), with CalcFoil as the stall model. The lift curve obtained is shown in Figure 5, and it was verified that the design lift coefficient of 0.8 was achieved at the Reynolds number corresponding to the cruise phase. The wing setting angle (iw) was initially determined to be the angle corresponding to the airfoil ideal lift coefficient. From Figure 4, the airfoil ideal lift coefficient is 0.8, and the angle corresponding to it was identified as 1.2°. CONCLUSION A typical design procedure for a high-performance unmanned aerial vehicle was carried out using empirical relations, and data sheets were created based on the specifications. The conceptual design process proceeds step-wise, giving a brief overview of how industries build and design aircraft to the requirements posed by customers. Each part of the aircraft was designed, which helps in understanding the conceptual design process. The designed aircraft meets the requirements to perform at altitudes of about 7000 m, with about 8-11 hours of endurance, and to perform -5g and +6g turns, which helps in combat flights.
Relationship between differentially expressed mRNA and mRNA-protein correlations in a xenograft model system Differential mRNA expression studies implicitly assume that changes in mRNA expression have biological meaning, most likely mediated by corresponding changes in protein levels. Yet studies into mRNA-protein correspondence have shown notoriously poor correlation between mRNA and protein expression levels, creating concern for inferences drawn from mRNA expression data alone. However, none of these studies have specifically examined differentially expressed mRNA. Here, we examined this question in an ovarian cancer xenograft model. We measured protein and mRNA expression for twenty-nine genes in four drug-treatment conditions and in untreated controls. We identified mRNAs differentially expressed between drug-treated xenografts and controls, then analysed mRNA-protein expression correlation across a five-point time-course within each of the four experimental conditions. We evaluated correlations between mRNAs and their protein products for mRNAs differentially expressed within an experimental condition compared to those that are not. We found that differentially expressed mRNAs correlate significantly better with their protein products than non-differentially expressed mRNAs. This result increases confidence in the use of differential mRNA expression for biological discovery in this system, as well as providing optimism for the usefulness of inferences from mRNA expression in general. Here, we specifically examine mRNAs that are differentially expressed across experimental conditions and their relationships to their protein products. We calculate individual gene correlations across a time series in ovarian cancer xenograft models. We identify mRNAs that are differentially expressed between drug-treated and control samples. We compare those correlations where a gene's mRNA was differentially expressed within a condition to those where the mRNA was not. We find that individual gene correlations of differentially expressed mRNA were significantly shifted to higher values, providing support for the assumption that differential mRNA expression has biological meaning. Figure 1. Genome-wide versus individual gene correlation analyses. (a) Most studies of mRNA-protein correspondence calculate a single correlation coefficient (or other correspondence metric) representing the correlation between mRNA expression and protein expression across all genes. The individual points upon which the correlation is calculated are a single value or an average over multiple samples/conditions, which do not necessarily have to be the same between mRNA and protein experiments. This correlation represents a general measure of how well mRNA and protein expression correspond across the entire genome. (b) Studies of individual gene correspondence calculate a correlation coefficient (or other correspondence metric) for every gene, representing the correlation between the expression of an mRNA and its protein product across multiple samples or conditions. The individual points upon which the correlation is calculated are a single value or averaged samples for a single condition, which must correspond to the same sample or condition for both mRNA and protein measurements. This correlation represents how well the expression levels of a particular mRNA-protein pair correspond dynamically across the samples or conditions in the experiment. Three genes in (a) and (b) and two samples in (b) are highlighted for illustrative purposes only.
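To make the distinction in Figure 1 concrete, the sketch below computes both flavours of correlation on placeholder data; the gene and condition counts match the study (29 genes, 4 conditions), but the values are random.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_genes, n_cond = 29, 4
mrna = rng.normal(size=(n_genes, n_cond))
prot = 0.5 * mrna + rng.normal(scale=0.8, size=(n_genes, n_cond))

# (a) genome-wide: average each gene over conditions, one correlation overall
r_gw, _ = pearsonr(mrna.mean(axis=1), prot.mean(axis=1))

# (b) individual gene: one correlation per gene across matched conditions
r_per_gene = np.array([pearsonr(mrna[i], prot[i])[0] for i in range(n_genes)])

print(f"genome-wide r = {r_gw:.2f}; per-gene r median = {np.median(r_per_gene):.2f}")
```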
Results We used mRNA expression data previously collected and published in Koussounadis et al. 32 and newly collected protein data from the same experiment. Protein expression was measured by quantitative immunofluorescence as described in Methods, and a representative image is illustrated in Supplementary Figure 1. Protein and mRNA expression measurements for twenty-nine genes were taken across a five-point time series (protein and mRNA 'profiles') in each of four different conditions, two xenograft models treated with two drug regimes (time points ranging from 1 to 14 days after drug treatment, see Methods). Comparisons were made between the mRNA expression at each time point in each condition and that of pooled untreated controls. An mRNA was considered differentially expressed within a condition if any of the five time points showed a significant difference from control (at FDR < 0.05, see Methods; henceforth referred to as a differentially expressed mRNA profile). We calculated individual gene correlation coefficients between mRNA and protein profiles for each condition, using the five points in the time series. Compared to non-differentially expressed mRNA profiles, correlations for differentially expressed mRNA profiles had a distribution significantly shifted towards higher values (Fig. 2; Kolmogorov-Smirnov test, p = 0.008) and a higher median (Fig. 2; Wilcoxon test, p = 0.03). However, it is possible that this shift could be due to the fact that profiles containing differentially expressed mRNAs had a larger dynamic range than those without, leading to a higher signal-to-noise ratio and thus a tighter correlation. To evaluate this possibility, we implemented two models: (1) a simple model of random samples all having an identical relationship between mRNA and protein, but with the mRNA drawn from either high- or low-variance distributions (equal to the variance of the differentially expressed and non-differentially expressed mRNA profiles, respectively), on which we calculated the same comparison statistics; and (2) a Monte Carlo model in which the mRNA-protein pairing of profiles was shuffled and the correlations recalculated (see Methods). Both models supported the significance of our results. The Kolmogorov-Smirnov and Wilcoxon statistics were significantly higher than those calculated from comparisons between high- and low-variance samples from the simple model (Kolmogorov-Smirnov D, p = 0.008; Wilcoxon W, p = 0.022). The distribution of correlations for differentially expressed mRNA profiles was shifted to significantly higher values and had a higher median than those calculated from shuffled differentially expressed mRNA profiles (Kolmogorov-Smirnov test, p = 0.03; Wilcoxon test, p = 0.04). In contrast, the correlations for non-differentially expressed mRNA profiles were not significantly different from those calculated from the same profiles shuffled (Kolmogorov-Smirnov test, p = 0.29; Wilcoxon test, p = 0.25). Because of the relatively small size of our dataset, we further investigated the robustness of our results by repeating the analysis using different FDR cut-offs for selecting differentially expressed mRNA profiles, ranging from 0.01 to 0.50 (see Methods). If the phenomenon is robust and linked to the differential expression of mRNA, we would expect to see increasing levels of significance (lower p-values) as the FDR cut-off decreases. We found precisely this expected positive relationship between p-value and FDR cut-off for both the distribution and median comparisons
(Fig. 3a,b; linear regressions: for the Kolmogorov-Smirnov test, b = 0.41, t(48) = 6.10, p < 0.0001; for the Wilcoxon test, b = 0.49, t(48) = 6.12, p < 0.0001). The exceptions to this rule were the three lowest FDR cut-off values, which had relatively high (and non-significant) p-values for both comparisons. This can be explained by a small sample size of fewer than ten differentially expressed mRNA profiles (Fig. 3c), leading to reduced power in the statistical test. For comparison with previous studies, we also explored genome-wide correlations on our data. From the genome-wide perspective, there was low correlation between all measured mRNA and protein expression levels (Fig. 4; r = 0.08, n = 579, p = 0.07). Consideration of mean values by condition (averaging across time points) or by gene (averaging across all measurements) resulted in higher correlations than using individual values, although only the correlation by condition was significant (Fig. 4; by condition r = 0.19, n = 116, p = 0.04; by gene r = 0.27, n = 29, p = 0.14). Genome-wide correlations considering only those measurements in differentially expressed mRNA profiles produced similar results (Supplementary Fig. 2). Discussion For the first time, we explore the relationship between an mRNA being differentially expressed and the mRNA-protein correlation for that gene. Compared to genes whose mRNA is not differentially expressed within a condition, we find that genes with differentially expressed mRNA have significantly higher correlations between mRNA and protein. This result provides support for the implicit assumption that differential mRNA expression reflects a difference between conditions at the functional level of proteins. We found this significant correspondence despite also finding poor correlations from a genome-wide perspective, as have other studies. In fact, our correlations (r = 0.08-0.27) are on the low end 1. It is interesting to note how the correlations increased as more averaging was performed. Most such studies collapse over several measurements to obtain one value per mRNA and per protein, either a mean 5,13,22 or maximum 17. Averaging across samples appears to approach capturing an overall correspondence of mRNA and protein levels across the genome, not reflecting the specifics of dynamic responses to regulation. However, such dynamic changes are the question of interest when looking for differential expression. Even against this background of seemingly low correspondence, our analysis revealed that the differentially expressed mRNA profiles were significantly shifted to higher correlations with their protein products than non-differentially expressed mRNA profiles. This was only a shift in distribution, not two distinct populations of profile correlations; however, this is to be expected. Among the non-differentially expressed mRNA profiles, it would be expected that some have high correlations with their protein product for reasons unrelated to the experimental condition. Among the differentially expressed mRNA profiles, there were some low and even negative correlations. Firstly, it is always possible in a genome-scale analysis to have false positives; however, false positives could not account for all the low/negative profile correlations. It is also known that mRNA and protein profiles can be decoupled in time 15,30, thus it is possible that correspondences for some of these correlations might have emerged if different time scales had been used.
Finally, it may be the case that other levels of regulation overrode the transcriptional level, providing biological fine-tuning for the specific conditions encountered by the cells. It is notable that the non-differentially expressed profiles showed correlation distributions no different from those of the same profiles randomised, whereas the differentially expressed profiles had significantly higher correlations than random. Thus, it appears that differential expression does co-occur with a tighter connection between mRNA and protein levels. We show a subtle phenomenon: genes whose transcription is modified by experimental manipulation are more likely to show concordant protein expression across these same experimental conditions, compared to genes whose transcription is not strongly influenced by the experimental manipulation. This shift in likelihood supports the view that multiple levels of regulation act together to prepare a cell most effectively for the conditions it encounters. Transcription differences can be triggered by changes in the environment. These transcription changes can be modified or overridden by translational regulation. Rates of protein degradation, as well, will influence how closely a protein level tracks its transcript. Our results suggest that when the environment triggers a transcription difference, it is more likely that the same environment triggers concordant reactions in translational regulation for a gene (or perhaps less regulation at the translational level), compared to genes whose transcription is not being dynamically changed by these conditions. It may also be the case that protein degradation could be slowed or hastened to more quickly bring protein levels in line with changing transcript levels, compared to transcripts not currently in flux. The alternative possibility, that changes in mRNA expression are irrelevant to the protein composition of a cell (essentially meaningless), makes little biological sense. However, this concern has plagued the field since low mRNA-protein correlations first began being reported. Thus, it is reassuring that our results show a real, if subtle, influence of differential mRNA expression on protein levels in our experimental system. It may at first seem concerning that the profile correlation coefficients of all genes ranged over the full -1 to 1 spectrum (r = -0.95 to 0.94, precisely). However, this wide range is perfectly in line with previous studies. Vogel et al. 30 and Shankavaram et al. 28 reported positive-shifted correlations (r = -0.1 to 0.8); however, these values were generated after several levels of filtering of the mRNA data for high autocorrelation and maximal correlation with protein, which could account for the positive shift compared to other studies. Many studies that examined individual mRNA-protein pairs across samples did not calculate correlation coefficients, but did report phenomena such as protein and mRNA changing in opposite directions 11,13,24,27. Thus, as many previous studies have noted, protein and mRNA expression are often discordant. It is encouraging that against this background, we were still able to discern a significant signal of increased concordance between mRNAs with differentially expressed profiles and their protein products, compared to mRNAs which were not actively regulated by the environment.
Analysed from the same perspective as previous studies, our results are no different: overall low correspondence between mRNA and protein expression, implying a strong contribution of post-transcriptional levels of regulation. However, by taking a different perspective, specifically addressing the implicit assumption of mRNA expression analysis that differential mRNA expression has some functional impact (most likely via protein expression), we found that differentially expressed mRNAs are more likely than non-differentially expressed mRNAs to translate into concordant behaviour at the protein level, giving confidence for the use of mRNA data for biological discovery. Our study was relatively limited, with only 29 genes examined in 4 conditions for both protein and mRNA expression (116 correlations in total). However, our analysis using differing FDR cut-offs, which showed increasingly clear differences as the cut-off decreases, indicates our result is not a statistical fluke of a particular set of correlations, but instead a general phenomenon within this experiment. Because there is no reason to suppose that our experimental system would produce greater mRNA-protein correspondence than any other system, we suggest this connection between dynamic regulation of mRNA and mRNA-protein correlation should be present in other systems. Thus, we believe that further and more extensive studies of this phenomenon are called for. Such studies should include multiple measurements of both mRNA and protein expression nested inside an experimental manipulation: this enables calculation of both individual-gene correlations (from the multiple measurements) and differential mRNA expression (from the experimental manipulation). Further exploration of the correlation between mRNA and protein under those conditions in which we apply differential mRNA expression analysis could shed more light on what information we can extrapolate from differentially expressed mRNAs. Methods Xenografts. Xenograft experiments are described in Koussounadis et al. 32. Briefly, two ovarian cancer tumour models, OV1002 and HOX424 33, were implanted subcutaneously in the flanks of adult female nu/nu mice and allowed to grow to 4-6 mm in diameter. The mice received one of two drug treatments via intraperitoneal injection on day 0, carboplatin (50 mg/kg) only or carboplatin (50 mg/kg) + paclitaxel (10 mg/kg), or were left untreated as controls. Xenografts were harvested from treated mice on days 1, 2, 4, 7, and 14, and from untreated controls on days 0, 1, 2, 7, and 14. The xenograft studies were conducted under a UK Home Office Project Licence in accordance with UK guidelines and regulations. Experiments were approved by the University of Edinburgh Animal Welfare and Ethical Review Body. mRNA expression. Full details of mRNA expression measurement and analysis are provided in Koussounadis et al. 32. Briefly, total mRNA was prepared from 10-50 mg of frozen tissue and divided into two aliquots for two technical replicates per sample. Total RNA (0.5 mg) was amplified and biotinylated, diluted to 150 ng/ml, and hybridized to Illumina HT-12 BeadChips (Illumina, San Diego, CA, USA). This platform had previously been validated via PCR in a breast cancer xenograft study 34. Expression values were processed with Bioconductor's lumi package 35.
mRNA expression was measured in 3-4 biological replicates per time point per condition (except one, which had 2 replicates; Supplementary Table 1); controls were only measured on days 0, 1, 7, and 14. Agreement among biological replicates was good as measured by Pearson correlation coefficients (mean 95% confidence interval r = 0.987-0.990; Supplementary Table 1). Protein expression. Protein was measured via immunofluorescence as previously described in Faratian et al. 36. Briefly, tissue microarrays were prepared from paraffin blocks of formalin-fixed xenograft material. Target proteins were chosen based on the expectation of showing a response in ovarian cancer treatment (e.g., representatives of the MAPK, beta-catenin, ER, cell cycle, and DNA-damage response pathways, among others). Antibodies for the proteins and the conditions used are shown in Supplementary Table 2. Pan-cytokeratin antibody was used to identify tumour cells and normal epithelial cells, DAPI counterstain to identify nuclei, and Cy-5-tyramide detection for the target, for compartmentalised (tissue and subcellular) analysis of tissue sections. Monochromatic images of each TMA core were captured at 20X objective using an Olympus AX-51 epifluorescence microscope, and the high-resolution digital images were analysed with the AQUAnalysis software to generate AQUA (Automated QUantitative Analysis) expression scores for each sample (representative images shown in Supplementary Figure 1). Protein expression was measured in 3-8 biological replicates per time point per condition (except two, which had 1 and 2 replicates each). Differential expression. Raw mRNA expression data were background corrected, variance stabilised transformed (VST), and robust spline normalised (RSN) using Bioconductor's lumi package. AQUA protein expression scores were log-transformed with base 2. For both mRNA and protein expression, log fold-change values for each time point in each drug treatment condition were calculated by comparing mean expression levels across biological replicates to pooled controls for that tumour model using the Bioconductor package limma 37. Both mRNA and protein expression exhibited similar dynamic ranges in log fold-change, from approximately -1 to 1. The output of limma was used to identify differentially expressed mRNAs, defined as those having FDR-adjusted p-values below 0.05. When evaluating varying FDR cut-offs, differentially expressed mRNAs were defined using FDR-adjusted p-value cut-offs from 0.01 to 0.50 in steps of 0.01. The mRNA dataset has been deposited in the Gene Expression Omnibus (GEO) with accession number GSE49577. The protein dataset (raw AQUA scores and limma-produced log fold-change values) is provided in Supplementary Data 1. Correlations. Pearson correlation coefficients were calculated between mRNA and protein profiles for each drug treatment condition by correlating the log fold-change values for mRNA and protein expression across the five time points in each condition. For genes with multiple mRNA probes corresponding to a single protein, the probe with the highest average expression level across time points was used 35. Genome-wide correlations were calculated on three different scales, by calculating Pearson correlation coefficients between mRNA and protein for: (1) all measurements taken, (2) the mean across all time points in a condition, and (3) the mean across all time points and conditions for each gene.
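The study performed the differential-expression call in R with limma; the sketch below is an illustrative Python analogue of the FDR step only: Benjamini-Hochberg adjustment of per-time-point p-values, with a profile flagged as differentially expressed if any of its five time points passes the cut-off.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def de_profiles(pvals, fdr=0.05):
    """pvals: (n_genes, 5) raw p-values per time point within one condition.
    Returns a boolean vector: True if the mRNA profile is differentially
    expressed at the given FDR (any time point significant after BH)."""
    flat = pvals.ravel()
    rejected, _, _, _ = multipletests(flat, alpha=fdr, method="fdr_bh")
    return rejected.reshape(pvals.shape).any(axis=1)

# toy usage: 29 genes, five time points, scan FDR cut-offs as in the paper
rng = np.random.default_rng(1)
pvals = rng.uniform(size=(29, 5))
for fdr in (0.01, 0.05, 0.50):
    print(fdr, de_profiles(pvals, fdr=fdr).sum(), "DE profiles")
```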
To evaluate whether the higher correlation coefficients from differentially expressed mRNA profiles could be due to drawing from samples with higher variance, we implemented a simple model containing a single mRNA-protein correspondence, but drawing samples from high- and low-variance populations. Correspondence between mRNA and protein was modelled as y = mx + b + e (equation (1)), where y is protein expression, x is mRNA expression, m is the slope, b is the intercept, and e is normally distributed noise. The values for m and b represent a single mRNA-protein correspondence identical for all pairs, and were calculated from a linear regression of log fold-change of mRNA against log fold-change of protein using all data. The noise e was generated by drawing from a normal distribution with mean equal to zero and standard deviation equal to the standard deviation of the residuals from the linear regression. The mRNA expression x was generated by drawing from a normal distribution with mean and standard deviation equal to those of the mRNA log fold-change values for either differentially expressed mRNA profiles (high variance) or non-differentially expressed mRNA profiles (low variance). Values for all variables used in the simple model are presented in Supplementary Table 3. A single comparison was performed using this simple model as follows. For the high-variance correlations, one mRNA profile was created by randomly generating five samples from the high-variance distribution. The corresponding protein profile was calculated by determining protein expression for each of the group of five using equation (1). A correlation coefficient was then calculated between the mRNA and protein profiles. This process was repeated as many times as correlations existed for differentially expressed mRNA profiles. Similarly, for the low-variance correlations, one mRNA profile was created by randomly generating five samples from the low-variance distribution. The corresponding protein profile was calculated by determining protein expression for each of the group of five using equation (1). A correlation coefficient was then calculated between the mRNA and protein profiles. This process was repeated as many times as correlations existed for non-differentially expressed mRNA profiles. Kolmogorov-Smirnov and Wilcoxon tests were then performed between the high-variance and low-variance correlation distributions. The actual Kolmogorov-Smirnov and Wilcoxon statistics were compared to those generated from 100,000 repetitions of a single comparison from the simple model. For each test, the number of statistics greater than or equal to the actual statistic was counted to generate a p-value. Monte Carlo shuffling of profiles. Further tests for the significance of the higher correlation coefficients from differentially expressed mRNA profiles were calculated by shuffling the mRNA-protein pairing of profiles and calculating new correlations based on the shuffled profiles. A shuffling event shuffled the mRNA label for an entire profile, maintaining the order of time points within the profile: e.g., if mRNA X was swapped with mRNA Y, the first time point for mRNA X would be swapped with the first time point for mRNA Y, the second time point for mRNA X with the second time point for mRNA Y, and so on. This led to two different analyses. In the first, mRNA-protein pairings of differentially expressed mRNA profiles were shuffled and new correlations calculated.
Shuffling and correlation generation were repeated 1,000 times to provide a large distribution of shuffled-profile correlations. The actual distribution of correlations for differentially expressed mRNA profiles was compared to this shuffled distribution using Kolmogorov-Smirnov and Wilcoxon tests. The second analysis was similar to the first, except that mRNA-protein pairings for non-differentially expressed mRNA profiles were shuffled. New correlations were calculated from these shuffled profiles. Shuffling and correlation generation were repeated 1,000 times to provide a large distribution of shuffled-profile correlations. The actual distribution of correlations for non-differentially expressed mRNA profiles was compared to this shuffled distribution using Kolmogorov-Smirnov and Wilcoxon tests.
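A minimal Python sketch of the simple model described above; the parameter values here are hypothetical placeholders for the regression-derived values in Supplementary Table 3, and scipy's rank-sum test stands in for the Wilcoxon test named in the text.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_correlations(n_pairs, mu, sd, m, b, sigma_e, n_points=5):
    # One Pearson correlation per simulated gene: five mRNA samples -> protein via eq. (1)
    r = np.empty(n_pairs)
    for i in range(n_pairs):
        x = rng.normal(mu, sd, n_points)                      # mRNA log fold-changes
        y = m * x + b + rng.normal(0, sigma_e, n_points)      # equation (1)
        r[i] = stats.pearsonr(x, y)[0]
    return r

def one_comparison(params_hi, params_lo, n_hi, n_lo, m, b, sigma_e):
    # params_hi/params_lo are (mean, sd) of the high- and low-variance populations
    r_hi = simulate_correlations(n_hi, *params_hi, m, b, sigma_e)
    r_lo = simulate_correlations(n_lo, *params_lo, m, b, sigma_e)
    return stats.ks_2samp(r_hi, r_lo).statistic, stats.ranksums(r_hi, r_lo).statistic

# Monte Carlo p-value: fraction of 100,000 simulated statistics at least as
# large as the statistic observed in the real data, e.g.
# sims = [one_comparison((0.3, 0.5), (0.0, 0.1), 40, 76, 0.4, 0.0, 0.2) for _ in range(100_000)]
# p_ks = np.mean([s[0] >= ks_actual for s in sims])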
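Similarly, a minimal sketch of the Monte Carlo profile-shuffling test, assuming the log fold-change profiles are held in (genes x time points) arrays; the array names are hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def paired_correlations(mrna, protein):
    # mrna, protein: (genes x time points) arrays of log fold-changes
    return np.array([stats.pearsonr(m, p)[0] for m, p in zip(mrna, protein)])

def shuffled_null(mrna, protein, n_rounds=1000):
    # Permute which mRNA profile is paired with which protein profile,
    # keeping each profile's time-point order intact, as described above.
    null = []
    for _ in range(n_rounds):
        perm = rng.permutation(len(mrna))
        null.append(paired_correlations(mrna[perm], protein))
    return np.concatenate(null)

# actual = paired_correlations(mrna, protein)
# ks_p = stats.ks_2samp(actual, shuffled_null(mrna, protein)).pvalue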
2017-04-01T08:57:50.662Z
2015-06-08T00:00:00.000
{ "year": 2015, "sha1": "9750ac4e0c8c8edf7b7205b60fb9b70c60b9a06c", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep10775.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e14f27e0d8ab579f1caa8a1dbc18fbddbc2c4b37", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
219970719
pes2o/s2orc
v3-fos-license
A Novel E2F1-EP300-VMP1 Pathway Mediates Gemcitabine-Induced Autophagy in Pancreatic Cancer Cells Carrying Oncogenic KRAS. Autophagy is an evolutionarily preserved degradation process of cytoplasmic cellular constituents, which participates in the cell response to disease. We previously characterized VMP1 (Vacuole Membrane Protein 1) as an essential autophagy-related protein that mediates autophagy in pancreatic diseases. We also demonstrated that VMP1-mediated autophagy is induced by HIF-1A (hypoxia inducible factor 1 subunit alpha) in colon-cancer tumor cell lines, conferring resistance to photodynamic treatment. Here we identify a new molecular pathway, mediated by VMP1, by which gemcitabine is able to trigger autophagy in human pancreatic tumor cell lines. We demonstrated that gemcitabine requires VMP1 expression to induce autophagy in the highly resistant pancreatic cancer cells PANC-1 and MIAPaCa-2 that carry activated KRAS. E2F1 is a transcription factor that is regulated by the retinoblastoma pathway. We found that E2F1 is an effector of gemcitabine-induced autophagy and regulates the expression and promoter activity of VMP1. Chromatin immunoprecipitation assays demonstrated that E2F1 binds to the VMP1 promoter in PANC-1 cells. We have also identified the histone acetyltransferase EP300 as a modulator of VMP1 promoter activity. Our data showed that the E2F1-EP300 activator/co-activator complex is part of the regulatory pathway controlling the expression and promoter activity of VMP1 triggered by gemcitabine in PANC-1 cells. Finally, we found that neither VMP1 nor E2F1 is induced by gemcitabine treatment in BxPC-3 cells, which do not carry oncogenic KRAS and are sensitive to chemotherapy. In conclusion, we have identified the E2F1-EP300-VMP1 pathway that mediates gemcitabine-induced autophagy in pancreatic cancer cells. These results strongly support that VMP1-mediated autophagy may integrate the complex network of events involved in pancreatic ductal adenocarcinoma chemo-resistance. Our experimental findings point at E2F1 and VMP1 as novel potential therapeutic targets in precise treatment strategies for pancreatic cancer. INTRODUCTION Pancreatic ductal adenocarcinoma (PDAC) is one of the most aggressive human malignancies, with an 8-9% 5-year survival rate (1). Despite progress in the knowledge of the disease, it remains a calamitous neoplasia. Up to 60% of patients have advanced pancreatic cancer at the time of diagnosis, and their median survival time is 3-6 months (2). Its poor prognosis has been attributed to a tendency toward early vascular dissemination and spreading to regional lymph nodes, and to the incapacity to make a diagnosis while the tumor is still surgically removable (2). This is caused by the aggressive nature of the disease, the lack of specific symptoms and early detection tools, and the refractory response to traditional cytotoxic agents and radiotherapy (3,4). Furthermore, pancreatic cancer cells become more malignant and survive with an extremely low blood supply (2,3). Up to now, contradictory data are available concerning autophagy activity and its regulation by specific autophagy-related (ATG) proteins in pancreatic cancer cells. Experimental evidence places autophagy as a mechanism for survival of tumor cells under adverse environmental conditions, or as a defective mechanism of programmed cell death that promotes pancreatic cancer cell resistance to treatment (5-7).
At present, the first option for resectable tumors in pancreatic cancer is adjuvant chemotherapy before surgical resection (8,9). However, most patients are in an advanced stage at the time of diagnosis, and in these cases chemotherapy is used as the first option. Regarding chemotherapy, gemcitabine, used alone or in combination with nab-paclitaxel, represents one of the most effective therapies (9,10), despite its poor efficacy in terms of overall patient survival (11). Gemcitabine works by causing apoptosis of malignant cells in pancreatic cancer (12,13). Intrinsic and acquired factors are involved in gemcitabine resistance. Several of them are related to the transport and metabolism of gemcitabine (14) or are associated with the tumor microenvironment, among others (15,16). Interestingly, recent studies highlight the importance of autophagic flux in acquiring resistance to gemcitabine in pancreatic cancer tumor cells (17-19). Macroautophagy (hereafter autophagy) is an evolutionarily conserved process that involves the sequestration and delivery of cytoplasmic components into the lysosome, where they are degraded and recycled (20). Autophagy is involved in the turnover of long-lived proteins and other cellular macromolecules. It has also been involved in the physiological responses to exercise and aging, and is implicated in different pathophysiological processes such as neurodegenerative disorders and cardiovascular and pulmonary diseases. Autophagy involves the formation of double-membrane structures, autophagosomes, around the cellular components targeted for degradation, which include large structures such as organelles and protein aggregates (29). Autophagy is mediated by a set of evolutionarily conserved gene products (termed the ATG proteins) originally discovered in yeast (30). In mammalian cells, the sequential association of at least a subset of the ATG proteins, referred to as the core molecular machinery (29), leads to autophagosome formation. VMP1 belongs to these essential ATG proteins. We have demonstrated that VMP1 expression triggers autophagy in mammalian cells even under nutrient-rich conditions (31,32). By contrast, autophagy is completely blocked in the absence of VMP1 expression (31). The VMP1 autophagy-related function requires its hydrophilic C-terminal domain of 20 amino acids (VMP1-ATGD) (32). This domain binds directly to the BH3 (Bcl-2 homology 3) motif of beclin 1 (BECN1), leading to the formation of a VMP1-BECN1-PI3KC3 (phosphatidylinositol 3-kinase catalytic subunit type 3) complex at the site where autophagosomes are generated (33,34). VMP1 is not expressed in the normal pancreas; however, its expression is activated early in the pancreas during experimental diabetes mellitus, in experimental and human pancreatitis, and in human pancreatic cancer cells (35-39). Interestingly, VMP1 prevents pancreatic cell death induced by acute pancreatitis (35). In previous studies, we found that VMP1 expression is induced by mutated KRAS in pancreatic tumor cells (28). KRAS is a member of the Ras family of GTP-binding proteins that mediate a wide variety of cellular functions including proliferation, differentiation, and survival. KRAS mutation is one of the earliest genetic events in human PDAC (40). Besides, it has been demonstrated that VMP1 down-regulation reduces the resistance of pancreatic cells to chemotherapeutic drugs such as imatinib, cisplatin, adriamycin, staurosporine, and rapamycin (41).
In colon cancer cells, we have recently shown that the HIF-1A-VMP1 autophagic pathway is involved in the resistance to photodynamic therapy (42). Therefore, we hypothesized that VMP1 is involved in the tumor cell response to chemotherapy in pancreatic cancer cells. Here, we study the role of autophagy and the molecular mechanism involved in the pancreatic tumor cell response to chemotherapy. We identified a new regulatory pathway, which is activated in highly resistant pancreatic tumor cells carrying oncogenic KRAS under gemcitabine treatment, but not in cells sensitive to chemotherapy. This molecular mechanism includes the activation of E2F transcription factor 1 (E2F1), which binds to the VMP1 promoter to enhance VMP1-mediated autophagy. We also identified the histone acetyltransferase EP300 (E1A binding protein p300) as a modulator of this promoter activity. Our data show that the E2F1-EP300 activator/coactivator complex is part of the regulatory pathway controlling VMP1 expression triggered by gemcitabine. Together these data point at E2F1 as a regulatory factor modulating VMP1-mediated autophagy in human pancreatic cancer cells and integrate this degradative cellular process into the complex network of events involved in PDAC chemoresistance. Mammalian Cell Lines, Transfections, and Treatments Human pancreatic cancer cell lines with mutated KRAS, PANC-1 (KRAS G12D) and MIAPaCa-2 (KRAS G12C), the human pancreatic cancer cell line with wild-type KRAS, BxPC-3 (43), and also a human HeLa cell line were obtained from the American Type Culture Collection. PANC-1, MIAPaCa-2, and HeLa cells were cultured in Dulbecco's modified Eagle's medium (Biological Industries) containing 10% fetal bovine serum (Natocor). BxPC-3 cells were cultured in RPMI 1640 medium (Biological Industries) containing 10% fetal bovine serum (Natocor). All cell culture media were supplemented with 100 U ml−1 penicillin and 100 µg ml−1 streptomycin (Biological Industries). All cell lines were maintained at 37 °C under a humidified atmosphere with 5% CO2. Mycoplasma contamination is periodically checked by PCR, each time a cell line enters the laboratory, and then monthly for each cell line currently in use. Cells were seeded 24 h before transfection and treatments to reach 60% confluence. Cells were transfected using FuGENE-6 Transfection Reagent (Promega) as indicated by the manufacturer. Gemcitabine (Eli Lilly) and chloroquine (Sigma-Aldrich) were prepared according to the manufacturer's instructions. Cells were treated with 20 µM gemcitabine (Eli Lilly) and/or 10 µM chloroquine (Sigma) for different times when appropriate. Cell Viability Assay by Trypan Blue Method Cells were seeded into 6-well plates at 1 × 10^5 cells per well with 3 ml growth medium. After the incubation, the membrane was washed four times with TBST and twice with TBS, then incubated with anti-rabbit HRP-conjugated (1:3,000, Amersham NA934, GE Healthcare) secondary antibody in TBST with 5% (w/v) non-fat milk for 2 h at room temperature. Next, the membrane was washed four times with TBST and twice with TBS and incubated with Pierce ECL Plus Western Blotting Substrate (Cat# 32134, Thermo Scientific) according to the manufacturer's instructions. Finally, the membrane was scanned with a C-DiGit Blot Scanner (LI-COR). We used ImageJ software to determine protein band densities. Relative densitometry normalized to actin is expressed as the mean ± SD of three different experiments.
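As a small illustration of the densitometry summary just described, a minimal Python sketch with hypothetical band densities (normalization to the actin loading control, reported as mean ± SD across three experiments):

import numpy as np

# Hypothetical ImageJ band densities from three independent experiments
target = np.array([1520.0, 1410.0, 1680.0])  # protein of interest
actin = np.array([2050.0, 1980.0, 2210.0])   # actin loading control

relative = target / actin                    # relative densitometry per experiment
print(f"relative densitometry: {relative.mean():.2f} ± {relative.std(ddof=1):.2f}")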
Fluorescence Microscopy To determine autophagy, cells were grown on glass slides in 24-well plates. They were seeded at 5 × 10^4 cells per well with 1 ml growth medium. Twenty-four hours later, cells were co-transfected with a red fluorescent protein fused to LC3 (RFP-LC3) expression vector and the indicated plasmid, and then treated with gemcitabine. Next, cells were fixed with 4% paraformaldehyde in PBS for 15 min and immediately washed several times with PBS. Samples were mounted in DABCO (Sigma-Aldrich) and observed using a Nikon Eclipse 200 fluorescence microscope (Plan 100), or an inverted Olympus FV1000 LSM with a UPLSAPO 60X oil objective (NA 1.35). We consider a cell positive for autophagy when RFP-LC3 shows punctate staining and no diffuse protein remains. The number of fluorescent cells with punctate staining per 100 fluorescent RFP-LC3-transfected cells was determined in three independent experiments. To quantify, the number of fluorescent cells with punctate staining was counted in six random fields representing 100 fluorescent cells and expressed as the mean ± SD of combined results. In silico Analysis Genomic details and characteristics of the human VMP1 gene were collected from the Ensembl Genome database. VMP1 promoter prediction was done using the Gene2Promoter utility; transcription factor binding sites and additional information were obtained using the RegionMiner, MatBase, and MatInspector tools (Genomatix Software). Supporting evidence was found using the Neural Network Promoter Prediction program (BDGP version 2.2) and FPROM (Softberry). AliBaba 2.1 and PROMO 3.0 were also used for transcription factor consensus sequence searches. Cloning of VMP1 Promoter Into Reporter Vector Genomic DNA from HeLa cells was extracted using TRIzol reagent (Invitrogen). Luciferase Reporter Assays For luciferase assays, cells were plated 24 h before transfection in 12-well plates at 1.4 × 10^5 cells per well. Cells were used at 60% confluence for pGL3.vmp1 promoter construct transfection with FuGENE6 Transfection Reagent (Promega). The ratio used in each case was 1.5 µl FuGENE6 per 1 µg DNA. When two plasmids were transfected, we used 0.4 µg of pGL3 reporter and 0.6 µg of expression vector. In shRNA assays we used 1.5 µg of shRNA and 0.5 µg of pGL3 reporter vectors. Treatments were done 24 h after transfection. In co-transfection experiments with Chromatin Immunoprecipitation (ChIP) Assay Chromatin immunoprecipitation was conducted following the Pierce Agarose ChIP kit (Thermo Scientific). Briefly, PANC-1 cells were cultured in 100 mm culture dishes at 3 × 10^6 cells per dish with 13 ml growth medium. The next day, cells were treated with 20 µM gemcitabine and after 24 h were cross-linked with 1% formaldehyde added directly into the media for 10 min at room temperature. The cells were then washed and scraped with PBS and collected by centrifugation at 800 × g for 5 min at 4 °C, resuspended in cell lysis buffer and incubated on ice for 15 min. The pellet was then resuspended in nuclear lysis buffer and sheared to fragment DNA to about 700 bp. Samples were then immunoprecipitated using an E2F1 antibody or normal rabbit IgG (Millipore) overnight at 4 °C on a rotating wheel. Following immunoprecipitation, samples were washed and eluted using the chromatin immunoprecipitation kit in accordance with the manufacturer's instructions. Cross-links were removed at 62 °C for 2 h followed by 10 min at 95 °C, and immunoprecipitated DNA was purified and subsequently amplified by PCR.
PCR was performed using seven primer sets for the seven areas containing potential E2F1 binding sites. PCR products were visualized on a 2% agarose gel. Statistical Analysis Data are expressed as mean ± SD. We performed a minimum of three independent experiments, where individual data points were based on at least technical duplicates each. Student's t-test was used for comparisons between two groups and an ANOVA test to assess more than two groups. P < 0.05 was considered statistically significant. Statistical analysis of data was performed using GraphPad Prism 6. Gemcitabine Requires VMP1 Expression to Induce Autophagy in Pancreatic Cancer Cells Carrying Oncogenic KRAS In order to analyze the time-course effect of gemcitabine treatment on VMP1 expression, we used PANC-1 and MIAPaCa-2 pancreatic tumor cells harboring a KRAS-activating mutation, which are highly resistant to chemotherapy, and BxPC-3 pancreatic tumor cells that do not carry a KRAS mutation. The relative sensitivity of pancreatic cancer cells to gemcitabine treatment was analyzed. PANC-1, MIAPaCa-2, and BxPC-3 cells were treated with 20, 200, and 2,000 µM gemcitabine. The relative number of viable cells was determined by the trypan blue dye exclusion test 24 h later. Figure 1A shows cell viability as a percentage relative to control according to the gemcitabine dose. BxPC-3 cells were more sensitive at all doses of gemcitabine analyzed (Figure 1A). However, cell viability was significantly reduced only from 200 µM gemcitabine in PANC-1 and MIAPaCa-2 cells (Figure 1A). Consequently, we used 20 µM gemcitabine for the following experiments. Next, PANC-1 and MIAPaCa-2 cells were incubated with 20 µM gemcitabine for 24 h and we evaluated VMP1 mRNA expression by qPCR assays. A significant induction of VMP1 mRNA was found after 12 and 24 h of gemcitabine treatment in PANC-1 and MIAPaCa-2 cells (Figure 2A). LC3 is currently used as a specific marker of autophagy (46). During the autophagic process, the cytosolic form of LC3 (LC3-I) undergoes C-terminal proteolytic and lipid modifications (LC3-II) and translocates from the cytosol to the autophagosomal membrane (47,48). We then analyzed VMP1 expression and LC3 lipidation to determine autophagy by western blot. Figure 2B shows that gemcitabine induced VMP1 protein expression and LC3-II formation, and therefore autophagy, in PANC-1 and MIAPaCa-2 cells. The quantification of western blots is shown in Figure 2C. Afterward, PANC-1 and MIAPaCa-2 cells were co-transfected with RFP-LC3 and shVMP1 expression vectors and treated with 20 µM gemcitabine for 24 h. A diffuse pattern of RFP-LC3 expression in the cytoplasm indicates that autophagy is not occurring; when autophagy is induced, RFP-LC3 relocates into dots indicating the formation of autophagosomes. We counted cells with punctate RFP-LC3 staining to determine autophagy (46). Figure 2D shows that gemcitabine-induced RFP-LC3 redistribution to autophagosomes was significantly reduced when VMP1 expression was down-regulated in both cell lines. Figure 2E shows down-regulation of VMP1 by shVMP1 expression in PANC-1 and MIAPaCa-2 cell lines. Therefore, gemcitabine requires VMP1 expression to induce autophagy in pancreatic tumor cells harboring a KRAS-activating mutation.
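A minimal sketch of the punctate-cell quantification and two-group comparison described above, with hypothetical counts (six random fields per condition, expressed per 100 RFP-LC3-transfected cells):

import numpy as np
from scipy import stats

# Hypothetical punctate-cell counts per 100 RFP-LC3-transfected cells,
# six random fields per condition
gem = np.array([62, 58, 65, 60, 57, 63])         # gemcitabine
gem_shvmp1 = np.array([24, 28, 22, 26, 25, 27])  # gemcitabine + shVMP1

for name, counts in [("gem", gem), ("gem+shVMP1", gem_shvmp1)]:
    print(f"{name}: {counts.mean():.1f} ± {counts.std(ddof=1):.1f} % punctate cells")

# Student's t-test between the two conditions (P < 0.05 considered significant)
res = stats.ttest_ind(gem, gem_shvmp1)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3g}")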
Gemcitabine Induces E2F1 Activation of VMP1-Mediated Autophagy Only in Pancreatic Tumor Cells That Carry Oncogenic KRAS Next, in order to analyze VMP1 expression and autophagy in cells that carry wild-type KRAS, BxPC-3 cells were incubated with 20 µM gemcitabine for 24 h. Interestingly, gemcitabine treatment did not increase VMP1 mRNA and protein levels compared to basal conditions in BxPC-3 cells (Figures 3A,B). We then analyzed LC3 lipidation to determine autophagy; western blot analysis shows that gemcitabine treatment did not induce autophagy, as evidenced by LC3-II formation, in BxPC-3 cells (Figure 3B). In addition, chloroquine treatment was used to evaluate the autophagic process by inhibiting autophagic flux (46). The use of chloroquine alone or in combination with gemcitabine treatment induced LC3-II accumulation compared to control cells or cells treated only with gemcitabine in the same proportion, respectively, indicating that gemcitabine did not interrupt autophagic flux in these cells. The quantification of western blots is shown in Figure 3C. These results suggest BxPC-3 cells have basal VMP1 expression and autophagy that are not up-regulated by gemcitabine. E2F transcription factors are involved in cell proliferation and DNA repair (49). As gemcitabine incorporation into DNA is critical for its toxicity (50), we evaluated E2F1 expression in response to gemcitabine treatment. To characterize this response of E2F1 to gemcitabine, we chose the sensitive cell line BxPC-3 and, by comparison, the cell line most resistant to treatment, PANC-1. Western blot assays show that E2F1 protein levels were significantly increased after 12 and 24 h of gemcitabine treatment in PANC-1 cells, whereas they were similar to control in treated BxPC-3 cells (Figure 3D). In consequence, E2F1 is a candidate to mediate increased VMP1 expression and autophagy in response to gemcitabine in PANC-1 but not in BxPC-3 cells. Considering that VMP1 expression induces autophagosome formation (31), we tested whether E2F1 expression is capable of inducing VMP1 expression and autophagy in PANC-1 cells. We performed qPCR on samples from cells transfected with an expression vector for E2F1. As seen in Figure 3E, E2F1 expression induced VMP1 mRNA expression in PANC-1 tumor cells. Then, PANC-1 cells were concomitantly transfected with an expression plasmid encoding the RFP-LC3 fusion protein and an E2F1 expression vector. Figure 3F shows the recruitment of the LC3 fusion protein into punctate structures in E2F1-transfected cells, in contrast to the diffuse RFP-LC3 fusion protein signal observed in control cells. Quantitation showed that recruitment of LC3 was significantly increased in cells expressing E2F1 compared to control cells (Figure 3F). These results demonstrate that E2F1 is capable of inducing VMP1 expression and autophagy in pancreatic tumor cells resistant to gemcitabine treatment. VMP1 Promoter Is Activated by Gemcitabine We have previously demonstrated that starvation conditions and rapamycin treatment induce VMP1 expression in HeLa cells (31). On the other hand, VMP1 expression is activated in PANC-1 human tumor cells carrying mutated (G12D) KRAS (28). In order to study the molecular mechanism that regulates VMP1 expression in the context of gemcitabine-induced autophagy in pancreatic tumor cells, a 3,005 bp sequence of the 5′ upstream region of the human VMP1 gene was amplified and cloned into the pGL3 reporter vector (pGL3.vmp1-3005) (Figure 4A).
Next, we analyzed whether the cloned sequence has promoter activity. In these experiments, we used HeLa cells under starvation conditions and rapamycin treatment as a positive control of autophagy induction, and the pGL3.vmp1-3005 construct was used to perform luciferase reporter assays. As a result, we found increased VMP1 mRNA expression and VMP1 promoter activity in response to starvation conditions and to rapamycin treatment in PANC-1 and HeLa cells (Figures 4B,C). Next, we analyzed whether gemcitabine was able to increase VMP1 promoter activity. Figure 4D shows that the activity of the 3,005 bp sequence of the VMP1 promoter was significantly increased when PANC-1 and HeLa cells were treated with 20 µM gemcitabine for 24 h. In order to localize the essential promoter sequence within the 3,005 bp sequence of the 5′ upstream region of the human VMP1 gene, we amplified and subcloned consecutive shorter fragments of this region. Three more constructs were created and named as follows: pGL3.vmp1-1977, pGL3.vmp1-1469, and pGL3.vmp1-883 (Figure 4E). Luciferase activity assays were performed in HeLa cells transfected with each construct and submitted to starvation, rapamycin, or gemcitabine treatment. Relative promoter activity was analyzed for each sequence (Figure 4F). Results showed a decreased activation for the pGL3.vmp1-1977 and pGL3.vmp1-1469 constructs compared to the initial 3,005 bp sequence. On the other hand, the activity was increased when the shorter pGL3.vmp1-883 construct was used. The same results were observed in all the conditions analyzed. These data suggest that essential regulation motifs involved in VMP1 expression are contained within this shorter VMP1 promoter region of 883 bp. E2F1 Directly Activates the VMP1 Promoter and Regulates VMP1 Expression Under Gemcitabine Treatment Considering that maximal promoter activity in luciferase reporter assays was observed for the pGL3.vmp1-883 construct, we analyzed this sequence using bioinformatics tools. This in silico analysis showed a putative promoter in this sequence containing a TATA box. In addition, we identified several putative binding sites for relevant transcription factors related to the cellular stress response, including E2F1 (Figure 5A). In order to know whether the VMP1 gene is a direct target of E2F1, we used a combination of transcriptional and chromatin assays to determine a possible involvement of the E2F1 transcription factor in VMP1 promoter activation. First, we performed luciferase reporter assays using the shorter sequence of the VMP1 promoter, the pGL3.vmp1-883 construct. Reporter studies demonstrate that expression of E2F1 led to an increase in VMP1 promoter activity (Figure 5B). Moreover, endogenous E2F1 can bind to the VMP1 promoter in PANC-1 cells treated with gemcitabine, as demonstrated by ChIP assay (Figure 5C). Also, gemcitabine increased VMP1 promoter activity (see shControl-transfected PANC-1 cells) (Figure 5D). Moreover, down-regulation of E2F1 expression in PANC-1 cells treated with gemcitabine significantly reduced VMP1 promoter activity (Figure 5D) and VMP1 expression (Figure 5E). These results demonstrate that VMP1 is a novel direct target of the E2F1 transcription factor under gemcitabine treatment. E2F1 and EP300 Cooperate in VMP1 Promoter Activation Transcription factors regulate gene expression through their inherent activation or repression properties, and through functional interactions with co-regulatory molecules. Here, we tested whether activation by E2F1 involves the histone acetyltransferase EP300.
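Before turning to those experiments, a minimal sketch of the relative promoter activity readout used throughout this section; the normalization scheme is not detailed in the text, so this assumes fold change of raw luminescence over the untreated control of the same construct, with hypothetical triplicate values:

import numpy as np

# Hypothetical raw luminescence values (arbitrary units), triplicate wells
raw = {
    ("vmp1-883", "control"): np.array([1.00e5, 0.95e5, 1.05e5]),
    ("vmp1-883", "gemcitabine"): np.array([2.90e5, 3.10e5, 3.00e5]),
    ("vmp1-3005", "control"): np.array([0.80e5, 0.85e5, 0.78e5]),
    ("vmp1-3005", "gemcitabine"): np.array([1.70e5, 1.60e5, 1.75e5]),
}

for construct in ("vmp1-883", "vmp1-3005"):
    ctrl = raw[(construct, "control")].mean()
    fold = raw[(construct, "gemcitabine")] / ctrl   # relative promoter activity
    print(f"{construct}: {fold.mean():.2f} ± {fold.std(ddof=1):.2f} fold over control")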
First, we analyzed whether EP300 could induce VMP1 mRNA expression in PANC-1 cells. Figure 6A shows that VMP1 mRNA expression was up-regulated in PANC-1 cells expressing EP300. Additionally, EP300 expression significantly activated the VMP1 promoter compared to control cells (Figure 6B). Then, we evaluated whether EP300 participates with E2F1 in VMP1 promoter activation. Interestingly, down-regulation of EP300 impaired E2F1-mediated activation of the VMP1 promoter in PANC-1 cells (Figure 6C). Furthermore, co-expression of E2F1 and EP300 led to a synergistic activation of the VMP1 promoter (Figure 6D). These findings demonstrate that E2F1 and EP300 cooperate in VMP1 promoter regulation. Altogether, these data show that the E2F1-EP300 activator/coactivator complex is part of the novel signaling pathway controlling the promoter activity and, consequently, the expression of the autophagy-related gene VMP1 in pancreatic tumor cells carrying oncogenic KRAS. DISCUSSION Tumor cells with a high prevalence of KRAS-activating mutations, like pancreatic cancer cells, have the distinction of a poor prognosis (4). Previously, it has been demonstrated that many human cancer cell lines with KRAS-activating mutations have basal levels of autophagy (51,52). Yang et al. (6) have shown that pancreatic cancer cells exhibit constitutive autophagy under basal conditions, which is increased in the advanced stages of PDAC and required for malignant transformation. In a previous work, we identified VMP1 as a transcriptional target of the oncogenic KRAS signaling pathway and demonstrated that KRAS requires VMP1 to induce and maintain basal autophagy in pancreatic tumor cells (28). On the other hand, VMP1-mediated autophagy is induced early above basal conditions by gemcitabine treatment in MIAPaCa-2 cells (36). Moreover, VMP1 is highly expressed in poorly differentiated human pancreatic cancer (41). In this study, we identified a regulatory pathway that is activated by gemcitabine treatment in pancreatic tumor cells carrying a KRAS mutation at amino acid position 12. This molecular mechanism involves a novel E2F1-EP300-VMP1 pathway controlling VMP1 expression triggered by gemcitabine. According to the Ensembl Genome database, the VMP1 gene is localized on chromosome 17 of the human genome. Using bioinformatics tools, we found four putative promoter regions for the VMP1 gene; among them, we particularly focused on one of 821 bp localized on the positive strand at position 57784363-57785183 of chromosome 17. The analysis also identified the proposed transcription start site (TSS) (Figure 5A) at position 501 of the promoter region. This predicted promoter would regulate the expression of a 12-exon transcript, which we consider to be the one coding for the 406-amino-acid protein VMP1. The TSS is located at the start of the first exon sequence, while the start codon (ATG) is in the second exon. Considering these data, we amplified and cloned a 3,005 bp sequence of the 5′ upstream region of the VMP1 gene, which includes the promoter area analyzed above. Using luciferase reporter assays, we demonstrated that this region is activated by previously reported VMP1 stimuli, such as rapamycin, starvation, and gemcitabine (Figure 4). We found that regulatory motifs involved in VMP1 expression are contained within an 883 bp fragment in the 5′ upstream region of the VMP1 gene.
This essential promoter sequence showed the highest activation by rapamycin, starvation, and gemcitabine treatment in luciferase reporter assays. The in silico analysis of the VMP1 essential promoter sequence revealed the presence of binding sites for several transcription factors related to cellular stress. We focused our attention on E2F1 because it has been correlated with high-grade tumors and unfavorable patient survival in PDAC (53,54). E2F1 was the first identified member of the E2F family of transcription factors. E2F activity is linked to retinoblastoma tumor suppressor (RB)-dependent cell-cycle control. E2F transcription factors are found downstream of growth factor signaling cascades, acting as transcriptional activators or repressors of genes necessary for cell-cycle progression (55). Most human tumors harbor a functionally inactivated retinoblastoma protein, resulting in deregulated E2F1, and its target genes are highly up-regulated in these transformed cells (56). This up-regulation leads to the activation of cytoplasmic (PIK3CA/AKT and RAS/MAPK/ERK) and nuclear signaling cascades related to invasion and metastasis (54). Also, activation of the E2F1 transcription factor has been shown to induce autophagy (57) by up-regulating the expression of the autophagy genes LC3, ULK1 (unc-51 like autophagy activating kinase 1), ATG5, and DRAM1 (DNA damage regulated autophagy modulator 1). The E2F1-mediated induction of LC3, ULK1, and DRAM1 is direct (through interaction with the promoter), whereas the up-regulation of the expression of ATG5 is indirect (58). In this work, we provide evidence of another autophagy-related gene that is up-regulated by E2F1. We demonstrated that E2F1 is able to induce autophagy in pancreatic tumor cells and regulates VMP1-mediated autophagy by direct binding to the VMP1 promoter. Gemcitabine inhibits DNA synthesis via a process called masked chain termination, in which gemcitabine is incorporated into DNA via DNA polymerase α, leading to the inhibition of DNA repair and synthesis (59). It has been shown that E2F1 induces genes involved in DNA repair in normal cells and in tumor cells undergoing chemotherapy through complex formation on the promoters of these genes (54). In this sense, BxPC-3 cells, which are sensitive to the dose of 20 µM gemcitabine, did not increase the expression of E2F1 or VMP1 during treatment. In contrast, PANC-1 cells resistant to treatment increased the expression of E2F1, a result consistent with previous work (60,61). Lai et al. (60) have demonstrated that PANC-1 cells respond to gemcitabine by increasing the expression of the ribonucleotide reductase M2 catalytic subunit (RRM2) through E2F1-mediated transcriptional activation, as a DNA damage response to enhance DNA repair capacity in these cells. Here, we demonstrated that E2F1 and VMP1 expression are both increased in PANC-1 cells treated with gemcitabine. Besides, pancreatic tumor cells transfected with an expression vector for E2F1 induced VMP1 expression and activated autophagy. Therefore, in this study we demonstrated another mechanism activated by E2F1 in response to chemotherapy, in which E2F1 activates VMP1 expression and autophagy as a resistance response to gemcitabine. Autophagy is constitutively activated in oncogenic KRAS-driven tumors and is necessary for the development of these tumors (6).
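As an illustration of the kind of in silico binding-site search described above, a minimal Python sketch of scanning a promoter sequence for an E2F consensus; the actual analysis used position-weight-matrix tools (MatInspector, PROMO, AliBaba), and the IUPAC consensus TTTSSCGC used here is a commonly cited E2F motif, not the exact matrix from those tools:

import re

# IUPAC degeneracy codes needed for the consensus (S = C or G, N = any base)
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "S": "[CG]", "W": "[AT]", "N": "[ACGT]"}

def revcomp(seq):
    # Reverse complement of an uppercase DNA string
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def scan(seq, consensus="TTTSSCGC"):
    # Return (position, strand, match) hits on both strands
    pattern = "".join(IUPAC[c] for c in consensus)
    hits = [(m.start(), "+", m.group()) for m in re.finditer(pattern, seq)]
    hits += [(len(seq) - m.end(), "-", m.group())
             for m in re.finditer(pattern, revcomp(seq))]
    return sorted(hits)

# promoter_883 = "ACGT..."  # the 883 bp fragment (not reproduced here)
# print(scan(promoter_883.upper()))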
Previously, we identified the PI3KCA (phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha)-AKT1 (AKT serine/threonine kinase 1) pathway as the signaling pathway mediating the expression and promoter activity of VMP1 in KRAS-driven tumors (28). PANC-1 cells harbor a mutated KRAS allele (KRAS G12D), and these cells present basal expression of VMP1 and autophagy. In this work, we show that gemcitabine treatment increased that basal VMP1 expression and autophagy in PANC-1 cells. Just as GLI3 (GLI family zinc finger 3) is the transcription factor implicated in VMP1 promoter activation under basal conditions in PANC-1 cells (28), E2F1 plays that role under gemcitabine treatment. Here, we show that down-regulation of E2F1 reduced VMP1 expression and VMP1 promoter activity induced by gemcitabine. Besides, we demonstrated a direct binding of E2F1 to the VMP1 promoter under gemcitabine treatment in pancreatic tumor cells. Therefore, these data suggest VMP1 expression is regulated by different transcription factors depending on the cellular context in which autophagy is induced. Further research will be necessary to clarify whether the function or mechanisms involved in basal VMP1-induced autophagy differ from those of VMP1-induced autophagy under gemcitabine treatment. The regulation of gene expression depends on the characteristic activation/repression properties of each transcription factor, but also on functional interactions with co-regulatory molecules. EP300 belongs to the type 3 family of lysine acetyltransferases (KAT3) (62), and this enzyme is involved in the regulation of important physiological processes such as proliferation, differentiation, and apoptosis, due to its ability to function as a transcriptional coactivator interacting with and regulating more than 400 transcription factors (63,64). However, the role of EP300 in gene regulation is not restricted to its property of allowing the binding of transcription factors to large protein complexes in the transcription machinery; it also involves the KAT activity required for the acetylation of transcription factors and of the histones that allow access to chromatin (65). Thus, EP300 contributes to DNA repair through histone acetylation, facilitating the recruitment of DNA repair factors to the site of damage (66). We demonstrated that EP300 potentiates VMP1 promoter activation by E2F1. Moreover, down-regulation of EP300 impaired E2F1-mediated activation of the VMP1 promoter in pancreatic tumor cells. These findings demonstrate that E2F1 and EP300 cooperate in VMP1 promoter regulation. Our results agree with Hashimoto et al. (67), who have shown that autophagy has a cytoprotective effect against 5-fluorouracil and gemcitabine in pancreatic cancer cells. They demonstrated that inhibition of autophagy potentiates the inhibition of PANC-1 cell proliferation by 5-fluorouracil and gemcitabine. Here, we showed an induction of VMP1 expression and autophagy in PANC-1 and MIAPaCa-2 cells under gemcitabine treatment, and down-regulation of VMP1 expression significantly reduced autophagy induced by gemcitabine. These data strongly suggest that VMP1 expression is involved in PDAC chemoresistance to gemcitabine. In conclusion, we have identified the E2F1-EP300-VMP1 pathway that mediates gemcitabine-induced autophagy in pancreatic cancer cells (Figure 6E). This pathway would be activated by gemcitabine as a resistance mechanism.
Our results point at E2F1 as a regulatory factor modulating gemcitabine-induced VMP1-mediated autophagy in human pancreatic cancer cells and mechanistically integrate the autophagic degradative process into the complex network of events involved in PDAC chemoresistance. DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author. AUTHOR CONTRIBUTIONS AR designed and performed experiments, analyzed and interpreted data, prepared figures, and wrote the manuscript. CC performed experiments, contributed to interpreting data, and prepared figures. FR performed experiments. VB developed analytical tools. TO developed expression vectors. CG contributed to interpreting data. MV developed the hypothesis, designed experiments, analyzed and interpreted data, and wrote the manuscript. All authors reviewed the manuscript.
2020-06-23T13:07:23.280Z
2020-06-23T00:00:00.000
{ "year": 2020, "sha1": "9b61ec53ec3a495e67f9fda26d1a41bc07c28e25", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2020.00411/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9b61ec53ec3a495e67f9fda26d1a41bc07c28e25", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
43685881
pes2o/s2orc
v3-fos-license
(CON)TATTO. Image and Mental Imagery in Childhood Visual Impairment. Mental imagery is a familiar aspect of most individuals' mental lives, considered as an experience which occurs in the absence of actual stirrings for relevant perceptions. The primary importance of mental imagery has been demonstrated in several domains: learning and memory, reasoning and problem solving, inventive or creative thought, and rehab. The project primarily refers to the analysis of infant visual impairment for a first scientific and social approach, with specific references to significant figures who have worked on these issues for years. Research is enriched by the contribution of educators working with children with these diseases. Thus, the proposal of a freehand illustrated tactile book, originally conceived for the sighted and then reworked for the blind and visually impaired, through the humanization of fantastic creatures designed to facilitate imaginative faculties, allowing the child to project his/her physicality onto an inanimate model. Introduction In "Letter on the Blind for the Use of those who can see", Denis Diderot in 1749 takes on the question of visual perception in certain cases of blindness from birth. According to Diderot's essay, a blind person who is suddenly able to see for the first time does not immediately understand what he/she sees. A blind person must spend an amount of time establishing relations between his/her experience of forms and distances. Then the image(s) will be, thereafter, apparent to him/her by sight. The background of Diderot's essay presents a philosophical question, known as the problem of Molyneux. This issue had been published by John Locke in 1689, to be discussed later, in 1798, by Immanuel Kant in his Anthropology from a Pragmatic Point of View. When medical surgery reached the level of giving blind people eyesight, this debate found its conclusive, empirical answer by testing the patients involved. Nevertheless, it keeps the learning aspect and the understanding of reality open for the ones who stay blind. Related to blind-born children, this paper elaborates an experience of how important mental imagery becomes within an educational project [1]. Mental imagery is a familiar aspect of most individuals' mental lives; it can be considered an experience [2]. Whether mental images have a pictorial format or not, they have been the object of the well-known imagery debate. In this debate, and for several decades, two main theoretical positions have been challenging each other: the pictorial (or depictivist) theory [3] and the propositional (or descriptivist) theory [4].
According to Kosslyn's studies [5], mental images are formed through representational brain states that are genuinely "picture-like". Based on specific findings and tests that are not mentioned here, images in the mind were considered to be functionally equivalent to inner pictures, a sort of copy of previous sensory impressions [6]. Maintaining information in the form of an image, where a lot of data can be simultaneously combined, is more economical than maintaining it in the form of a propositional description [7]. Jean-Paul Sartre [8] and Ludwig Wittgenstein [9] stated clearly that no new information derives from imagery, in strong contrast to perception. Thus, perception is the input or "platform" on which mental imagery can be built. Mainly, sensorial observation and logical inferences can lead to knowledge. It has been argued that mental imagery can and does support types of inference that bring genuine and new knowledge about the real world [10]. Nielsen and Norman's study [11] on virtual interfaces can be a good example. In that study, the mental image becomes a virtual, non-real model, through which the user builds his/her interaction with an untouchable system, i.e., web pages, primarily based on previous experiences. Furthermore, the primary importance of mental imagery has been demonstrated in several domains, such as learning and memory [12], reasoning and problem solving [13], inventive or creative thought [14], and rehabilitation, to name the main ones. This topic is very complex and deeply structured. Therefore, the book Blind Vision treats it with extreme rigor. The authors tried to convey the idea that blindness is not "less", but "alter" [6]. They have considered both visual and spatial cognition, as well as the compensatory mechanisms that are activated in people with partial or complete visual impairment. The Educational Project As stated before, mental imagery occurs in the absence of actual stirrings by relevant perceptions. There are regions of the brain that process information even when there is no sensorial deficit. Regardless of its original sensorial sources, they respond to a specific object whether it is seen or touched [15]. The learning process is already complex; it involves abstract forms and requires their transformation into realistic and imaginative content. Thus, it is an advantage to refer to the general capacity of children to learn to read abstract letters and words. Regardless of the existing sense of view, the human brain needs to adapt and create new neurological connections in order to read and transform a word into a real image. Wolf stated: "there are no genes which are specialized in reading. In contrast to its components, such as seeing and talking, which are organized on a genetic level at the birth of the child, for reading there is no direct genetic program available" [16]. This means that, as any cultural invention, reading must be learned through exercise. Recent research has shown that the capacity for learning how to read depends highly on the time spent by parents and educators in reading stories as early as preschool age. Over time, the child starts to connect stories with forms and meanings, and to revive emotions, through analogue mental imagery. The latter is hardly studied by researchers in the field of educational sciences.
Thus, if these issues are to be applied to an educational project, the topic requires a degree of complexity. Many variables related to the learning and development capabilities of the different areas of the brain (linguistic, mathematical-logic, scientific, historical, anthropological, technical, and expressive) need to be taken into account, reinforcing the mental representations and cognitive experiences that determine the learning processes. The project of this paper primarily refers to the analysis of infant visual impairment, for a first scientific and social approach, with specific references to significant personalities who have worked on these issues for years. Research is enriched by the contribution of educators working with children with these problems. As stated before, it is necessary to deeply investigate the strong relationship between imagination and emotions. The emotional growth of children is not an appendix; it must be considered an essential element of evolutionary development through narrative experiences as the children grow. Furthermore, imagery enriches the real as much as rationality does, since it explains and orders it. Simply, it works in its own way. Logic seeks rules, links, and laws that can always, and in any case, be in agreement with science, while anybody can agree with it. Imagery, however, becomes imagination and develops connections that are linked to the inner world of those who use it; so they may be true and real just for this person [17]. Defining the context of intervention is crucial for the design process. In this project, it has been achieved with the valuable contribution of Adriana Rosso, an educator and expert in orientation and autonomy at the Non-Visual Documentation Center, Turin, Italy. She was helpful for the realization of structured materials based on specific rules, referring to Wilbur Schramm [18], that precisely determine the structural and constructive characteristics, such as size (Figure 1), shape, and texture. The aim was to stimulate a single type of response. Such products are generally used in nurseries and elementary schools [19], as they are dedicated to the development phases between the pre-conceptual and intuitive stages [20]. The Case Study: Hi, I Am Lini Vision is the primary sensory modality in spatial cognition and object identification [21]. As previously stated, if vision is lost, in the blind the audition, touch, and olfaction are still functioning, and they represent the channels through which a blind person gets to know the world. Touch and hearing can provide sufficient information for a blind person to generate a reliable internal representation of the external world. Thus, the proposal of a freehand-illustrated tactile book was originally conceived for sighted children and then reworked for blind and visually impaired children. Adaptation is done through the humanization of fantastic creatures designed to facilitate imaginative faculties, thus allowing the child to project his/her physicality (having the nose, the mouth, the eyes, etc.) onto an inanimate model. The major design rules described in the rich literature references have been respected. Moreover, these standards were implemented in various products; for instance, a Braille-writing tactile book, story-building cuddly toys, memory games, and puzzles specifically designed to work on tactile sensations.
In addition, a child's knowledge is based on the interaction between subject and object. The subject acts on the object and manipulates it. The game, as it is known, has a remarkable social function of interaction and sharing, and consequently becomes the means to understand the development and the imaginative capacity of the child [22]. The proposal, therefore, extends the interaction from a two-dimensional surface with relief elements (a special printing technique combining cut images and Braille) to other game elements that integrate and confirm the child's mental image. The presentation of the protagonist is made through a cuddly toy, while its activities can be expressed through table games, such as the famous memory game. The memorization is finally reinforced with a puzzle that allows children to rebuild the character by aggregating individual moments of the storytelling. This type of education is capable of adding a touch of curiosity and more shades, thanks to its collaboration of cross-channel variables, learning abilities, and advancement of cerebral areas, i.e., linguistic, logic and math, scientific, historical and anthropological, technical and expressive areas. Its closeness to growth makes its role fundamental. Thus, a communicative channel suitable for children's imagination can be effectively used as a growth tool. This project is directed to the specific period of childhood that Piaget referred to as the pre-conceptual and intuitive phases, from both social and scientific points of view, with many references to authoritative figures who have worked on these topics through the years. The whole work aims at providing a path, giving the children the chance to create a connection between the bi-dimensional and three-dimensional elements of a story. The scope of this project has to be clarified, since parameters and measurements change according to the requirements involved. Each of the aspects considered follows certain guidelines to satisfy specific needs. Dimensions, shapes, and textures are the vectors the child will use while experiencing the surrounding environment. The first introductory element is a hand-illustrated book describing the main character (see Figure 2) and his friends. The fairy tale, perhaps more than anything else, is considered a "democratic" element that reduces the distance between the two realities, normally sighted children and those with visual impairments, since imagination and fantasy are the main players. In both cases, it is possible to direct the involvement into cognitive, emotional, relational, and/or bodily dimensions, whether separately or combined. Through a proper grammar of the imaginary, it was possible to lead the reader to an intelligent use of the fairy tale, transforming it from an instrument of entertainment into a tool of human growth and personal maturity. A Braille text, relating to the images, completes the content provided.
The protagonist of the story is Lini, a groundhog living in a canary cage which floats in the air and contains fluffy pillows, a tiny table, and a couple of chairs. A ladder leads to the entrance, and some flowers perfume this little space. Cats, flies, and many other animals animate his life, sharing interests such as music and sweets. The printing technique called 'Minolta 3D printing' makes outlines perceivable through touch (Figure 3). Graphics are cold-transferred onto thermo-sensitive microcapsule paper and then passed through a special oven, which makes the dark parts of an illustration rise. A subsequent coloring session adjusts tones and tints for those who need contrast to aid understanding of the medium (Figure 4). The experience then develops into a memory game to strengthen the information collected while reading. After that comes the turn of a puzzle, to see whether children remember the protagonist's main traits and are able to transfer them onto a larger version of the creature (Figure 5). In the end, the child is called to handle a big cuddly toy depicting Lini. Conclusions Imagination, the ability to create relations between known elements, leads to the representation of virtual concepts which would otherwise lack a real representation. The use of fantastic or analogue languages is thought to stimulate the inner world, to communicate emotions, and to develop the personal creativity of children (Figure 6). The fairy tale represents a rich language that helps children to enhance their inner life skills, which can help to form associations between emotional and bodily experiences. In the absence of vision, the other senses work as functional substitutes and thus are often improved (i.e., sensory compensation). Consequently, sensory compensations allow blind individuals to interact with the external world and perform everyday activities [24]. Imagination visualizes, captures, and creates mental pictures. Creativity comes to the surface, and inventions circumscribe its contents, needs, and requirements, describing its final physical declination. Haptic and auditory experiences are necessarily correlated, while vision allows parallel processing of multiple distinct inputs into a unique, meaningful representation. This variable is the main element to be considered. The invariant aspect, supported by the latest neuroscience studies, concerns the production of mental images, equivalent to inner pictures, intended as copies of previous sensory, not only visual, impressions. Figure 1. Synoptic schedule related to the anthropometric relationships of children as their height increases. Figure 2. The choosing of the protagonist: from left to right, a prairie dog, a crib, and a marmot. Below, the progressive drawing of the marmot: the simplification is useful to facilitate children affected by visual impairments in the recognition of the different parts. Drawings made by E. Reinaudo [23].
Figure 3. Some cards used for the memory game. The name of the subject is written in Braille, while the drawings have contour lines in relief, in order to help the children recognize the subjects. The cards have beveled edges: three of them are rounded, while the upper right corner is more linear. This is a tactile indicator that facilitates the correct reading direction of the card. Figure 4. The playing cards depict similar and simple elements: one subject is the protagonist; the second is subordinate and aggregate, and can be a cue of wonder, question, or necessity, stimulating the ability to relate the information. This mechanism, at the basis of fantastic elaboration, can lead into the imaginative process and, in the future, drive to creativity and invention. Figure 5. On the left, one of the two designs for the memory game. On the right, dimensions and complete drawing of the puzzle. The creation of a single big box allows users not to go out of the game space. The number and size of the pieces take into account the perceptible size of touch, the amount of information that the child can store, and his age. Figure 6. The protagonists of the full story: the male companion of the protagonist, a bumblebee, a cat, and a canary. All the drawings, born as sketches on paper, are later imported into the Adobe Illustrator application. The final version of the various subjects is reported before being sent to the printer, then heated in the oven and colored by hand.
2017-12-15T11:23:24.092Z
2017-11-14T00:00:00.000
{ "year": 2017, "sha1": "c2698a5df6fcc8c3540554879fa16207b8b26e9d", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2504-3900/1/9/903/pdf?version=1510715085", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "42bcb5da54862a2edc6074a9c15e8e392f324244", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
237445436
pes2o/s2orc
v3-fos-license
New Sensor to Measure the Microencapsulated Active Compounds Released in an Aqueous Liquid Media Based in Dielectric Properties in Radiofrequency Range

In recent years, the general and scientific interest in nutrition, digestion, and the role they play in our body has increased, yet there is still much work to be carried out in the field of developing sensors and techniques that are capable of identifying and quantifying the chemical species involved in these processes. Iron deficiency is the most common and widespread nutritional disorder, and it mainly affects the health of children and women. Iron from the diet may be available as heme (organic) iron or as non-heme (inorganic) iron. The absorption of non-heme iron requires its solubilization and the reduction of the ferric state to the ferrous state, which begins in the gastric acid environment, because iron in the ferric state is very poorly absorbable. There are chemical species with reducing capacity (antioxidants) that also have the ability to reduce iron, such as ascorbic acid. This paper aims to develop a sensor for measuring the release of encapsulated active compounds in different media, based on dielectric properties measurement in the radio frequency range. An impedance sensor able to measure the release of microencapsulated active compounds was developed. The sensor was tested with calcium alginate beads encapsulating iron ions and ascorbic acid as active compounds. The prediction and measurement potential of this sensor was improved by developing a thermodynamic model that yields kinetic parameters, allowing a suitable encapsulation design for subsequent release.

Introduction

Even though in recent years the general and scientific interest in nutrition, digestion, and the role they play in our body has increased, there is still much work to be carried out in the field of developing sensors and techniques that are capable of identifying and quantifying the chemical species involved in these processes. Until now, researchers have been monitoring the release of microencapsulated or nanoencapsulated compounds in liquid media in a static process, taking a sample of the medium and measuring it by a spectrophotometer [1–5], proximate analysis (Folin-Ciocalteu method) [6–10], or HPLC [11], which are static and invasive measurements.

According to the WHO (Geneva, Switzerland), at present, more than 25% of the world's population suffers from anaemia, mainly due to iron deficiency [12,13]. Therefore, iron deficiency can be considered the most common and widespread nutritional disorder; it is a deficiency disease that mainly affects the health of children and women, above all in developing countries but also in industrialized ones [13]. Iron from the diet may be available as heme (organic) iron or as non-heme (inorganic) iron. Heme iron is found mainly in meats (myoglobin) and blood (hemoglobin); the main sources of non-heme iron, on the other hand, are of plant origin, milk and eggs.

3D Printing Material: Acrylonitrile Butadiene Styrene (ABS)

This material (FrontierFila, Shenzhen, China) was printed with the following parameters: 235 °C extrusion temperature, 90 °C bed temperature, 100% filling, 40 mm/s print speed and a layer height of 0.1 mm. The filament has a diameter of 1.75 mm, according to the specifications of the printer extruder.
Experimental Procedure

A specific measurement system for dielectric properties was designed to allow continuous measurements during the release of active compounds. This measuring system consists of an outer shell and a measuring tank, both produced by 3D printing. Subsequently, the parts and metals chosen for the dielectric properties sensor were assembled and connected to an impedance analyzer in order to perform measurements.

Standard solutions were prepared with known amounts of iron-protein-succinylate and L(+)-ascorbic acid, in both cases at both pH levels (3 and 4.7): mass fractions from 100 to 500 ppm of iron ion, and from 50 to 3000 ppm of ascorbic acid. Once the standard solutions were prepared, they were measured to analyze the possibility of determining specific amounts of these compounds in different media using the developed sensor.

After this process of tuning and testing the sensor, three types of beads were made: calcium alginate (alginate beads), calcium alginate with iron-protein-succinylate (iron ion beads), and finally calcium alginate with L(+)-ascorbic acid (ascorbic acid beads), all submitted to a drying process at 40 °C and 0.8 bar for 24 h in a vacuum drying oven (Vaciotem-T, Grupo Selecta, Abrera, Barcelona, Spain). Amounts of 0.05 g of beads and 200 µL of medium (solutions at pH 3 and 4.7) were put into the measuring tank, and measurements of dielectric properties were made for two hours, taken at 5, 15, 30, 60, 120, 150, 200, 240, 300, 360, 420, 480, 540, 600, 1200, 1500, 1800, 2500, 3600 and 4500 s. Finally, a measure of expansion was made within the different media (solutions at both pH 3 and 4.7) for the alginate beads, iron ion beads and ascorbic acid beads.

3D Printing Protocol

The protocol followed for the design and production of the external shell and the measuring tank is divided into three steps. First, the piece was designed using a 3D design program (Tinkercad, Autodesk, Inc., Mill Valley, CA, USA), in which all the dimensions of the desired pieces were specified (Figure 1). Once the design prototype is obtained, it is sent to the 3D printer (Anet A8), setting all the previously established parameters. For this purpose, the Repetier-Host software was used to control and calibrate the printer, as well as to transmit data as a GCode file to be replicated by the 3D printer, previously heated to the optimum temperature established for each material. To convert the 3D digital design into the instructions and steps necessary to achieve the physical design, the Slic3r tool was used to cut the model into layers, generate the paths to fill it and calculate the amount of material to be extruded.
Calcium Alginate Beads Encapsulating Iron-Protein-Succinylate and Ascorbic Acid: Preparation Protocol

Alginate and iron ion or ascorbic acid beads were prepared by ionic gelation. For this, it was necessary to prepare a 1:100 solution of sodium alginate (together with the reagent to be encapsulated: iron-protein-succinylate or L(+)-ascorbic acid) and another 1:100 solution of calcium chloride, both prepared with the sodium acetate buffer. A frequency inverter (Inverter DV-700, Panasonic, Osaka, Japan) was installed to control the revolutions per minute of the peristaltic pump (Damova S.L., Barcelona, Spain, model CPM-045B). The peristaltic pump drips the solution formed by sodium alginate and the reagent to be encapsulated onto the CaCl2 solution at a ratio of 1:10, with the CaCl2 solution under continuous agitation (IKA® MS3 basic, Staufen im Breisgau, Germany). The speed used to achieve a correct drip is based on the percentage of power that the frequency inverter supplies to the pump, in this case 30%, and the distance between the tip of the needle (Thermo Fisher Scientific Oy, Vantaa, Finland) and the CaCl2 solution was 10 cm. Once the beads were obtained, they were left under stirring in the CaCl2 solution for 15 min to ensure optimum gelation [47]. After 15 min, the beads were extracted from the solution and washed 5 times with distilled water to remove the excess CaCl2.
Once the wet beads of calcium alginate and iron-protein-succinylate or calcium alginate and L(+)-ascorbic acid were formed, they were put into previously weighed crucibles and the total mass was registered. The crucibles were then placed in a vacuum drying oven (Vaciotem-T, JPSELECTA, Barcelona, Spain) at 40 °C and 0.8 bar for 24 h. After 24 h, the dried samples, with a water activity (a_w) of less than 0.35, were weighed and subsequently stored (…, San Louis, USA) to avoid possible rehydration.

Protocol for the Determination of the Expansion Capacity of the Beads in Different Media

Once the dry beads were obtained, the protocol for determining the expansion capacity was performed in triplicate: expansion capacity of the control beads in the different media (pH 3 and 4.7), expansion capacity of the ascorbic acid beads in the different media (pH 3 and 4.7), and expansion of the iron ion beads in the different media (pH 3 and 4.7). The beads were subjected to a rehydration process to quantify and replicate the increase in size that they experience inside the sensor when they come into contact with a solution of a specific pH. The assembly consists of a microscope (Juision USB Microscope) connected to a computer (MacBook Air, Apple, Cupertino, CA, USA), using the "Photo Booth" software for taking photos. As a reference distance, a micrometered glass located at the base of the bead was used (Figure 2).

A single bead was placed inside a glass crucible, then 100 µL of the solution at the corresponding pH (3 or 4.7) was added. The first photo was taken once the liquid phase came into contact with the bead. After this first photo, more photographs were taken at 5, 15, 30, 60, 120, 150, 200, 240, 300, 360, 420, 480, 540 and 600 s; from that time, a measurement was taken every 5 min up to 30 min. The images were analyzed in Photoshop® (CS5, ver. 12, Adobe Systems Inc., San Jose, CA, USA) by measuring the circumference of the 2D image of the bead (assumed spherical) against the square millimeter provided by the micrometered glass, in order to transform the measurement from square pixels to square millimeters, thus obtaining the radius of the bead sphere and, finally, the volume at each time point.
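This pixel-to-volume conversion is easy to script. The following is a minimal sketch of the calculation described above (not the authors' code), assuming the projected bead outline is circular and that the micrometered glass gives a known pixels-per-millimetre scale; the function name and the example numbers are illustrative assumptions only.

```python
import math

def bead_volume_mm3(area_px2: float, px_per_mm: float) -> float:
    """Estimate bead volume from the projected 2D area of its image.

    The bead is assumed spherical; px_per_mm is the scale obtained from
    the micrometered glass in the frame, and area_px2 is the measured
    circle area in square pixels (e.g., from Photoshop's measuring tools).
    """
    area_mm2 = area_px2 / px_per_mm ** 2           # px^2 -> mm^2
    radius_mm = math.sqrt(area_mm2 / math.pi)      # A = pi * r^2
    return (4.0 / 3.0) * math.pi * radius_mm ** 3  # V = (4/3) * pi * r^3

# Example: a bead whose projected area is 5200 px^2 at 60 px/mm
print(f"{bead_volume_mm3(5200, 60):.3f} mm^3")
```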
Determination of Liberation Kinetics

In order to obtain the release kinetics of bioactive compounds in different media, a system for measuring dielectric properties was designed and constructed (see the Results section). The measurement system was connected to an Agilent 4294A Impedance Analyzer (Agilent Technologies, Santa Clara, CA, USA) (Figure 3). An open and short calibration was performed. The measurement range was from 40 Hz to 1 MHz. A triplicate of each release process was performed: release of alginate beads in the different media (pH 3 and 4.7), release of iron ion beads in the different media (pH 3 and 4.7), and finally, release of ascorbic acid beads in the different media (pH 3 and 4.7).
Results

Throughout the last decade, the scientific community has tried to simulate human digestive processes, which allow the design of more efficient foods and drugs; here the design of sensors gains relevance, representing a technological and scientific challenge for the research community. In this context, the design of encapsulates that allow the release of active compounds to address digestive problems, nutritional deficiencies or diseases requires digestive simulation sensors to quantify said release. To this end, a sensor was developed for measuring the release of encapsulated active compounds in different media, based on dielectric properties measurement in the radio frequency range.

The sensor consisted of two parts (see Figure 4): an outer shell and a measuring tank. Figure 4a–c show the measuring tank in plan, elevation and cross-section view, and Figure 4d–f the outer shell where the measuring tank is fixed to the circuit (shown in plan and cross-section view). The material selected for the final design was ABS, due to its ability to resist acidic and basic pH media.

The measuring tank was designed with the aim of introducing the two parallel tantalum plates (0.75 cm × 1.5 cm) inside the tank, glued to the inner walls, between which the liquid phase and the beads were located (see Figure 5). In addition, two clamping rectangles were added to the base, which were put into the outer shell to improve its fixation. The measuring tank is connected to the impedance analyzer, as Figure 5 shows, obtaining the complex impedance, which can be transformed into complex permittivity, dielectric constant (ε′) and loss factor (ε″), by means of the equations shown in this figure, thanks to the parallel arrangement of the tantalum plates.
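The figure's conversion equations are not reproduced in the text, but for an ideal parallel-plate cell the transformation from impedance to permittivity is standard. Below is a minimal sketch assuming the textbook relation ε* = 1/(jωZC0), with C0 = ε0A/d the empty-cell capacitance, neglecting fringing fields and electrode polarization; the 1 cm plate separation used in the example is an assumption, since the actual gap is not stated.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def complex_permittivity(z: complex, freq_hz: float,
                         area_m2: float, gap_m: float) -> complex:
    """Convert a measured complex impedance Z into the complex relative
    permittivity of the material between ideal parallel plates:
    eps* = 1 / (j * omega * Z * C0), with C0 = EPS0 * A / d."""
    c0 = EPS0 * area_m2 / gap_m   # empty-cell capacitance
    omega = 2.0 * np.pi * freq_hz
    return 1.0 / (1j * omega * z * c0)

# Example at 200 Hz with the 0.75 cm x 1.5 cm plates and an assumed 1 cm gap
eps = complex_permittivity(z=500.0 - 800.0j, freq_hz=200.0,
                           area_m2=0.0075 * 0.015, gap_m=0.01)
print(f"eps' = {eps.real:.0f}, eps'' = {-eps.imag:.0f}")
```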
Figures 6a and 7a show the swelling of the alginate beads with iron ion and ascorbic acid, respectively, when they are put into both pH media. The swelling of both types of beads in the different media causes a subatmospheric pressure variation inside the beads that causes the entry of media from the outside. The liquid phase (LP) flux entering the beads is calculated from the volume variation, using the following equation:

J_LP = ρ_LP · ΔV / (S · Δt)

where ΔV is the volume change, ρ_LP is the density of the LP (considered equal to the density of water, since the content of solutes is very low), S is the bead surface and Δt is the time interval in seconds. The evolution of the LP flux entering the alginate beads with iron ion and ascorbic acid is shown in Figures 6b and 7b.
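As a concrete illustration of this flux calculation, the sketch below differentiates a measured bead-volume series numerically; it is a schematic implementation under the stated assumption that the LP density equals that of water, and the variable names and sample values are illustrative.

```python
import numpy as np

RHO_LP = 1000.0  # kg/m^3, LP density taken equal to water (dilute solutes)

def lp_flux(volumes_m3, times_s, surfaces_m2):
    """Liquid-phase flux entering the bead, J_LP = rho * dV / (S * dt),
    evaluated between consecutive swelling measurements (kg / s m^2)."""
    v, t, s = map(np.asarray, (volumes_m3, times_s, surfaces_m2))
    dv, dt = np.diff(v), np.diff(t)
    s_mid = 0.5 * (s[1:] + s[:-1])  # bead surface at the interval midpoint
    return RHO_LP * dv / (s_mid * dt)

# Example with three (time, volume, surface) points from a swelling curve
t = [0.0, 5.0, 15.0]          # s
v = [1.0e-9, 1.3e-9, 1.5e-9]  # m^3
s = [4.8e-6, 5.8e-6, 6.4e-6]  # m^2
print(lp_flux(v, t, s))       # kg / s m^2, one value per interval
```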
The beads immersed in an aqueous medium will release the active compound, with high ionic strength (iron ion) or moderate ionic strength (ascorbic acid), varying the dielectric properties of the medium depending on the concentration of these chemical species. It is therefore necessary to determine the relationship between these compounds and the dielectric properties, as well as the swelling of the beads, in order to quantify the release of the active compound from the encapsulation. In Figures 6a and 7a, it is possible to observe how the swelling of the beads relaxes after 360 s; the volume variation then remains approximately constant, at over 60% in the case of iron ion at pH 3 and 70% at pH 4.7, and at 80% in both cases for ascorbic acid.

In order to calibrate the release measurement system, standard solutions of active compounds were made with mass fractions of 100–500 ppm of iron ion and of 50–3000 ppm of ascorbic acid, at two pH levels: 3 and 4.7. Considering the nature of the chemical species released, with a high or moderate ionic strength, a relation between the content of these species and the dielectric properties should be observed in the section of the electromagnetic spectrum comprised in the alpha dispersion (the counterion effect). For this reason, the spectra in the region of 40 Hz to 1 kHz were analyzed, observing the strongest relationship at 200 Hz, mainly in the reactance. Moreover, it was observed that the pH had no significant effect on the measurements of dielectric properties at frequencies of kHz. For this reason, all the measurements were grouped in the same graph, relating, on the one hand, the mass fraction of iron ion to the reactance and, on the other hand, the amount of ascorbic acid to the reactance (Figure 8). This figure shows that there is a linear relationship between the mass fraction of iron and the reactance at 200 Hz. Ascorbic acid also had a linear relationship with the reactance at 200 Hz.
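A linear calibration of this kind can be fitted and inverted in a few lines. The sketch below uses ordinary least squares; the calibration numbers are purely illustrative placeholders, not the values measured in this work.

```python
import numpy as np

# Illustrative calibration data (placeholder values, not the measured ones):
# reactance at 200 Hz (ohm) for standard solutions of known mass fraction.
x_ppm = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
reactance_ohm = np.array([-950.0, -870.0, -790.0, -710.0, -630.0])

# Fit the linear model: reactance = a * ppm + b
a, b = np.polyfit(x_ppm, reactance_ohm, deg=1)

def ppm_from_reactance(x_ohm: float) -> float:
    """Invert the calibration to map a 200 Hz reactance reading
    back to a mass fraction in ppm."""
    return (x_ohm - b) / a

print(f"{ppm_from_reactance(-750.0):.0f} ppm")  # -> 350 ppm for this fit
```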
During the releasing process of each active compound in the measuring tank, the evolution of the reactance at 200 Hz was obtained for each condition of the external liquid phase, as specified in the Materials and Methods section. These measurements are shown in Figure 9a. With the values of reactance measured at 200 Hz, and using the calibration of standard solutions shown in Figure 8, the dielectric measurement was transformed into the concentration of each chemical species in the liquid phase. As can be seen in Figure 9b, the main quantity of iron ion and ascorbic acid is released in the first 450 s, with the release asymptote reached at approximately 900 s for both active compounds. In order to obtain the release flux of each active compound, it is necessary to establish mass balances for the bead/liquid-phase system. Figure 10 shows an outline of the bead/liquid-phase system and the mass balances applied to it. From these balances, it is possible to obtain the variation in the mass fraction of each active compound inside the beads during the release process.
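The mass balance itself reduces to simple bookkeeping: whatever leaves the bead accumulates in the liquid phase. A minimal sketch under the assumptions that the LP mass stays constant and that water uptake by the swelling beads is neglected (names and numbers are illustrative):

```python
def mass_in_bead(m0_bead_g: float, ppm_lp: float, m_lp_g: float) -> float:
    """Bead/LP mass balance for the active compound:
        m_bead(t) = m_bead(0) - x_lp(t) * m_lp / 1e6,
    where x_lp is the LP mass fraction in ppm (g of compound per
    1e6 g of LP) obtained from the reactance calibration."""
    return m0_bead_g - ppm_lp * m_lp_g / 1e6

# Example: beads initially loading 5 mg of compound, 0.2 g of medium,
# with 1200 ppm measured in the LP at some time t
print(f"{mass_in_bead(5e-3, ppm_lp=1200.0, m_lp_g=0.2):.6f} g")
```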
Once the mass fractions of each active compound during the release process are obtained, it is possible to calculate the release flux with the following equation:

J_i = Δm_i^B / (M_i · S^B · Δt)

where J_i is the molar flux (mol_i/s·m²), Δm_i^B is the active compound mass variation in the bead (g) during the time interval Δt (s), S^B (m²) is the bead surface at the given process time, and M_i is the molecular weight of the active compound (g/mol). Figure 11 shows the active compound fluxes released from the beads into the liquid media.

The engine that produces the release of active compounds is the chemical potential gradient at the bead/liquid-phase interface. Within the chemical potential, there are chemical and mechanical gradients that affect mass transport. The main engines for the active compound transport are the concentration gradient of the chemical species and the pressure variation induced by the bead swelling; therefore, the gradient of the chemical potential of each active compound may be defined by the Gibbs-Duhem expression as [48]:

Δμ_i = R·T·Δln(c_i) + ν_i·ΔP    (3)

where Δμ_i is the chemical potential gradient of each active compound, R is the ideal gas constant (8.314 J/mol·K), T is the temperature (K), c_i is the molar concentration (mol_i/m³), ν_i is the specific volume of i, and ΔP is the pressure gradient between the bead and the LP. The relationship between the molar flux and the chemical potential is defined by the first Onsager reciprocity relation, according to the following equation [49]:

J_i = L_i·Δμ_i    (4)

where L_i is the phenomenological coefficient, expressed in mol²/J·s·m². This phenomenological coefficient describes the ability of a chemical species to transport itself through a medium. From the mass flux and the chemical potential, it is possible to calculate the phenomenological coefficient. However, with the experimental data obtained, it is only possible to calculate the concentration term of the chemical potential, as shown in Equation (5):

Δμ*_i = R·T·Δln(c_i)    (5)

where Δμ*_i is the chemical potential gradient of each active compound considering only the concentration term.
In Figure 12, the chemical potential is shown without the mechanical term; however, it is possible to assume that when the swelling of the capsule is negligible, the mechanical term will be as well.

Considering the mechanical term negligible when the bead swelling is negligible, it is possible to calculate the phenomenological coefficient for each type of bead. The results are 5.6 ± 0.7 × 10⁻¹⁰ mol²/J·s·m² for iron ion at pH 3, 2.0 ± 0.3 × 10⁻¹⁰ mol²/J·s·m² for iron ion at pH 4.7, 3.7 ± 0.3 × 10⁻¹⁰ mol²/J·s·m² for ascorbic acid at pH 3 and 9.8 ± 0.8 × 10⁻¹⁰ mol²/J·s·m² for ascorbic acid at pH 4.7. Using these phenomenological coefficients, it is possible to obtain the gradients of the chemical potential during the entire release process and, with them and Equation (3), the mechanical term can be obtained. Figure 13a,b shows the evolution of the mechanical term throughout the active compound release process. This mechanical term is induced by the swelling of the beads. This phenomenon occurs because, in their preparation, the beads undergo a dehydration process, which generates a drastic shrinkage and vitrification that causes storage of mechanical energy; this energy can only be released when the beads are hydrated again, changing to a rubbery state and recovering their native elasticity.
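The two-step estimate described here (fit L_i where swelling is negligible, then back out the mechanical term) can be sketched as follows; the logarithmic form assumed for the concentration term of the chemical potential is our reading of Equation (5), and all inputs are illustrative.

```python
import numpy as np

R = 8.314  # ideal gas constant, J/(mol K)

def dmu_conc(c_bead, c_lp, temp_k=298.15):
    """Concentration term of the chemical potential gradient, assumed
    here to take the form dmu* = R T ln(c_bead / c_lp)  (J/mol)."""
    return R * temp_k * np.log(np.asarray(c_bead) / np.asarray(c_lp))

def phenomenological_coefficient(j_flux, c_bead, c_lp, temp_k=298.15):
    """Onsager relation J_i = L_i * dmu_i. While swelling (and hence the
    mechanical term) is negligible, dmu_i ~ dmu*_i, so L_i can be
    estimated as the mean of J_i / dmu*_i over that regime."""
    return np.mean(np.asarray(j_flux) / dmu_conc(c_bead, c_lp, temp_k))

def mechanical_term(j_flux, l_i, c_bead, c_lp, temp_k=298.15):
    """Residual mechanical contribution nu_i * dP = J_i / L_i - dmu*_i."""
    return np.asarray(j_flux) / l_i - dmu_conc(c_bead, c_lp, temp_k)
```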
The mechanical term causes a flux of external liquid phase to enter and slows the transport of the active compound to the outside of the beads. This phenomenon is observed in Figure 13c,d, where the mechanical term is compared with the bead swelling. It is possible to observe that both evolve together and stop at the same time, the moment where they reach the maximum swelling. Moreover, these figures show that the mechanical terms are higher in the iron beads even though the ascorbic acid beads swell more; this may be due to the fact that the iron ions have a greater ionic strength, which affects the formation of the beads, which are based on ionic gelation.

Figure 13. (a,b) represent the evolution of the mechanical gradient of (a) the iron ion beads and (b) the ascorbic acid beads at pH 3 and 4.7; (c,d) show the relationship between the mechanical gradient and the volume variation of (c) the iron ion beads and (d) the ascorbic acid beads at pH 3 and 4.7. (•) is iron ion released at pH 3; (○) is iron ion released at pH 4.7; (▲) is ascorbic acid released at pH 3 and (∆) is ascorbic acid released at pH 4.7.
Conclusions

A system for measuring the release of microencapsulated active compounds was developed from impedance measurements in the radio frequency range. This system was tested with calcium alginate beads encapsulating iron ions and ascorbic acid as active compounds. The prediction and measurement potential of this sensor was improved by developing a thermodynamic model that allows quantifying kinetic design parameters such as the phenomenological coefficient. The sensor was tested in an aqueous liquid medium in the pH range in which digestive media are found in the stomach phase, in order to determine interferences in impedance measurements in the radio frequency range, showing great precision in the measurements and no interference with the medium. However, an effect of pH was observed on the swelling processes of the beads, possibly induced by ion-ion relationships in the gel matrix of calcium alginate. The phenomenological coefficients obtained are in the same range of values for iron (2–5.6 × 10⁻¹⁰ mol²/J·s·m²) and ascorbic acid (3.7–9.8 × 10⁻¹⁰ mol²/J·s·m²), showing an adequate encapsulation design, since it will release a similar proportion of iron and ascorbic acid, the latter acting as an antioxidant, maintaining the reduced state of iron and, therefore, facilitating its absorption.
2021-09-09T13:21:21.281Z
2021-08-27T00:00:00.000
{ "year": 2021, "sha1": "f0d2a4e8c59a8352f84f555dbe04a11b4ea513d6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/21/17/5781/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "446564f28b7c1501ba7e0179cebbb7690eb45d42", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
232353125
pes2o/s2orc
v3-fos-license
Introduction to Volume One: Future of Human Resource Development—Disruption Through Digitalisation

This chapter aims to discuss the key elements of the book and highlights the main areas for discussion and investigation. It provides an assessment of the current and future trends of technology and how it coalesces with human resource development (HRD) to impact organisations today and in the future. The chapter also provides a review of the book's structure, objectives and context.

Background to the Volume

The purpose of this section is to introduce the reader to the main themes of the book. It seeks to outline the key context and concepts explored across the chapters and enables the reader to examine the importance of understanding future trends in Human Resource Development (HRD) across the globe. The idea of producing this volume arose from the 20th University Forum for Human Resource Development (UFHRD) conference in Nottingham. Participants from across the globe travelled to the city to advance HRD thinking and practices and, together, celebrate the achievements of the HRD community. This was a great platform to debate how organisations prepare themselves to address future HRD in establishing effective organisations. It was the beginning of a journey to produce a set of chapters that offer the reader insightful knowledge on how to address future challenges and opportunities. It is simply not enough to highlight the important role of academic debate in organisational development; resources that can have a meaningful impact upon organisational and individual thinking must also be produced.

It is essential to explore how HRD influences organisations and individuals from a multi-level perspective. This entails considering the effect of context, both internal and external, as well as employee perceptions and understanding of HRD and what this means for learning, creativity and growth. Covid-19 reinforces this point and the need for HRD to shape future practices including innovation, performance, flexibility, well-being and management behaviour. The scale of the change is extraordinary, as the pandemic has drastically changed, in just a few days, the way we work, communicate, socialise and learn. Learning, in particular, is not restricted to organisations or employment. Millions of children and higher education students across the world are studying at home; in the case of the latter, many are supported by academic staff who are now also facilitating learning from home. Some degree of such a scenario will remain with us for the foreseeable future. Home study is not, however, exclusively focused on formal learning contexts, with many people taking advantage of tuition provided by professionals in, for example, cooking, baking, gardening and a wide range of crafts. Those learners and those providing tuition are doing so as a means of occupying some of the time that has become available to them because of the pandemic requirements to stay at home, and are using technology to facilitate learning.

Therefore, this volume could not be more timely given the new realities that everyone is now facing. People need to rethink how they learn, how they implement learning activities, how they identify new learning resources and, most importantly, how technology can change the way HRD is understood and conceptualised by the academic and professional communities. On a personal level, it was sometimes difficult to understand existing management perceptions in addressing organisational HRD needs.
We believe that any attempt to fully utilise HRD principles requires sufficient knowledge (both at the individual and at the organisational level), effective leadership skills and appropriate assessment of the wider business environment. It is now time to take effective action in changing old-fashioned perceptions of learning and development and to offer the space where organisations can feel secure in making effective changes through evidence-based information. In an increasingly technology-driven business environment, significant changes are taking place which are challenging long-standing assumptions about the nature of work and the roles that humans will play in the workforce of the future (Schwab 2016; Manyika et al. 2017). Digitalisation is a significant and influential factor in shaping the roles of humans in future workforces, hence the title and focus of this volume. The following section provides a further assessment of how HRD can respond to some of the challenges associated with digitalisation and related changes in the future.

Loon (2017) lists fifteen learning technologies current at the time of writing his book. These include virtual learning environments; digital/learning repository and document sharing tools; blogs (and vlogs); media streaming systems and video learning; synchronous communication tools; digital/video games; simulation games and mobile learning (p. 8). Some of these have come to the fore in response to Covid-19. For example, virtual learning environments have long been established but have probably been the saviour of being able to continue provision of higher education courses, which have switched to online learning across the world. Synchronous communication tools such as MS Teams and Zoom have been the lifeline of many business operations by facilitating staff meetings, client/customer interactions and other processes carried out from homes rather than from or in offices. The technologies in the list are also being utilised to deliver and facilitate the learning required to prepare for a return to work during the crisis; for example, training employees on how requirements for continued social distancing will be met in workplaces.

Status and Future of Digitalisation and HRD

The final item on Loon's list is the ubiquitous 'other' and thus implies more than the fifteen discussed in detail. Two forms of technology that enable learning but are not specifically mentioned in the list are webinars and Massive Open Online Courses (MOOCS). Webinars can utilise a range of software and be incorporated into learning platforms and virtual learning environments. They have been found to be welcomed by learners as a development tool (Gegenfurtner et al. 2020). However, Gegenfurtner et al. (2020) make a number of points on possible drawbacks in the use of webinars. These include the length, timing and opportunities for interaction with those delivering the webinar. They also make the point that strong and reliable internet connections and bandwidth are essential requirements, which vary across countries. That point could, of course, apply to most forms of digital learning. MOOCS is an acronym for Massive Open Online Courses. The use of the word 'courses' may suggest learning associated with education and qualifications. This impression may be reinforced by the origins of MOOCS in open educational resources, with early MOOCS being made available by universities. However, while many are still provided by universities, this is no longer exclusively the case, and other providers are now active.
Those still provided by universities are also not necessarily linked to qualifications and can be taken for whatever reason an individual has for engaging in them. There is also no reason why employing organisations cannot take advantage of MOOCS by recommending selected courses to their employees as a means of meeting their development needs, or indeed by incorporating completion of such courses into their own in-house development programmes. MOOCS are by definition open access. They are also, according to Farrow (2017), an argued exemplar of disruptive innovation in learning. Farrow, though, also questions the potential of MOOCS, not least by challenging the claimed levels of disruption that they are argued to represent. We have chosen to highlight webinars and MOOCS because they are likely to have been among the most common responses to the 'stay at home' conditions introduced by national governments. The former will have been a fairly easily implemented way for employers to continue to deliver learning to employees. The latter, if not necessarily being a first-choice response by employers, may well have enjoyed increased use by individuals with unexpected time on their hands at home. So, those two forms of digitalisation of learning are probably among the most common current examples at the time of writing during the Covid-19 crisis. For that reason, they may well also quickly become more ubiquitous post-crisis, and so two of the more common examples in our everyday experience.

There is one further aspect of digitalisation that we are confident will also become more common, although in a less overt or obvious manner. This is the use of learning analytics. Learning analytics can be an umbrella term encompassing data, metrics and analytics which can be used to enhance the effectiveness of learning experiences. However, it is also used in a specific sense to refer to the collection and analysis of learner behaviour and interaction with digital learning (Stewart 2017). For example, time spent on the learning programme or on individual components, such as reflective exercises or progress checks, can be monitored and compared across learning populations. More sophisticated data such as time spent in discussion boards; the number, nature and content of contributions to discussion boards; and learner preferences for different components of multi-media programmes, as measured by usage of each, can be monitored and analysed. Analysis can also include differences against variables such as age and gender, and time-related variables such as day of week or time of day. Statistical techniques are often applied to produce such analyses. The primary purpose of learning analytics is to improve digital learning experiences, sometimes for current learners where adjustments are possible but always for future learners. There are nevertheless legal and ethical questions that need to be addressed in the use of learning analytics (Jisc 2018). That said, it is believed that their use will continue to grow, especially in digital learning, and that the results of that growth are likely to lead to innovative and disruptive impacts on digital learning.

Artificial intelligence (AI) is the notion that machines can, one day, perform the same cognitive tasks as human beings. AI is a broad suite of technologies that also includes machine learning and learning analytics. A fundamental characteristic of AI, such as Apple's Siri, is its ability to learn effectively, which places learning in the same frame as intelligence. The case of AI in HRD, or learning and development (L&D) as it is perhaps more widely known in the workplace, provides some key insights as to the trajectories that are likely to further grow in the future.

Read and Think

AI can help to address the long-standing tension of being able to be efficient in the delivery of learning and development opportunities while at the same time being able to personalise the learning experience. In many organisations, mandatory training, such as that involving occupational health and safety, has to be retaken regularly to ensure that staff's knowledge and skills are up to date. However, while there are fundamental foundations of such training that need to be shared by everyone, the typical nature of such training tends to be undifferentiated in terms of the experience of the person, their professional needs and the degree to which the training needs to be delivered on demand. At the person level, AI enables the learning opportunity to be moulded to the needs of the person, allowing learning to be shaped according to the intrapersonal attributes and preferences of the individual, such as their learning styles for those who prefer text-based, audio, or audiovisual formats. In terms of professional needs, AI allows for sophisticated differentiation based on the person's role, such as their organisational function (for example, working outdoors or in the office, or with heavy machinery) or whether they are a manager. Different roles will have distinctive needs. Finally, AI can help track when a person last underwent training and remind them when they need refresher training, identify the learner's areas for improvement and provide more targeted training at the right time and pace.
The case AI in HRD, or learning and development (L&D) as it is perhaps more widely known in the workplace, provides some key insights as to the trajectories that are likely to further grow in the future. Read and Think AI can help to address the long-standing tension of being able to be efficient in the delivery of learning and development opportunities while at the same time being able to personalise the learning experience. In many organisations, mandatory training such as those involving occupational health and safety have to be retaken regularly to ensure that staff's knowledge and skills are to up-to-date. However, while there are fundamental foundations of such training that needs to be shared by everyone, the typical nature of such training tends to be undifferentiated in terms of the experience of the person, their professional needs and the degree in which the training needs to be delivered on demand. At the person-level, AI enables the learning opportunity to be moulded to the needs of the person such as allowing learning to be shaped according to intrapersonal attributes and preferences of the individual such as their learning styles for those that prefer text-based, audio or audio and visual formats. In terms of professional needs, AI allows for sophisticated differentiation based on the person's role such as their organisational function for example outdoors or in the office, with heavy machinery or whether they are a manager. Different roles will have distinctive needs. Finally, AI can help track when a person last underwent training and remind them when they need refresher training, identify the learners' areas for improvement and provide more targeted training at the right time and pace. Aims and Objectives This volume has a primary focus on how what might be termed information and communications technologies (ICT) affect organisational and individual life through innovation, creativity and learning. Here, we use the term digitalisation to encompass emerging, as well as established, technologies. For example, learning analytics, virtual reality and artificial intelligence are currently limited in their impact but will be much more significant in their influence on HRD in the future. It is also debatable whether these concepts are accurately placed under the umbrella term of ICT. The term 'digital learning' has also gained currency with the UK's Chartered Institute of Personnel and Development (CIPD 2019). Hence, while the term ICT may have more familiarity, it is believed the idea of digitalisation is more appropriate to the content of this volume. The scope of the volume is to capture the growing trends around digitalisation and how HRD can respond to these changes at micro and macro levels. The lessons of responding to Covid-19 to facilitate learning in a wide range of contexts will only add to the knowledge of how best to utilise technology in designing and delivering HRD. This volume provides a unique blend of chapters that offer critical assessment around HRD practices and outline how technology can be used as a learning tool to support individual and organisational goals. It aims to create a number of learning resources that will enable the reader to examine a range of wider implications on how to address learning needs in the future through utilising technological tools and innovations. 
Thus, it provides a sound platform for efficient and effective use of technology in HRD and for applying the lessons that will emerge from innovations arising from the work and non-work learning activities associated with the circumstances caused by Covid-19. In turn, this will enable practitioners to harness the potential benefits of digitalisation, and to avoid the potential drawbacks and pitfalls of simply being either fascinated or inhibited by technology, rather than assessing and evaluating how best to put it to productive use. Book Content With the aims and objectives in mind, this volume contains ten chapters (including this chapter) that cover distinct perspectives on the role of technology through the lens of HRD and its impact on organisations in a digitally connected world. The prominence of technology and organisations' dependency on it of course varies. It can facilitate, mediate, moderate, impede or create opportunities. However, while the impact of technology is relative, the two viewpoints that most people agree on are its high degree of ubiquity and that sooner or later technology will become more disruptive and have a more significant impact, even in areas that initially seemed unlikely. Chapter 1 provides an introductory assessment of the book's key dimensions and offers an insight into the key themes arising from the impact of technologies' disruption on HRD and organisations. The chapter allows the reader to get an overview of the context and assess the key objectives of the book. Chapter 2 offers insight into how new technologies, such as tools for digital communication or artificial intelligence, can have an impact on the quality of jobs by affecting work outcomes such as job satisfaction, performance, health or professional development. This chapter provides HRD with the empirical evidence it has been craving by demonstrating the degree of impact of technology in the field. The chapter contains an investigation that addresses two research questions: what are the effects of new technologies at work on individual work outcomes, and what are the implications thereof for the role of HRD in improving the quality of jobs? By reviewing and systematically analysing twenty-two studies, this chapter provides insight into the definition of technology and components of HRD from theories explaining relationships between the work context and different kinds of work outcomes. Two sources were used: studies from a concurrent review were reanalysed for the present purpose of identifying relationships between new technologies and work outcomes, and additional searches within domain-specific databases were conducted in finance and healthcare. Chapter 3 compares two countries' 'special ways' regarding HRD education provision in the era of digitalisation to inform HRD professionals and policy makers on possible future actions. In particular, the chapter undertakes a comparative assessment between the UK and Switzerland, given that they are non-EU members and have autonomy in charting their own digital strategies. A nation's digital policy is increasingly important because technological advancements heavily impact the way people work, while recent socio-political and demographic changes (e.g. 'Brexit', economic instability, higher education reforms, generational attitude changes and a pandemic crisis) increase the need for critical insights on how the digital competences of the workforce can improve and sustain business competitiveness and sustainability.
The European Union (EU) and most national governments globally have placed emphasis on digitally equipping graduates to satisfy governmental and organisational needs. While some organisations remain reluctant to foster their workforce's digital qualifications in the belief that these employees will be poached by competitors, many view digitalisation as an opportunity to enhance employees' skillsets with company-specific competences for competitive advantage. Chapter 4 addresses the calls for research exploring the implications of HRD and its likely role in the gig economy. This chapter reflects on a case study of a 'new law' digital platform firm that sought to implement an HRD strategy for its highly diverse and gig-based workforce. At a time when HRD has seen its role move from specialist to distributed, demonstrating ongoing relevance and contribution to global, real-world issues becomes paramount. The amorphous, often hidden and fast-changing nature of the gig economy presents renewed challenges for scholarship and practice in HRD. This chapter proposes how a critical HRD lens can reassert HRD as a key discipline in supporting a broader range of interests and needs in the gig economy. The critical HRD lens contributes to understanding the nature of precarious work in the gig economy by exposing localities of power and disadvantage, while also offering practical solutions for leveraging equality, capability development and knowledge transfer in the gig economy. Chapter 5 presents and assesses key areas of HRD and how they can be used to enhance an organisation's creativity and innovation capability. Particular attention is paid to recruitment and selection (e.g. the personality traits that organisations should prioritise for developing innovation capability, such as extraversion and openness, and the gamification of their measurement), training (providing content-specific knowledge and building confidence in equal measure, facilitated by coaching) and reward (ideally non-financial rewards focused at team level), and where technology may play a role. These topics are reviewed within a multi-level context, that is, one that considers both individual and team levels. This approach is particularly important given that much of the innovation process is team-led as organisations seek a holistic understanding of the complex phenomenon of innovation. Also considered is the role of innovation climate, the development of which can be facilitated by HRD practices such as training and reward, as they signal that the organisation values innovation which, in turn, solidifies a climate of innovation. Chapter 6 responds to a challenge facing the HRD community: how far should it proactively take responsibility and get involved in shaping future skill development and human interactions with technology, or will HRD, as in the past, retain a passive observer position? There is much talk of the displacement of humans by technologies, with some analysts reporting that employment in 44% of occupations in the UK is at risk, creating uncertainty about which jobs will continue. The disruption to current approaches to skill development and the identification of what new skills are needed require attention. For people to retain relevance, more attention is needed on those skills that resist automation and technology replacement by the Fourth Industrial Revolution. Chapter 7 provides an overview of e-learning and the value it offered during lockdowns imposed by many governments.
The sudden lockdown of many businesses and educational institutions at the start of the pandemic made e-learning more necessary than ever before. E-learning is a well-known training approach and is widely practised by many businesses globally to support and enhance their employees' learning experience. E-learning represents the safest way to train in times of global crisis events as it allows the trainers and the trainees to interact virtually through an online platform which serves as the virtual classroom, free of the dangers entailed by physical interaction. Yet, the extent to which this sudden shift to online learning represents the future of workplace training and learning, or whether it is just a temporary alteration for human resource development, is debatable. The Covid-19 outbreak is expected to accelerate learning and work reinvention, resulting in multiple implications for businesses in relation to institutional resilience. Chapter 8 explores the benefits of technological innovations, including increased productivity and efficiencies and minimised pressures on human workers, freeing up their time to provide more complex forms of care. However, technology is costly, subject to failure and can also impede care provision and cause issues such as being more time consuming and changing working relationships, roles and responsibilities. As a result, the chapter considers the human resource development implications in operationalising technological innovations in care, comprising careful and well-communicated implementation; systematic integration into work practices, taking account of revised roles and responsibilities; and addressing user anxieties and ensuring provision of training and development activities which reflect changing skills and competencies. Chapter 9 unpacks how small and medium enterprises (SMEs) in the creative enterprise industry play a critical role in a nation's economic growth, its development of jobs and subsequent wealth creation. The constraints facing creative enterprises, however, have seldom been explored or critiqued extensively. This chapter investigates the external factors hindering the growth and development of SMEs in creative enterprises in Gulf Co-operation Council (GCC) countries (Saudi Arabia, Kuwait, Bahrain, Qatar, Oman and the United Arab Emirates) and suggests implications for research and practice. By doing so, the chapter demonstrates how HRD plays a vital role in overcoming issues facing SMEs in creative enterprises at a national level. Challenges in realising a truly genuine SME industry based on creative enterprise initiative and implementation are many and often profound. This study highlights how economic and labour market factors, compounded by a faltering education system, have negatively impacted the development of creative enterprise in the GCC. Chapter 10 is an interview with Dr Wilson Wong, who is the Head of Insight and Futures at the Chartered Institute of Personnel and Development (CIPD) and Chair of the Human Capital Standards Committee at the British Standards Institution (BSI). Wilson shares with us his insights as to what the future holds for HRD and organisations in a post-pandemic world. In the interview, Wilson argues that technology has not lived up to its promise in our fight against Covid-19.
He intimates that the underlying problems are multi-faceted, and therefore the solution(s) need to be holistic: from taking care of the environment to preclude such events from occurring again, to government policies that prepare nations for these highly unlikely but impactful events, to organisational business models that mediate technology to enhance cost efficiencies.
Infection with influenza A viruses causes changes in promoter DNA methylation of inflammatory genes Background Replication of influenza virus in the host cells results in production of immune mediators like cytokines. Excessive secretion of cytokines (hypercytokinemia) has been observed during highly pathogenic avian influenza virus (HPAI-H5N1) infections, resulting in high fatality rates. Objective The exact mechanism of hypercytokinemia during influenza virus infection is still not known completely. As promoter DNA methylation changes are linked with expression changes in genes, we intend to identify whether changes in promoter DNA methylation have any role in the expression of cytokines during influenza A virus infection. Methods A panel of 24 cytokine genes and genes known to be involved in inflammatory response were analyzed for their promoter DNA methylation changes during influenza A virus infections. Four different strains of influenza A viruses, viz. H5N1, H1N1, pandemic (2009) H1N1, and a vaccine strain of H5N1, were used for the study. Results We found seven of the total 24 inflammatory genes studied showing significant changes in their promoter methylation levels in response to virus infection. These genes included the proinflammatory cytokines CXCL14, CCL25, CXCL6, and the interleukins IL13, IL17C, IL4R. The changes in DNA methylation levels varied across different strains of influenza viruses depending upon their virulence. Significant promoter hypomethylation in the IL17C and IL13 genes was observed in cells infected with HPAI-H5N1 virus compared with other influenza viruses. This decrease in methylation was found to be positively correlating with the increased expression of these genes. Analysis of the IL17C promoter region using bisulfite sequencing resulted in identification of a CpG site within the Retinoid X receptor-alpha (RXR-α) transcription factor binding site undergoing demethylation specifically in H5N1-infected cells but not in other influenza-infected cells. Conclusion Thus, the study could demonstrate that changes in promoter methylation in certain specific cytokine genes actually have a possible role in their expression changes during influenza A virus infection.
Introduction Influenza A viruses are an important causative agent of respiratory tract infections and diseases. The clinical outcome of influenza virus infection, which includes fever, pneumonia, and even death, is a complex interplay of viral and host factors. Along with the viral factors, host cellular responses also play a significant role in virus pathogenesis. Influenza viruses, which infect the epithelium of the upper and lower respiratory tract after entry through the oral or nasal route, have been shown to cause secretion of many cytokines and chemokines in avian and mammalian hosts. 1-4 The production of cytokines by infected cells, which is caused by viral surface glycoproteins, double-stranded RNA, and intracellular viral proteins, is also dependent on host immune responses. 4,5 Cytokine-mediated inflammatory responses have been linked to influenza pathogenesis. The mechanism of induction of cytokines by influenza virus is not completely understood but was found to vary depending on the cell type and the strain of influenza virus. 4 Host immune response in the form of excessive secretion of cytokines (hypercytokinemia/cytokine storm) was found to be characteristic of highly pathogenic avian influenza virus HPAI-H5N1 infection and is believed to be associated with human mortality. 1,6,7 However, this response was found to vary among different strains of H5N1 viruses. 8,9 Also, influenza viruses of other subtypes like panH1N1 (2009) could induce cytokine production in infected cells comparable to H5N1 viruses. 9 This clearly indicates greater involvement of host cellular factors in responses to influenza virus infection. In this study, we wanted to investigate whether epigenetic modifications like DNA methylation changes are involved in the expression of inflammatory genes during influenza virus infection. DNA methylation has been shown to play an important role in regulating gene expression in eukaryotes. 10,11 Modifications at regulatory regions, particularly gene promoters, correlate well with the transcriptional state of a gene: hyper-methylation represses transcription, while hypo-methylation can lead to increased transcription levels. Gene silencing by means of hypermethylation of tumor suppressor genes is a well-known feature of viruses which cause cancers in human cells. 12,13 Changes in host DNA methylation have been shown to be caused by viruses which integrate into the host genome. 13,14 Viruses such as Epstein-Barr virus (EBV) and human immunodeficiency virus (HIV) remain latent inside the host cells through epigenetic modification of their genome, thus mimicking the host genome and preventing recognition by host immunosurveillance. 13,15
Also, other viruses epigenetically regulate host gene expression, preventing the activation of immune and apoptotic proteins required for inhibiting viral replication in the host cells. There are limited reports which indicate that such a mechanism is used by viruses which do not integrate into the host genome. Influenza viruses, which replicate as extracellular virion particles, do not integrate into the host genome and are not associated with any type of human cancer. But recent studies have shown involvement of epigenetic regulation in the expression of certain cytokine genes during influenza A virus infection. 16-18 The present study, which involves promoter DNA methylation analysis of immune genes known to be involved in influenza-mediated inflammatory response in human cells infected with four different strains of influenza virus, will provide new insight into the ways viruses interact with and modulate host cellular responses. Materials and methods The four influenza A virus strains (HPAI-H5N1, the vaccine strain RG-H5N1, seasonal H1N1, and pandemic (2009) H1N1) were used for the study as described earlier. 19,20 Human lung epithelial (A549) cells used for virus infection were maintained in Dulbecco's modified Eagle's tissue culture medium (Invitrogen Life Technologies, Carlsbad, CA, USA) containing 10% fetal calf serum, 100 units/ml penicillin, and 100 μg/ml streptomycin in tissue culture flasks (Corning, NY, USA) at 37°C in a CO₂ incubator. Virus infection A549 cells at a concentration of 3 × 10⁶ cells/ml were infected with the above-mentioned influenza viruses at a multiplicity of infection (MOI) of 1. After 1 hour, the inoculum was removed; the cells were washed twice with phosphate-buffered saline (PBS) and supplemented with growth media. For each virus, different sets of tissue culture flasks were infected and cultures harvested at the 16 hours postinfection (hpi) time point. At around 16 hpi, virus progeny particles are completely assembled inside the cells, which can give an increased host response; hence, analysis was carried out at this time point. Mock-infected cells of the respective time point were taken as controls. Infection with the viruses was performed in a BSL-3+ facility. Analysis of DNA methylation Genomic DNA isolated from control and infected cells was analyzed for DNA methylation in the promoter region of 24 genes involved in inflammatory response, using Methyl-Profiler DNA Methylation qPCR Assays according to the supplier's instructions (SABiosciences Corp., Frederick, MD, USA). Briefly, genomic DNA was isolated using a QIAamp DNA Mini Kit (Qiagen, Valencia, CA, USA) and treated with RNase to remove potential RNA contamination. For each assay, a total of 1 μg of genomic DNA per sample was used. The Methyl-Profiler DNA Methylation qPCR Assay is based on the digestion of unmethylated and methylated DNA, using methylation-sensitive and methylation-dependent restriction enzymes. The DNA remaining after digestion is quantified by real-time PCR, using primers that specifically flank the promoter region containing the CpG island. For this analysis, the relative concentrations of differentially methylated DNA (specifically hypermethylated, unmethylated, and intermediately methylated DNA) are determined by comparing the amount of each digest with that of a mock digest. For each sample, data are expressed as the sum of the percent hypermethylated, intermediately methylated, and unmethylated DNA. An ABI7300 real-time PCR instrument was used to read the plates.
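As a concrete illustration of the digest-versus-mock principle just described, the following is a minimal Python sketch of one way the three methylation fractions could be estimated from qPCR Ct values. It is not the manufacturer's macro; the Ct numbers are invented, and the simple 2^(-ΔCt) quantification model is an assumption for illustration.

```python
# Minimal sketch (not the vendor's macro): estimating methylation fractions
# by comparing each enzymatic digest with the mock (no-enzyme) digest.
# All Ct values below are invented for illustration.

def fraction_surviving(ct_digest: float, ct_mock: float) -> float:
    """Fraction of input DNA remaining after a digest, relative to mock,
    using the usual qPCR relation: amount proportional to 2**(-Ct)."""
    return 2.0 ** (-(ct_digest - ct_mock))

ct_mock = 25.0  # mock digest of one gene promoter
ct_ms = 25.5    # methylation-sensitive digest (cuts unmethylated DNA)
ct_md = 29.0    # methylation-dependent digest (cuts methylated DNA)

hypermethylated = fraction_surviving(ct_ms, ct_mock)  # survives MS digest
unmethylated = fraction_surviving(ct_md, ct_mock)     # survives MD digest
intermediate = max(0.0, 1.0 - hypermethylated - unmethylated)

for label, frac in (("hypermethylated", hypermethylated),
                    ("intermediate", intermediate),
                    ("unmethylated", unmethylated)):
    print(f"{label:>16}: {100 * frac:5.1f}%")
```

With these toy numbers the promoter comes out roughly 71% hypermethylated, 23% intermediately methylated, and 6% unmethylated, which is the kind of per-gene breakdown the assay reports.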
The proportions of hypermethylated, intermediately methylated, and unmethylated DNA for each gene were calculated with the standard ΔΔCt method using the manufacturer-supplied Excel macro spreadsheet. Bisulfite sequencing Bisulfite conversion was carried out using the EpiTect bisulfite kit (Qiagen) according to the manufacturer's instructions. Briefly, 500 ng of DNA was treated per column, and purified DNA was eluted in 20 μl elution buffer. Purified DNA was used as template for PCRs with the following primers for the CpG island of the human IL17C gene: 5′-GTTGTTTTAGAGTTTGTTGGTGTTG-3′ (sense) and 5′-ATCCAATCTAAAAACCCCAC-3′ (antisense), synthesized according to bisulfite-converted DNA sequences for the regions of interest using the Methprimer software. 21 The PCR product was gel-purified and sequenced by conventional Sanger sequencing. Real-time quantitative reverse-transcription PCR (RT-PCR) Total RNA extracted from control and infected cells was used for quantitative real-time PCR using the Quantitect SYBR green one-step RT-PCR kit (Qiagen, Carlsbad, CA, USA). All quantifications [threshold cycle (Ct) values] were normalized to that of β-actin and analyzed to determine the relative level of gene expression. The relative fold change was determined using the standard 2^(−ΔΔCt) method. The experiments were carried out in triplicate. The RT-PCRs were carried out using the following gene primers: IL17C-Fwd 5′-CATCGATA-; the primer sequences for DNA methyltransferases are described earlier. 22 Unmethylated represents the fraction of input genomic DNA containing no methylated CpG sites in the amplified region of a gene. Methylated represents the fraction of input genomic DNA containing two (IM) or more (HM) methylated CpG sites in the targeted region of a gene. The level of methylation of each gene was compared between infected and control cells (Figure 1). Promoter DNA methylation analysis of inflammatory cytokine genes in A549 cells infected with influenza A viruses The analysis revealed that the genes IL13 and IL17C were hypermethylated, whereas the genes CXCL6 and CXCL14 were intermediately methylated in control A549 cells. Infection with influenza viruses resulted in a decrease in promoter methylation of the IL13 and IL17C genes (Figure 2). This decrease in methylation was maximum (50%) in cells infected with the highly pathogenic H5N1 virus. Cells infected with pH1N12009 showed no significant change in methylation of IL17C but a marginal (10%) decrease in IL13 methylation levels (Figure 2). Infection with seasonal H1N1 and the vaccine strain of H5N1 resulted in a minor (20%) but significant decrease in promoter methylation of these genes compared with controls (Figure 2). The genes CXCL6 and CXCL14 also showed a decrease in methylation, most prominently in cells infected with the H5N1 virus (80% and 60%, respectively; Figure 2). In contrast, we observed a minor increase in the levels of promoter methylation in the CCL25 and IL13RA1 genes in cells infected with H5-subtype viruses (30%) compared with H1 viruses (10%; Figure 2). The remaining 17 genes in the assay panel did not show any significant methylation change in response to influenza virus infection (Figure 3).
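Before turning to the expression results, a small worked sketch of the 2^(−ΔΔCt) fold-change calculation named above may help; the function and all Ct values below are hypothetical illustrations, not the study's measured data.

```python
# Minimal sketch of the standard 2^(-ΔΔCt) relative-expression calculation;
# Ct values are invented, not measured data.

def fold_change(ct_gene_inf: float, ct_actin_inf: float,
                ct_gene_ctrl: float, ct_actin_ctrl: float) -> float:
    """Relative expression (infected vs control), normalized to beta-actin:
    dCt = Ct(gene) - Ct(beta-actin); ddCt = dCt(infected) - dCt(control)."""
    ddct = (ct_gene_inf - ct_actin_inf) - (ct_gene_ctrl - ct_actin_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values for one cytokine gene in infected vs mock cells.
print(fold_change(ct_gene_inf=26.0, ct_actin_inf=17.0,
                  ct_gene_ctrl=30.5, ct_actin_ctrl=17.2))  # ~19.7-fold up
```

Here ΔΔCt = (26.0 − 17.0) − (30.5 − 17.2) = −4.3, so the gene is up-regulated about 2^4.3 ≈ 19.7-fold relative to the mock-infected control.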
Expression analysis of the genes showing significant methylation changes in response to influenza infection To understand whether the differential methylation status of the above-mentioned inflammatory genes caused by influenza virus infection actually correlates with their expression changes, we analyzed the expression of the genes showing the most drastic increase or decrease in promoter DNA methylation levels. For this, we selected the IL17C, IL13, and CXCL6 genes for real-time PCR analysis (Figure 4). We found significant up-regulation of these three genes in influenza-infected cells, and in accordance with the decrease in methylation levels, there was a corresponding increase in expression levels, the highest being in H5N1-infected cells. The CXCL6 gene, an important cytokine known to be involved in inflammatory responses to influenza viruses, was up-regulated 12-fold compared with controls. In RG-H5N1-infected cells, CXCL6 was up-regulated ninefold, while in pH1N12009- and seasonal H1N1-infected cells it was up-regulated eight- and sixfold, respectively. Also, the IL17C gene was significantly up-regulated in H5N1-infected cells, approximately twofold more than in other influenza-infected cells (Figure 4). Promoter analysis of the IL17C gene using bisulfite sequencing To confirm the observations obtained from the Methyl-Profiler assay and to identify specific sites undergoing demethylation, bisulfite sequencing of the CpG island present in the promoter region of the IL17C gene was carried out with the control and influenza virus-infected cells (Figure 5). The CpG island was identified using the CpG island finder software (http://bioinformatics.wistar.upenn.edu/cpg), which indicated the location of the CpG island 1 kb downstream of the transcription start site (TSS). The software did not show precise results for the IL13 and CXCL6 promoters. Using specific primers designed with the Methprimer (http://www.urogene.org/cgi-bin/methprimer/methprimer.cgi) Web-based tool, a region of 200 bp inside the CpG island containing 16 CpG sites was amplified. We found that the CpG site at position +1254 from the TSS was demethylated in cells infected with all the strains of influenza viruses. Interestingly, however, the CpG site at position +1290 was dramatically demethylated exclusively in H5N1-infected A549 cells (Figure 5). The region was found to be highly methylated in control A549 cells. Sequence analysis 23 further revealed that the +1254 CpG is located within the NF-κB binding site, whereas the +1290 CpG is located in the Retinoid X receptor-alpha (RXR-α) transcription factor (TF) binding site. These two TFs play important roles in influenza infection-mediated activation of signal transduction pathways in host cells. Expression analysis of DNA methyltransferases in influenza A-infected cells In mammalian cells, DNA methylation is governed by methyltransferases. To determine whether the changes in promoter methylation of inflammatory genes are mediated by DNA methyltransferases, we analyzed the expression of DNMT1, DNMT3a, and DNMT3b at the mRNA level in control (uninfected) cells and A549 cells infected with the four different influenza viruses. The transcriptional levels of the three methyltransferases were determined using quantitative real-time RT-PCR (Figure 6). We observed a decrease in expression of DNMT3a and DNMT3b as well as DNMT1 with all the influenza viruses except in H5N1-infected cells, where the expression of these methyltransferases was instead up-regulated.
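To make the bisulfite scoring behind results like those in Figure 5 concrete, here is a toy sketch of per-CpG methylation calling: bisulfite converts unmethylated cytosines so they read as T after PCR and sequencing, while methylated CpG cytosines remain C. The sequences are invented (not the IL17C promoter), and the read is assumed to be already aligned to the reference with no indels.

```python
# Toy sketch of per-CpG methylation calling from a bisulfite Sanger read:
# a retained C at a CpG means methylated; a C->T conversion, unmethylated.
# Sequences are invented; the read is assumed aligned with no indels.

def call_cpgs(reference, bisulfite_read):
    """Score every CpG in `reference` against the aligned bisulfite read."""
    assert len(reference) == len(bisulfite_read)
    calls = []
    for i in range(len(reference) - 1):
        if reference[i:i + 2] == "CG":
            state = "methylated" if bisulfite_read[i] == "C" else "unmethylated"
            calls.append((i, state))
    return calls

ref = "ACGTTACGA"    # two CpG sites, at offsets 1 and 6
read = "ACGTTATGA"   # second CpG converted (C -> T), i.e. unmethylated
print(call_cpgs(ref, read))  # [(1, 'methylated'), (6, 'unmethylated')]
```

Applied across reads from control and infected cells, this kind of per-site call is what the filled and empty circles of a figure like Figure 5 summarize.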
Discussion Acute inflammation caused by excessive secretion of cytokines has been observed with influenza virus infection, especially with H5N1 and pathogenic strains of H1N1 viruses. In this study, we provide evidence that changes in promoter DNA methylation of inflammatory genes are involved in the excessive secretion of cytokines during infection with influenza viruses. Also, the change in methylation level greatly depends on the strain of the influenza virus and its pathogenicity. We observed the most significant changes in DNA methylation of inflammatory genes with highly pathogenic H5N1 influenza viruses. Our results showed that IL17C and IL13 were the main genes regulated by an epigenetic mechanism during influenza virus infection. IL-17C, a member of the interleukin-17 family, is selectively induced in epithelial cells by inflammatory stimuli. 24 IL-17C functions in an autocrine manner and binds to receptors (IL-17RA and IL-17RE) which are preferentially expressed on tissue epithelial cells. Figure 5 presents the DNA methylation analysis of the IL17C gene promoter region using bisulfite sequencing in A549 cells infected with HPAI-H5N1, RG-H5N1, seasonal H1N1, and pandemic H1N1 2009 influenza viruses: using specific primers designed for the CpG island present in the IL17C gene promoter, a region of 200 bp containing 16 CpG sites was amplified and sequenced, with filled circles representing methylated CpGs and empty circles unmethylated CpGs (16 circles for 16 CpGs). Further analysis revealed that the +1254 CpG lies in the binding site of NF-κB and the +1290 CpG lies in the binding site of the Retinoid X receptor-alpha (RXR-α) transcription factor; the +1254 CpG site was found to be demethylated in all the influenza virus-infected cells, while the +1290 CpG was exclusively demethylated in H5N1-infected cells. IL-17C plays an important role in stimulating epithelial inflammatory responses, including the expression of proinflammatory cytokines and chemokines. It also plays a significant role in activation of adaptive immune responses to viral infection. 24 IL13 has also been shown to cause influenza virus-mediated lung inflammation and allergic response. 25 In our study, we found that the IL17C and IL13 promoters are methylated and transcriptionally inactivated in uninfected cells. Infection with influenza viruses resulted in a decrease in promoter methylation of these genes, which was found to be virus strain specific. For example, the decrease in methylation level in pH1N12009-infected cells was not significant compared with the control cells; however, methylation decreased by 20% in the case of seasonal H1N1- and RG-H5N1-infected cells. But the decrease in the level of promoter DNA methylation was most prominent and significant (50%) in H5N1-infected cells, clearly indicating the role of virus pathogenicity in these epigenetic modifications. This result was further verified by bisulfite analysis of the IL17C promoter region, which resulted in identification of CpG sites undergoing demethylation specifically in H5N1-infected cells. This demethylation was not observed in other influenza virus-infected cells. Further analysis of this region revealed that the CpG site specifically undergoing demethylation is a binding site for the RXR-α transcription factor. RXR-α has been shown to play a significant role in regulating the expression of cytokines involved in inflammatory response. Binding of RXR-α to the promoter region has been reported to be essential for the transcription of the CCL6 and CCL9 chemokines. 26
Loss of methylation at that CpG site, as observed in our study, might facilitate binding of RXR-α at the promoter region of cytokines, causing their increased expression in H5N1-infected cells. Influenza virus infection-mediated promoter DNA methylation changes in inflammatory genes have been reported earlier. 16,18 It has been shown that aberrant DNA methylation changes in the IL32 promoter region resulted in its transcriptional activation in response to influenza virus infection. 16 Also, specific CpG demethylation at the CREB1 binding region was important for the regulation of COX2 gene expression during influenza virus infection. 18 We also observed that the binding site of the NF-κB transcription factor at the promoter region of the IL17C gene was demethylated in all the influenza virus-infected cells. The transcription factor NF-κB has been shown to play an important role in influenza virus-mediated host immune responses. 4,27 Our result further signifies the involvement of NF-κB in host responses to influenza virus infection, which is not dependent on the virulence or pathogenicity of the virus. This indicates that activation of NF-κB is a generalized response to influenza virus infection. To understand the mechanism further, we analyzed the expression of the three methyltransferases (DNMT1, DNMT3a, and DNMT3b) in the influenza virus-infected cells to investigate the involvement of DNA methyltransferases in the epigenetic regulation of inflammatory genes in response to influenza infection. Gene expression analysis showed that all three types of DNA methyltransferases were affected by influenza virus infection, indicating their involvement in the promoter methylation changes in inflammatory genes. We observed down-regulation of the DNMT3a and DNMT3b genes in all the influenza virus infections except in H5N1-infected cells. This observation was in accordance with earlier studies where a decrease in the expression of DNMT3a and DNMT3b was observed in cells infected with H3N2 influenza A virus. 18 However, the increased expression or up-regulation of DNA methyltransferases in H5N1-infected cells indicates a strain- and subtype-specific host cellular response and needs further studies. Overall, this study provides evidence that infection with influenza viruses can cause epigenetic changes such as changes in DNA methylation. This mechanism is used to regulate the expression of host inflammatory genes and thus can play an important role in regulating host immune responses against influenza viruses. However, detailed understanding of this needs further investigation.
Applications of Nanomaterials in Dentistry: A Review ABSTRACT Aim and Objective: Currently, the major priority in the field of nanotechnology or nanoscience is research and development at the atomic- or molecular-level sciences. Almost every aspect of human health, including pharmaceutical research, clinical research and analysis, and supplemental immunological systems, is significantly impacted by it. Diverse dental applications of nanotechnology, which also reflect developments in the material sciences, have given rise to the field of nanodentistry and nanocatalytic drug development, especially in oral nanozyme research and application. This review aims to provide readers an in-depth analysis of nanotechnology's characteristics, varied qualities, and applications in dentistry. Materials and Methods: A query was carried out in the PubMed and Google Scholar databases for articles published from 2007 to 2022 using the keywords/MeSH terms nanomaterials, dentistry, nanozymes, metals, and antibacterial activity. Data extraction and evidence synthesis were performed by three researchers independently. Results: A total of 901 articles were extracted, out of which 108 were removed due to repetition and overlap. After further screening following the exclusion and inclusion criteria, 74 papers that primarily addressed dental nanotechnology were considered pertinent and chosen. Further, the data have been extracted and interpreted for the review. The results of the review indicated that the development of multifunctional nanozymes has been continuously assessed in relation to oro-dental illnesses, showing the significant impact that nanozymes have on oral health. Conclusion: As evidenced by the obtained results, with the advent of ongoing breakthroughs in nanotechnology, dental care could be improved with advanced preventive measures. conditions are the major pitfalls halting its widespread use. Most of the research published in the previous 20-30 years has paid attention to nanoparticles, indicating that nanotechnology and the properties of materials at these scales are of tremendous interest. This significant interest has a wide range of applications in nanodentistry, including nanoceramics, restoratives like nanocomposites, nanoglass ionomers, nanometals, and nanolocal anesthesia. [4] Fabricating nanomaterials for dental applications requires intricate subject-matter expertise to generate novel regenerative, drug-releasing, and implant materials. When it comes to building nanodentistry products, the biomimetic technique is still in the testing phase, while the bottom-up and top-down methods are the norm. Clinicians, researchers, and material scientists working with nanomaterials for dentistry require insights into the role of nanomaterials. Although there are several review articles on the role of nanozymes and nanomaterials in dentistry, the data available still lack direction toward a successful outcome. Therefore, we aim to provide a brief narrative review of this area, partially following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, for evidence and advances in nanomaterials corresponding to medicine and dentistry. Materials and Methods A query on databases such as PubMed and Google Scholar was carried out.
The MeSH terms "nanomaterials," "dentistry," "nanozymes," "metals," and "antibacterial activity," combined with AND and OR, were used to search the databases for data extraction from peer-reviewed scientific journals from the year 2007 to 2022. The articles were included based on the following criteria: the year of publication, the first author's name, the title's applicability, the goal of the connected publications, and publication in English. On the other hand, conference proceedings, nonoriginal papers, studies irrelevant to the fields of medicine and dentistry, and excessive extraneous information were factors for exclusion. Results Scientific papers such as literature reviews, systematic reviews, clinical trials, and original studies from the past 15 years have revealed updates on nanomaterials, nanozymes, and their influence in the field of dentistry. A total of 429 articles were extracted from PubMed and 472 articles from Google Scholar. Following the inclusion and exclusion criteria thoroughly, we chose and examined 74 articles published between 2007 and 2022 that have at least one of the keywords listed above in the title or abstract. After a thorough analysis of all 74 articles, we have 37 original research articles, 34 reviews (comprehensive, clinical, critical, narrative, systematic, and meta-analysis), 2 randomized clinical trials, and 1 case report providing the data for the current review topic [Figure 1: evidence search and data extraction for nanomaterials in dentistry]. The data search, identification, and extraction were conducted independently by three researchers in an Excel sheet. The summary of the findings from the above literature has been categorized into the following subheadings. Catalytic Activity of Nanoparticles Nanotechnology has gained much popularity in the field of medicine due to the ease it offers in analyzing and manipulating bonds at an atomic level. This adds a great advantage toward innovation of new materials, drugs, diagnostic aids, etc., especially in the field of dentistry. Based on the physical and chemical properties of various metals and composites, nanomaterials like carbon nanotubes, graphene, hydroxyapatite, titania, and silver have been manufactured in the form of crystals, nanopores, nanodrops, etc., each with exclusive beneficial properties. These materials are developed strategically to improve conventional materials' properties. [3] Even though various nanoparticles have been evolving, synthesizing and manufacturing these nanoparticles has been a sensitive task. One of the most commonly used and most demanding groups for synthesis is the platinum and palladium group, which has effective catalytic activity. Iridium, ruthenium, platinum, osmium, rhodium, and palladium make up the platinum-group metals (PGMs), often known as the platinum family or platinum metals, which comprise six noble, valuable metallic elements grouped in the periodic table. These metals have excellent catalytic characteristics and are commonly employed in industry as catalysts. Although many other uses of these elements are still highly essential, the automobile industry has emerged as the primary user of PGMs in the last three decades. To demonstrate the widespread interest in PGM nanoparticle research, the keyword "nanoparticle" and the metal names (iridium, palladium, rhodium, platinum, osmium, and ruthenium) have been used in the Scopus database.
Platinum nanoparticles have 18.2 × 10³ articles in the Scopus database, palladium has 13.9 × 10³ articles, ruthenium has 4.1 × 10³ articles, rhodium has 2.4 × 10³ articles, iridium has 1.3 × 10³ articles, and osmium has 1.8 × 10² articles. This article demonstrates the synthesis of nanoparticles from the less commonly used members of the platinum family, like ruthenium, osmium, rhodium, and iridium. Further modifications in synthesis can be made to alter the electronic structure or chemical properties, improving the heterogeneity of the catalyst. It was also shown that the basic geometry and crystalline structure of the nanoparticles could be controlled through various factors, like the type of precursor metal, temperature, solvent, and reagent concentration, which influence their catalytic activity. The platinum-derived nanoparticles have distinctive catalytic uses, like carbon monoxide oxidation, Fischer-Tropsch synthesis, decomposition of nitrogen oxide, and conversion of carbon dioxide into organic compounds. Together with hydrogen storage, they can be applied in various fields like drug development, dental applications, filters of X-ray devices, and automobiles. With proper synthesis knowledge, these nanoparticles customized from supported and unsupported PGMs can make a huge impact as catalysts in commercial markets, replacing the currently used ones. [5] Nanozymes as an Antibacterial Agent Dental caries is one of the most common infectious diseases of the mouth, affecting 2.4 billion individuals worldwide. [6] The oral biofilm produces a variety of illnesses that endanger oral health and can lead to systemic sicknesses, such as atherosclerosis, diabetes, and Alzheimer's disease, all of which have significant medical costs and disastrous complications. [7-9] Researchers have made significant advances in developing innovative, consistent, and effective oral antibacterial medications that exploit enzyme-like activity. [10] Nanozymes aid in the prevention of root canal biofilm infection. According to Koo's findings, activating H₂O₂ can significantly eliminate biofilm plaque from the surface of a root canal and dentinal tubules. They developed two types of antibacterial catalytic robots. When put in a limited environment, the three-dimensional catalytic antibacterial automaton is a soft robot that allows precise effects (P < 0.0005). [11] According to their findings, catalytic antibacterial robots equipped with dual-catalytic magnetic iron oxide nanoparticles may, in magnetic fields, generate free radicals, break down the exopolysaccharide matrix, and clear biofilm particles. The leaf-shaped catalytic antibacterial robot removes exopolysaccharide matrix and biofilm buildup from intricate dentinal tubules while also killing microorganisms. The study discovered that catalytic antibacterial robots might be used to treat the severely limited anatomical surfaces of human teeth. By removing biofilms and eradicating germs, these catalytic antibacterial robot systems can counter persistent biofilm-associated diseases and prevent cross-contamination of medical equipment and other surfaces. [11] Healing of Oral Lesions Several studies [12] have associated oral ulcers with genetic susceptibility, bacterial and viral infections, allergies, vitamin and trace element insufficiency, systemic diseases, and other disorders.
As there are no unique drugs in current treatment techniques, we must develop therapeutic methods that increase the body's immunity and facilitate ulcer healing. [13] According to Naha et al., [14] vitamin B2-modified Fe₃O₄ nanozymes with anti-inflammatory and antibacterial activities hasten the healing of oral ulcers. According to the researchers, this modification raised their enzyme-like activity and significantly improved their superoxide dismutase (SOD)-like activity with reactive oxygen species (ROS) scavenging capability. Cellular antioxidation studies revealed that these enzymes were biocompatible and capable of protecting cells from H₂O₂. These nanozymes kill Streptococcus mutans, reduce local inflammatory factors, and remove reactive oxygen species, which helps mice heal from oral ulcers faster. This enzyme-like antibacterial mediator could be a viable therapy for mouth ulcers. Nanozymes in Oral Health Monitoring Oral health observation can track oral diseases and their risk factors, providing a basis for early analysis and treatment, improving oral health, improving people's living conditions, and evaluating treatment outcomes. [15] As a result, it is critical and urgent to create and complete an oral disease observation system. [16] To this end, nanozymes may offer a novel approach to monitoring oral illness [17] due to their good sensing properties; they can monitor ions, chemicals, proteins, nucleic acids, and cancer cells. [18] Nanozyme research in dentistry is currently focused on monitoring ions and nucleic acids. [19] The color difference of the colorimetric biosensor is then used to identify the target. [20] Nanozymes Play Three Critical Roles in This Process
1) Surface modification: nanozymes can adsorb ions or nucleic acids, which act as a surface modification and boost their catalytic efficiency. As an illustration, the fluoride ion (F⁻) may adsorb in large amounts on a nanozyme, altering the charges over its surface and further elevating its catalytic efficiency.
2) Specificity and sensitivity: a nanozyme is generally bound to a specific target; if the nanozyme binds to all substances tested, monitoring will not work well.
3) Metachrosis: nanozymes have a stronger affinity for colorimetric biosensor substrates, resulting in dramatic color changes and a monitoring role.
Nanoceria can be used to monitor F⁻ effectively, as Liu and his colleagues discovered. [20] According to their research, pure nanoceria's catalytic activity rose by more than 100 times when F⁻ was added. Additionally, oxidase-like activity can be prolonged by mixing F⁻ and nanoceria, whereas nanoceria alone inactivates in less than a minute. The study's lower detection limit for the fluoride ion in both water and toothpaste was 0.64 M. Because the other anions under investigation were unable to raise activity in the same way as F⁻, F⁻ monitoring has become extremely sensitive. The results demonstrate that the properties of nanoceria can be used to rapidly and accurately detect F⁻ in toothpaste and drinking water. [20] Oral Cancer Monitoring Oral cancer is the fourth most common tumor, posing a threat to oral health and carrying high death and recurrence rates, which is the most concerning issue for oral medical professionals. Tissue biopsy, formerly the gold standard for diagnosing oral cancer, is becoming unable to fulfill the demands of current analysis and therapy. Its intrusiveness causes patients discomfort and may lead to tumor cell spread.
Early, accurate monitoring allows for early discovery, analysis, and treatment, as well as avoiding the distress of minor operations due to a mismatch between the postoperative biopsy and the surgical procedure. [21,22] Nanomonitoring technology has proven highly efficient, sensitive, and quick for detecting DNA associated with oral cancer or lesions as a novel noninvasive supplementary monitoring strategy. [23] Dental Purpose of Nanomaterials Damage to tooth tissue can lead to problems such as oral precancerous and cancerous lesions, tooth decay, periodontitis, hyperesthesia, and bad breath. The aforementioned problems can be treated using therapeutic techniques and biocompatible synthetic materials. [20] Nanomedicines used as dental materials have physicochemical and biological characteristics that set them apart from conventional dental therapies in overcoming side effects. Various types of nanomaterials have been found in studies to imitate host tissue properties, albeit there is little knowledge of such features among dentistry groups. [21,24] As a result, the current analysis centers on the properties of various metal- and polymer-based nanomaterials [22] utilized in dental adhesives and restoratives, acrylic resins, periodontology, tissue engineering, endodontics, and implant dentistry. [25,26] Nanomaterials in Preventive Dentistry Preventive dentistry is essential and plays a significant role because of the growing body of knowledge regarding oral problems. [27] Nanomaterials are used in preventive dentistry to regulate biofilms on the surface of teeth and to remineralize early, submicron-sized enamel lesions. [28] A formulation of silver nanoparticles (Ag NPs) was developed by Schwass et al. [29] for the purpose of caries removal. Silver nitrate (AgNO₃) was chemically reduced by sodium borohydride (NaBH₄) in the presence of sodium dodecyl sulfate to produce micelle aggregation structures that included monodispersed stabilized Ag NPs ranging in size from 6.7 to 9.2 nm. Microplate measurements, which measure the absorbance of crystal violet at 590 nm, showed significant alterations in the biofilms that had been treated with Ag NPs. Bacterial sensitivity is unaffected by the presence of sugar. This Ag NP formulation showed promise for therapeutic use in inhibiting the formation of in vitro biofilms for several Streptococcus spp. and Enterococcus faecalis strains. Manikandan et al. [30] studied the production of silver oxide nanoparticles (Ag₂O NPs) and their antibacterial activity against dental bacterial strains using Ficus benghalensis prop root extract (FBPRE) as a stabilizing and reducing agent. They found that higher extract concentrations and longer time frames resulted in a significant increase in the formation of NPs. The combination of FBPRE and Ag₂O NPs showed extremely strong antibacterial activity against the dental bacteria Lactobacillus sp. and Streptococcus mutans. After multiple animal tests, they concluded that combining the synthesized FBPRE with Ag₂O NPs as a germicidal component in toothpastes would be advantageous. Nanomaterials in Edentulism Edentulism has significant negative consequences, including a decreased intake of essential foods and an unattractive appearance, and it is becoming more common in many nations. [31] Even though tooth loss estimates have dropped, the age range in which edentulism is still prevalent has widened. As a result, denture therapy is essential in public health, and its importance will only increase as the population ages.
[32] Using a modified sol-gel synthesis method to create polymethyl methacrylate (PMMA)/TiO₂ nanocomposites, Totu et al. [33] used nanosized TiO₂ filler; morphological and structural analyses revealed that the TiO₂ nanofiller dispersed uniformly in the PMMA solution. According to the experimental results, adding TiO₂ NPs modified the structure and characteristics of the polymer; 0.4% TiO₂ NPs in the nanocomposite dramatically altered the FTIR spectrum. The addition of TiO₂ NPs to the PMMA polymer matrix produced antibacterial effects, particularly against Candida species, as demonstrated by using a 0.4% nanocomposite for complete denture production with a stereolithographic approach. According to Rodrigues Magalhaes et al., [34] TiO₂ nanotubes might be used to improve the biological and mechanical properties of dental materials. Tetragonal zirconia polycrystals stabilized with yttria (Y-TZP) are increasingly employed in dentistry as the foundation for crowns and fixed partial prostheses. However well it performs in the clinic, Y-TZP is susceptible to issues, including microstructure-related faults introduced during the fabrication process, which could jeopardize its structural and clinical dependability. While monitoring each manufacturing phase, the researchers evaluated the role of the blanks' production technique and of adjustments to the original composition by including TiO₂ nanotubes (0%, 1%, 2%, and 5% by volume). The experimental Y-TZP characteristics were altered by including TiO₂ nanotubes in various combinations, resulting in lower flexural strength. The microstructure of Y-TZP was also modified by the nanotubes, which led to larger grain sizes, more pores, and a slight increase in the monoclinic phase. Additionally, adding TiO₂ nanotubes enhanced the structural reliability and Weibull modulus values. The impact of nano-zirconium oxide (nano-ZrO₂) nanoparticles on the mechanical properties of PMMA denture base material was examined by Gad et al. [35] The PMMA tensile strength of the test groups with 2.5%, 5%, and 7.5% NZ was considerably greater than that of the controls. The inclusion of nano-ZrO₂ significantly improved the tensile strength, with the 7.5% NZ group showing the largest gain. The experimental groups' translucency levels were much lower than those of the controls. Within the powdered group, the 2.5% NZ group exhibited higher values for translucency than the 5% NZ and 7.5% NZ groups. While PMMA translucency decreased as the nano-ZrO₂ concentration climbed, the tensile strength of the denture base acrylics increased. Challenges and Opportunities of Nanozymes in Oral Functions The bulk of the catalytic activity of nanozymes in dental applications and investigations comprises peroxidase-, SOD-, oxidase-, and catalase-like activities, which may bring about irreversible bacterial/biofilm destruction. Since nanozymes may significantly enhance their enzymatic activity when exposed to DNA or ions, they can be used as colorimetric biosensors to track ions, bacteria, or DNA linked to oral cancer. By promoting cell adhesion, proliferation, and differentiation in a sterile environment, nanozymes can also help soft and hard tissue regeneration.
The nanozyme family has shown promising results in dentistry as a means of overcoming the drawbacks of conventional H₂O₂ concentrations, buffering oxidative interference brought on by the habitat during cell proliferation and differentiation, eliminating oral flora by degrading biofilms, and monitoring F⁻ and Streptococcus mutans using a quick and easy procedure. Nevertheless, there are still many aspects of nanomaterials that have not been appropriately used, as well as benefits from other fields of medicine that have not been thoroughly examined in the dental scenario. Few researchers have linked their study of nanozymes with this understanding of these aspects of dentistry, but their combined work could encourage more people to utilize nanozymes. Because of its superior mechanical properties and X-ray resistance, gutta-percha is the most approved root canal sealing material and is widely utilized in dentistry. [36] On the other hand, because gutta-percha is not antibacterial, it is difficult to get rid of germs from the root canal. Antibacterial nanozymes can be used to produce gutta-perchas, or nanomaterials' mechanical and thermal properties can be fully exploited to produce antibacterial products. Robots that clean root canals were developed by Koo's team. It would be interesting to know if these robots can be recycled using the nanoparticles' magnetic characteristics. Plaque treatment and other dental procedures, including cleaning root canals and teeth, also employ ultrasound technology. Whether nanozyme activity can be multiplied when exposed to ultrasound is a question that has to be explored further. [37] Other Therapeutic Background Nanozymes and other catalytic nanomaterials have contributed to the development of the idea of nanocatalytic medicine, which has promise for tissue regeneration, antibacterial activity, tumor therapy, and monitoring. [38] Although there is little research on nanozymes in dentistry, the advantages of nanozymes in other medical specialties may shed some light on this issue. Only a small number of studies have used nanozymes in dentistry, compared to the numerous ones that have used them in biomedicine for tumor surveillance, treatment, and prevention. Because the blood vessels and lymph nodes in the mouth and face are so abundant, cancers frequently spread and are challenging to treat. [39] Maxillofacial surgeons have been working hard for years to develop a solution for oral cancer. In order to reduce distant metastasis and prevent recurrence, the removal of excessive tissue and further resections are still employed in oral and maxillofacial surgery, which may eventually reduce the quality of life of the patient. [40] Fan et al. discovered that, without needing cofactors or reagents, magnetoferritin nanoparticles adhered directly to cancer cells that overexpress transferrin receptors. By observing the color reaction, the tumor tissue can be identified. [41] Clinical samples were used to verify the nanozymes' excellent specificity and sensitivity. If the findings are applied to oral and maxillofacial surgery, surgeons can remove malignant lesions more precisely and prevent the agony caused by expanded excision. Das discovered that nanozymes have neuroprotective properties, suggesting that they could be used to help repair facial nerve damage caused by oral and maxillofacial surgery. [41] Oral Cavity's Physiological Location The mouth cavity is where the digestive tract begins.
Food and saliva will carry nanozymes that remain in the oral cavity after local activity into the body. Several research methodologies advise replenishing Streptococcus gordonii in order to maintain ongoing suppression of oral biofilms. S. gordonii is a common oral bacterium, although some investigations have shown that it can cause endocarditis when it enters the blood vessels. The body's long-term accumulation of nanozymes is also important to take into account. Nanozymes can be absorbed through the gastrointestinal tract, albeit likely at very low concentrations. Furthermore, pH-dependent nanozymes may react significantly with gastric acid, increasing the strain on the gastrointestinal system. [42] An excessive nanozyme load may promote weight loss and increase oxidative stress in the blood and liver, and DNA damage from nanomaterials is also a possibility. The immunogenicity of nanozymes was not taken into account in the bulk of reported toxicity studies, since they only used mice or cells for a short duration; potential threats must be examined in long-term evaluations. Nanozymes can cause cytotoxicity to varying degrees depending on their type and dosage. There is currently no assessment standard in place; these issues must be resolved. [42]

Discussion

Oral disorders, such as periodontal disease, oral cancer, and dental caries, afflict nearly 3.5 billion people worldwide, according to a series of papers published in The Lancet in 2019. [15] To aid significantly in the prevention and treatment of oral diseases, stomatology is developing in lockstep with biomaterials. [43] On the other hand, traditional dental materials (such as silver amalgam alloys) have limitations (such as tooth weakening and fractures) that can lead to complications and treatment failure. [44,45] Nanomaterials have opened up a world of possibilities for improving oral health, restoring oral function, and improving quality of life. [46,47] Natural enzymes such as proteases and amylase, which have antibacterial, anti-inflammatory, and immunity-boosting capabilities, have long been studied and used in conditions such as periodontitis, oral ulcers, and dental caries. [48,49] Natural enzymes, however, have certain drawbacks, including poor stability under extreme conditions (heat and high pH), expensive production, labor-intensive separation and purification, and challenging long-term storage, among others [Table 1]. [50] Nanozymes have several outstanding biological effects, including antibacterial, [51-55] antioxidant, anti-inflammatory, [2,54,55] and biosensing [7] effects, in addition to the intrinsic features of nanomaterials, such as fluorescence, photothermal effects, and near-infrared imaging. [56-58] Many researchers have recently proposed using them for disease detection, tissue regeneration, and as anticancer and antibacterial agents. [58-61] Nanozymes were first employed to treat oral plaque biofilm in 2016, [62,63] achieving notable results in disrupting biofilm formation with the help of dextran-coated iron oxide nanoparticles. [8] Since then, the synthesis and characterization of nanozymes have advanced considerably. Nanozymes are still inferior to natural enzymes, which have long been available for medicinal use and as toothpaste additives; natural enzymes have an efficient catalytic mechanism, high catalytic activity, and are physiologically safe while whitening teeth and reducing dental plaque and calculus.
The catalytic mechanism of nanozymes, on the other hand, is still a topic of discussion; how nanozymes perform enzyme-like catalysis without defined catalytic activity centers remains puzzling. [63] Recent studies have steadily shown that the catalytic activity of nanozymes correlates with fundamental characteristics of the nanomaterials, such as size, composition, and form, as well as with the reaction environment, such as temperature, pH, and reactants. The problem of poor catalytic activity, however, may be only partially resolved by altering these features. Metal sulfides, for example, are used as proton-trapping tools to produce H2S and expose Fe3+ in order to boost catalytic effectiveness. The nanozymes with enzyme-like capabilities most often employed in dental research were peroxidase, catalase, oxidase, and SOD. Researchers have also incorporated DNase activity into the modification process, and DNA engineering enables substrate-specific binding of nanozymes for efficient oral monitoring. However, unlike a genuine enzyme, nanozymes are unable to bind substrates selectively because they lack the intricate anatomy of a true enzyme's substrate-binding pocket. In conclusion, numerous potential obstacles stand in the way of nanozyme research and implementation in dentistry. Scientists must work to understand the precise catalytic mechanism of nanozymes and to develop new varieties of nanozymes that meet therapeutic demands on the path to clinical application. Dental researchers must address urgent clinical concerns, collaborate to understand nanozyme mechanisms at the molecular level, and analyze potential problems in nanozyme use. [64] Because of the limited data available on the performance of these nanomaterials, a systematic review pertaining to this title was not conducted; one may become possible in future research as more useful literature is added to the databases. This review instead concerns the physical challenges faced in studying nanomaterials, their synthesis, and recent advances. Despite the rise in in vitro and in vivo research, we encountered a shortage of statistically robust data during data extraction. It would be of great advantage if future experimental studies were carried out with larger samples, and if further systematic reviews were conducted to assess the efficacy of these experiments.

[Table 1, fragment: No. | Enzyme-like activity | Nanomaterial | Application | Ref.]
7 | Glutathione peroxidase | Graphene oxide | Biosensing | [7]
8 | Peroxidase | Dextran-coated iron oxide nanoparticles | Disrupting oral biofilm | [8]
9 | Sulfite oxidase | Molybdenum trioxide | Cytoprotection | [9]
10 | Glutathione peroxidase | Vanadium pentoxide | Antioxidant | [10]

Such nanozyme-based solutions are both energy saving and beneficial from an economic and environmental standpoint, and these applications are anticipated to impact several different economic sectors. With these solutions, there may be a chance to relieve the environmental burden. The greatest challenge currently facing medicine is to understand the pathophysiology of a disease, followed by its diagnosis and treatment. Scientific platforms conducting larger research programs in nanotechnology have produced clinical breakthroughs, leading to greater opportunities in diagnostics and treatment, including the development of preventive measures. With the increasing use of nanomaterials, owing to their unique physicochemical properties, comes the associated risk of exposure to nanomaterials as well.
Hence, it is essential for researchers and technology-development platforms to study the life cycles of nanomaterials and to assess their possible risks and hazardous effects for universal benefit.

Financial Support and Sponsorship

This work was supported by Saveetha Institute of Medical and Technical Sciences, Saveetha Dental College and Hospitals, Saveetha University, and M/s Trend Fashions India Pvt Ltd.

Conflicts of Interest

All the authors declare no conflict of interest.
2023-03-12T15:47:42.647Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "7fcb49f4f8b9cd207b75d14f3e790991b26a8c53", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "d28fe4d3138a376663774cda5ec95a65f2cdd70f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
119242073
pes2o/s2orc
v3-fos-license
DA White Dwarfs in the Kepler Field

We present 16 new, and confirm 7 previously identified, DA white dwarfs in the Kepler field through ground-based spectroscopy with the Hale 200", Kitt Peak 4-meter, and Bok 2.3-meter telescopes. Using atmospheric models we determine their effective temperatures and surface gravities to constrain their position with respect to the ZZ Ceti (DA pulsator) instability strip, and look for the presence or absence of pulsation with Kepler's unprecedented photometry. Our results are as follows: i) From our measurements of temperature and surface gravity, 12 of the 23 DA white dwarfs from this work fall well outside of the instability strip. The Kepler photometry available for 11 of these WDs allows us to confirm that none are pulsating. One of these eleven happens to be a presumed binary, KIC 11604781, with a period of ~5 days. ii) The remaining 11 DA white dwarfs are instability strip candidates, potentially falling within the current, empirical instability strip, after accounting for uncertainties. These WDs will help constrain the strip's location further, as eight are near the blue edge and three are near the red edge of the instability strip. Four of these WDs do not have Kepler photometry, so ground-based photometry is needed to determine the pulsation nature of these white dwarfs. The remaining seven have Kepler photometry available, but do not show any periodicity on typical WD pulsation timescales.

Introduction

White dwarfs (WDs) are extremely compact objects, and are the final evolutionary stage of ~95% of stars. Even though WDs are well characterized and heavily studied, more can be learned about their internal structure through asteroseismology, especially since the advent of precise space-based photometry such as that provided by Kepler. We focus here specifically on characterizing DA white dwarfs located in the Kepler field, and on constraining the onset of the instability strip. DA white dwarfs are the most common WD spectral type and have hydrogen-dominated envelopes (or photospheres), which produce strong hydrogen Balmer absorption lines. Some DA white dwarfs are observed to pulsate. For these WDs, asteroseismology can be used to determine their interior structure and composition (e.g., Gilliland et al. 2010). Pulsating DA white dwarfs are found to exist only in a narrow region in the T_eff-log g plane, known as the instability strip. The instability strip in DA white dwarfs is located in the temperature range 10 800 K < T_eff < 12 300 K (Bergeron et al. 2004; Mukadam et al. 2004), where the κ- and γ-mechanisms in the hydrogen partial ionization zone drive the pulsations. The instability strip was determined from observations to have a small dependence on mass as well (e.g., Giovannini et al. 1998). The purity of the instability strip for DA white dwarfs was first theorized by Fontaine et al. (1982); if a DA white dwarf was found to lie within the instability strip, it would have to be pulsating. This led to studies by Fontaine et al. (1982) and Greenstein (1982) arguing that DA white dwarf pulsations are a phase through which all DA white dwarfs evolve as they cool. If this is true, then studying the seismological properties of pulsating DA white dwarfs would provide constraints on the properties of all DA white dwarfs (e.g., Daou et al. 1990).
From theoretical modeling, the blue edge of the instability strip is predicted to be a sensitive function of WD mass and of the physical characteristics of the hydrogen envelope, especially the effectiveness of convective mixing (Winget & Fontaine 1982). Studying the blue edge of the instability strip by determining effective temperatures of pulsating, and non-pulsating, DA white dwarfs which fall near this boundary will inform us about the pulsation mechanism and its characteristics. Additionally, studying the red edge of the instability strip allows for the determination of which mechanism is responsible for stopping pulsations at this point (Tassoul et al. 1990; Gianninas et al. 2005). In this paper, we set out to find pulsating DA white dwarfs in the Kepler field. Kepler photometry is ideal for measuring photometric variability because of its precision and long timeline of nearly constant observations. Modeling and analysis of spectroscopic observations of WDs allow for the determination of effective temperature and surface gravity. These properties are used to determine whether a DA white dwarf falls within the instability strip, and whether it should thus be pulsating. Photometric observations are then used to verify the hypothesis of pulsation, or stability, when Kepler data is available. We present spectroscopic and photometric data for 23 DA white dwarfs in the Kepler field. We discuss the identification and selection of WD candidates, as well as details of the observations and data reduction, in §2. We then discuss the spectral modeling used to determine each white dwarf's effective temperature (T_eff) and surface gravity (log g) and the period analysis in §3. Results of the modeling and period analysis are presented in §4 for each of the 23 DA white dwarfs. Finally, the conclusions are presented in §5.

Observations and Data Reduction

Target objects were chosen based on a program which surveyed the Kepler field in UBV in order to find new blue (hot) objects. From this survey (Everett et al. 2012), photometric calibrators in the form of hot WDs were searched for, as non-pulsating WDs are known to be very stable photometric sources. We searched the photometric data for objects with colors near B − V = 0.0 and U − B = −0.8 or bluer, yielding several hundred candidates. Candidates for the 2.3-meter Bok telescope follow-up program (see §2.1 below) were selected based on their blue color alone. Candidates for the Hale 200″ follow-up program were selected based on a combination of blue color and high proper motion, performed as follows. The list of blue candidates was cross-correlated against the SUPERBLINK all-sky proper motion catalog (Lépine & Gaidos 2011; Lépine, in prep.), which identifies stars with proper motions µ > 40 mas yr⁻¹ and visual magnitudes V brighter than 20, including in the Kepler field. The identification of a faint blue object as a high proper motion star usually indicates that it is a hot subdwarf or white dwarf (Lanning & Lépine 2006). Our subset of <100 blue, high proper motion stars, along with some additional blue objects, were observed using ground-based optical spectroscopy (discussed in the following section) to determine their spectral types. The ground-based spectroscopy revealed various spectral types of WDs, active galactic nuclei, and chromospherically active stars, among others.
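To make the selection concrete, the following Python sketch applies the color and proper-motion cuts described above to a synthetic catalogue. The columns are random stand-ins for the UBV survey and SUPERBLINK data, and the 0.2 mag window around B − V = 0.0 is an assumed tolerance, not a value taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # hypothetical stand-in for the UBV survey catalogue

# Illustrative columns; in practice these would come from the Everett
# et al. (2012) survey cross-matched with the SUPERBLINK catalogue.
V = rng.uniform(12.0, 21.0, n)   # visual magnitude
BV = rng.normal(0.6, 0.4, n)     # B - V color
UB = rng.normal(0.1, 0.5, n)     # U - B color
mu = rng.exponential(15.0, n)    # total proper motion [mas/yr]

# Blue-object cut: colors near B - V = 0.0 and U - B = -0.8 or bluer.
blue = (np.abs(BV) < 0.2) & (UB <= -0.8)

# SUPERBLINK-style cut: mu > 40 mas/yr and V brighter than 20.
high_pm = (mu > 40.0) & (V < 20.0)

candidates = blue & high_pm
print(f"{candidates.sum()} blue, high proper-motion candidates")

The two boolean masks mirror the two follow-up programs: the Bok candidates would come from the blue mask alone, while the Hale candidates require both cuts.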
Four sources (KIC 4829241, 6212123, 3354599, and 10149875) were selected as likely hot WDs in the Kepler input catalog after being matched to sources on the POSS I (Palomar Observatory Sky Survey) blue plates. White dwarfs with counterparts in the SUPERBLINK proper motion catalog are denoted below with their SUPERBLINK catalog number ("PMI" prefix) in addition to their KIC number. Other white dwarfs are identified by their catalog number from the UBV survey ("Blue" prefix) and from the Howell-Sing-Holberg Survey, conducted on the Bok 2.3-meter telescope ("HSH" prefix). As the HSH spectra turned out to be of low quality, the subsequent observations were obtained with the 4-meter telescope.

Spectroscopic Observations

Optical, ground-based spectroscopic observations were obtained for 10 of the WDs from the Palomar 200″ (5.1-meter) Hale Telescope on Mount Palomar in California. We used the Double Beam Spectrograph (DBSP), which is composed of two channels, blue and red, with wavelength ranges of 3500-5000 Å and 5500-7500 Å, respectively. The observations utilized blazed gratings with low to medium resolution, R ≈ 4500 (red channel) and R ≈ 3000 (blue channel), and a slit width of ~1″. Observations were taken over several observing runs throughout August 2013. For 9 of the WDs, optical, ground-based observations were obtained from the Kitt Peak Mayall 4-meter telescope using the KPC-22b grating in the second order on the Ritchey-Chrétien (RC) spectrograph. The blue spectral resolution for the setup used is ~5000, providing a wavelength coverage of 3700-5100 Å with a dispersion on the CCD detector of 0.72 Å pixel⁻¹. The slit was set to 1″ and used in an east-west (90°) orientation for all observations. For the 4 remaining WDs, optical, ground-based observations were obtained with the Steward Observatory 2.3-meter Bok telescope at Kitt Peak, using the Boller and Chivens spectrograph. The 2005 September observations were taken with the 832 lines/mm grating used in the 2nd order at two different grating tilts to cover the blue region (~3400-4850 Å), and in the 1st order, in conjunction with a UV filter, to cover the red region (5000-7000 Å) with two separate grating tilts. A 1.5″ slit was used, producing a resolving power of ~2700. An observing log for all of our spectroscopy is presented in Table 1. All spectra were reduced with IRAF (Image Reduction and Analysis Facility) software, using well-known routines (noao/imred package) to extract one-dimensional optical spectra. A single one-dimensional spectrum is extracted from the raw spectrum by determining the extraction region, subtracting out the background, and fitting a function along the extraction axis. Once the spectrum is extracted, it is flux calibrated using observations of a spectrophotometric standard star. It is then wavelength calibrated using arc or comparison lamp spectra, which have known rest wavelengths. Blue and red spectra from the Hale telescope were reduced separately due to the gap in data and the difference in resolution between the channels; the channels were combined for analysis. The blue channel spectra were analyzed first in order to identify the DA white dwarfs, because all of the Balmer lines except Hα are located within the blue wavelength range.

Photometric Observations

Photometric observations were obtained by the Kepler 0.95m space telescope, which has a bandpass of 4300-8900 Å.
Kepler observed a 115 square-degree field of view in the Cygnus constellation almost non-stop, obtaining unprecedented photometric coverage of this field from 2009 to 2013. Discovering planets via transits was the main objective of Kepler, but its photometric precision and depth, down to 21 Kepler magnitudes, are also extremely beneficial for astronomical studies of variable systems. Kepler has two modes of observation, long cadence (LC; 30-minute exposures) and short cadence (SC; 1-minute exposures), both of which are used in this work. All Kepler data are archived and publicly available from the Mikulski Archive for Space Telescopes (MAST), where the light curves for our DA white dwarfs were obtained. Light curves were available for eighteen of the twenty-three DA white dwarfs. There are two pipeline-reduced data sets available for download and analysis: PDCSAP (pre-search data conditioning simple aperture photometry) and SAP (simple aperture photometry). Both of these pipeline-reduced light curves are adequate for photometric studies, although the user can obtain the raw data and re-reduce it using PyRAF routines, if necessary. PDCSAP data was utilized for the DA white dwarfs in this paper. Table 2 lists the photometric observation information for all of our white dwarfs in columns 8 and 9; LC and SC refer to long and short cadence data, and "Qtrs" indicates the appropriate 90-day blocks of Kepler data.

Modeling and Analysis

Each DA white dwarf was initially identified visually in the spectroscopic data from its easily recognizable spectral signature: DA white dwarf spectra exhibit characteristic, gravity-broadened absorption lines of the hydrogen Balmer series. The parameters of the DA white dwarfs were then determined by fitting atmospheric models to the observed spectra. Based on the effective temperature and surface gravity derived from the best fit model, we assessed whether the WD would be expected to be in the instability strip, and thus a pulsator. We then used the Kepler photometric data, when available, to determine if there is any evidence of variability and to confirm or deny our pulsation hypothesis.

Spectral Models

DA white dwarf models were provided by Detlev Koester (priv. comm.); a discussion of the models and their usage can be found in Koester (2010). The models were produced using four basic assumptions of stellar atmosphere modeling: 1) homogeneous, plane-parallel geometry; 2) hydrostatic equilibrium; 3) radiative and convective equilibrium; and 4) local thermodynamic equilibrium. We used a group of 594 models with a temperature grid spacing of 250 K from 6000 K to 20 000 K and 1000 K from 20 000 K to 30 000 K, with log g spacing of 0.25 dex from 7.00 to 9.00 dex. Errors for measured effective temperature and gravity at the edges of the model range are either marked as zero (in the text), as "···" (in Table 2), or left blank (in Figs. 21, 25, and 26). In order to fit the models to the observed DA white dwarf spectra, both the models and the spectra were normalized to the continuum. For all lines (Hα to Hξ) except Hβ (for the Hale telescope), a continuum section on either side of the spectral line was selected and a linear regression was fit to these sections, excluding the absorption line itself. For Hβ (Hale only), only the continuum on the blue side of the line was used, as the red wing of Hβ is diminished in intensity due to the grating efficiency falling off on the red side of Hβ.
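The following Python sketch illustrates this normalization scheme: a straight line is fit to continuum windows on either side of a line (or the blue side only, as for Hβ in the Hale spectra) and divided out. The function name and window widths are illustrative assumptions, not the code or values used in the paper.

import numpy as np

def normalize_balmer_line(wave, flux, center, line_half_width,
                          cont_width, blue_side_only=False):
    """Normalize one absorption line to its local continuum.

    wave, flux : 1-D spectrum arrays (Angstroms, arbitrary flux units)
    center     : rest wavelength of the Balmer line [A]
    line_half_width, cont_width : illustrative window sizes [A]
    """
    # Continuum window blueward of the line, excluding the line itself.
    sel = (wave > center - line_half_width - cont_width) & \
          (wave < center - line_half_width)
    if not blue_side_only:
        # Add the redward window; skipped for Hbeta in the Hale spectra,
        # where the red wing is depressed by the grating efficiency.
        sel |= (wave > center + line_half_width) & \
               (wave < center + line_half_width + cont_width)

    # Linear regression through the continuum points.
    slope, intercept = np.polyfit(wave[sel], flux[sel], deg=1)
    continuum = slope * wave + intercept

    # Return the line region with the continuum normalized to unity.
    region = np.abs(wave - center) < line_half_width + cont_width
    return wave[region], flux[region] / continuum[region]

The same routine would be applied to the model spectra, so that model and observation are compared on a common, unit-continuum scale.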
There is no systematic difference between the fits of Hα alone versus the inclusive fits of Hβ-Hξ and Hδ-Hξ in the Hale spectra. This process was used for the spectral models as well, to normalize the modeled and observed spectral continuum to unity. The models were folded with a Gaussian profile matching the instrumental width of the spectrograph before being fitted to the observed spectra. Each Balmer line was then fitted individually for each WD, and the lowest χ² value was calculated. The best global fit was determined by fitting all the lines simultaneously and minimizing the total χ² for all five (or six) lines (Hβ to Hξ, and Hα where available). Stacked plots of all the Balmer lines and their best global model fits are discussed and displayed in §4. Also displayed in §4 are χ² contour maps in T_eff-log g space, which show the best fit model, the lowest χ², and the empirical instability strip for comparison. A discussion of the errors and how they were determined can also be found in §4.

Period Analysis

Kepler photometric data of each WD was run through an IDL routine in order to compute the Lomb-Scargle periodogram. The Lomb-Scargle periodogram is a mathematical computation, similar to a discrete Fourier transform (DFT), which identifies the most likely periods in time series data (Scargle 1982). These calculations allowed us to determine if there were any likely periods for the WDs, which would imply pulsations. Pulsating DA white dwarfs typically have periods of up to ~1 day, but usually much less than 1 day, ~100 to 1000 s (e.g., Bergeron et al. 1995; Fontaine & Brassard 2008). The range searched when looking for periodic trends in the Kepler data was 0.001 to 20.0 days. Pulsation periods were searched for in the 0.001 to ~1.0 day range, whereas above ~1.0 day was the region where we searched for any other type of variability (i.e., companions). The low end of the pulsation search range goes much lower than the normal range of pulsations, but it was extended in order to be as inclusive as possible. Calculations of likely periods were also conducted using the Exoplanet Archive periodogram tool, which yielded the same results as the above analysis. Eighteen of the twenty-three DA white dwarfs have Kepler light curves in the archive, and the data are discussed in §4. Two of the DA white dwarfs with Kepler data show variability, but neither period is within the range of pulsations (see discussions in §4.2.3 and §4.2.9). The remaining sixteen WDs with Kepler light curves did not reveal any statistically significant (>3σ) periods within the normal range of WD pulsations.
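To make the procedure concrete, the following Python sketch runs the same style of Lomb-Scargle search on a synthetic light curve, using astropy's LombScargle in place of the original IDL routine. The injected 4.89 d signal simply mimics the long-period case discussed in §4.2.3; none of the numbers are actual Kepler data.

import numpy as np
from astropy.timeseries import LombScargle

# Synthetic stand-in for a Kepler PDCSAP light curve: one ~90 d quarter
# of 30 min (long cadence) data with a weak 4.89 d modulation plus noise.
rng = np.random.default_rng(1)
time = np.arange(0.0, 90.0, 30.0 / 1440.0)            # days
flux = 1.0 + 1e-3 * np.sin(2.0 * np.pi * time / 4.89)
flux += 2e-4 * rng.standard_normal(time.size)

# Search the same range used in the paper: periods of 0.001 to 20 days.
ls = LombScargle(time, flux)
freq, power = ls.autopower(minimum_frequency=1.0 / 20.0,
                           maximum_frequency=1.0 / 0.001)
best_period = 1.0 / freq[np.argmax(power)]
print(f"strongest period: {best_period:.2f} d")       # ~4.89 d here

On the real PDCSAP light curves, it is the absence of peaks above the 3σ significance threshold in the 0.001 to ~1.0 day range that rules out pulsation.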
Figure 1 shows the empirical instability strip in the T_eff-log g plane, plotted with known ZZ Cetis (open circles) and known photometrically stable DA white dwarfs (filled circles). This figure is a reproduction of Fig. 6 from Gianninas et al. (2005), with the addition of the DA white dwarfs from this paper (colored shapes).

[Fig. 1 caption: Reproduction of Fig. 6 of Gianninas et al. (2005), with the empirical (dashed line) and the theoretical (solid line) instability strips, the latter calculated from Fontaine et al. (2003); known photometrically stable WDs (filled circles) from Gianninas et al. (2005); and known ZZ Cetis (open circles) from Bergeron et al. (2004). DA white dwarfs from this paper are displayed as colored shapes. Only 13 of the 23 DA white dwarfs are shown in this figure; the remaining 10 are well outside the bounds of the instability strip. Yellow squares and green stars indicate the eleven instability strip candidates; the green stars are the four WDs that lie close enough to the instability strip that their status as candidates is more likely than that of the remaining seven (yellow squares). Some of the instability strip candidates have been shifted in log g by 0.02 or 0.04 dex, and one in T_eff by 100 K, to disentangle the error bars. The red triangles are two of the twelve non-instability-strip WDs that lie within this range of effective temperature and log g.]

Results

In the following subsections, each DA white dwarf and its modeled parameters are discussed individually (a summary is provided in Table 2). Eleven of the twenty-three DA white dwarfs are determined to be instability strip candidates based on their modeled spectroscopic parameters; seven of these WDs have Kepler photometry. Of the remaining twelve WDs, eleven fall well outside of the instability strip and can be added to the list of known photometrically stable WDs. Ten of these WDs are confirmed as non-pulsators by Kepler photometry, and one does not have Kepler photometry but is not predicted to be an instability strip candidate. The remaining WD, KIC 11604781, shows a clear periodicity near 5 days, attributed to an orbital variation; previous studies of this WD also conclude that it is probably a binary (see §4.2.3). We discuss the photometric data in this section and the confirmation, where possible, of our hypotheses about the pulsation nature of each WD from the spectral modeling described in §3.1. Reduced χ² (χ²_ν) contour plots were created in order to determine the parameters' confidence intervals and statistical errors. One, two, and three σ contours, which correspond to χ²_min + 2.30, χ²_min + 6.18, and χ²_min + 11.8, respectively (Press et al. 2007), are shown on each contour plot for each DA white dwarf, indicating the confidence intervals of the model fits (an illustrative sketch of this contour construction is given at the end of this subsection). The statistical errors of each best fit correspond to the 1σ contour, where the upper and lower bounds of the contour at this level were determined for both the gravity and the effective temperature. Hβ has a large effect on the errors as well because it is an upper Balmer series line, where the log g effect is stronger than in the later lines. The 4-meter data appear to have better S/N, even though the resolution is lower than that of the Hale telescope; this is most likely due to observing conditions or greater integration time. The ZZ Ceti empirical instability strip is shown on each contour map to illustrate its position with respect to the T_eff-log g confidence intervals. If the 1σ contour overlaps with the instability strip at any point, the WD is classified as an instability strip candidate. Seven of the twenty-three DA white dwarfs in this sample have been previously modeled in the literature. Previous parameter estimates are compared with parameters obtained from the present work in Table 3. Five of these seven WDs have previous spectral observations and analysis by Østensen et al. (2010, 2011). These previous observations were conducted with multiple ground-based telescopes using low-resolution spectra, with the resolving power, R, ranging from 550 to 1600 (see specifics in Østensen et al. 2010, 2011). This resolving power is lower than that which we achieved with all three of the spectrographs used in this work. Discrepancies arise in the determinations of effective temperature and gravity for these five WDs and are discussed in the appropriate sections below.
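As a concrete illustration of the contour construction referenced above, the following Python sketch draws 1σ, 2σ, and 3σ confidence contours from a grid of χ² values. The paraboloid χ² surface here is synthetic, standing in for the per-model fit statistics, and the grid axes follow the fine, cool part of the model grid of §3.1.

import numpy as np
import matplotlib.pyplot as plt

# Model grid axes (the fine part of the grid: 250 K and 0.25 dex steps).
teff = np.arange(6000.0, 20001.0, 250.0)   # K
logg = np.arange(7.00, 9.001, 0.25)        # dex
T, G = np.meshgrid(teff, logg)

# Synthetic chi-squared surface with a minimum near 12 kK, log g = 8.0;
# in the real analysis each grid point holds the total chi^2 of that
# model's fit to the Balmer lines.
chi2 = ((T - 12000.0) / 1500.0) ** 2 + ((G - 8.0) / 0.3) ** 2

# Joint 1, 2, and 3 sigma regions for two fitted parameters correspond
# to chi2_min + 2.30, + 6.18, and + 11.8 (Press et al. 2007).
levels = chi2.min() + np.array([2.30, 6.18, 11.8])

fig, ax = plt.subplots()
ax.contour(T / 1000.0, G, chi2, levels=levels, colors="black")
ax.plot(12.0, 8.0, "k+", markersize=10)    # best-fit model
ax.set_xlabel("T_eff [kK]")
ax.set_ylabel("log g [dex]")
plt.show()

The 1σ level of such a plot is what defines the quoted asymmetric errors: the extreme T_eff and log g values reached by that contour give the upper and lower bounds.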
We note that the resolution of the Bok 2.3-m observations is lower than that of the Hale observations; therefore, the model fits are not as good.

KIC 10709534

KIC 10709534 is a new DA white dwarf in the Kepler field with T_eff = 10.5 +3.4/−1.0 kK and log g = 8.50 +0.43/−0.60 dex. Fig. 2a shows the best fit model to each Balmer line (thick black line). Fig. 2b shows the location of the best fit (cross) in T_eff-log g space with contours of 1, 2, and 3σ. The 1σ error contour is shown in T_eff-log g space to demonstrate that, taking into account the errors on both the effective temperature and surface gravity, KIC 10709534 has a high probability of being a pulsator. Unfortunately, Kepler photometric data for KIC 10709534 was unavailable, so the pulsation nature could not be confirmed. KIC 10709534's contour plot shows a double minimum, allowing the effective temperature and surface gravity of this WD to be either the values stated above or T_eff = 18.0 +3.5/−2.6 kK and log g = 7.75 +0.41/−0.59 dex. If the modeled parameters were in fact the latter, KIC 10709534 would not be an instability strip candidate. This double-minimum degeneracy shows that there are mathematically two distinct parameter subsets that provide statistically good fits to the spectrum. The minimum χ² of each 1σ contour in T_eff-log g space was, however, examined by eye, and the primary model value with T_eff = 10.5 kK was determined to be a better fit to the Balmer line cores (see comparison of fits in Fig. 2a). This is consistent with the marginally lower χ² value found for this minimum, and suggests this value is the most likely estimate of the two. In addition, B − V and U − B colors were used to determine an approximate effective temperature, which is most consistent with T_eff = 10.5 kK.

KIC 10777440 (PMI18486+4811)

KIC 10777440 is a new DA white dwarf in the Kepler field. No previous WD designations or classifications have been found in the literature for this object. KIC 10777440 has modeled parameters of T_eff = 14.8 +1.9/−2.4 kK and log g = 8.25 +0.33/−0.35 dex. Figure 3a displays the best fit model to each of the Balmer lines. Figure 3b shows a χ² contour map with the 1σ to 3σ contours; this WD has a moderate probability of being within the instability strip. The measured effective temperature of KIC 10777440 does appear to be a little too hot for it to be a pulsator, but it is an instability strip candidate within its 1σ errors. As in the previous case, Kepler data is not available to confirm or deny this hypothesis.

KIC 7879431 (PMI19085+4338)

KIC 7879431 is a new DA white dwarf in the Kepler field. Best fit parameters of T_eff = 14.5 +2.2/−2.3 kK and log g = 8.25 +0.32/−0.42 dex were determined. Figure 6a displays the best fit model to the Balmer lines of KIC 7879431's optical spectrum. Figure 6b shows the χ² contour map in T_eff-log g space. KIC 7879431 appears more likely to have an effective temperature too hot to be located within the instability strip; however, taking into account the 1σ errors, this WD remains a plausible instability strip candidate. Again, no photometric data is available, so we cannot confirm or deny this hypothesis.

KIC 9082980 (PMI19179+4524)

KIC 9082980 is a new DA white dwarf in the Kepler field. Modeled parameters provide T_eff = 14.0 +1.9/−1.9 kK and log g = 8.00 +0.42/−0.27 dex. Figure 7a displays the best fit model to the Balmer lines of the optical spectrum of KIC 9082980.
Figure 7b shows the χ² contour map in T_eff-log g space, where KIC 9082980 falls just outside of the instability strip. The modeled temperature and gravity place KIC 9082980 in a region hotter than the blue edge of the instability strip, but the possibility of it being within the strip cannot be ruled out when the 1σ errors are included. From spectroscopic analysis, KIC 9082980 is an instability strip candidate. Kepler data is not available to confirm or deny this hypothesis.

KIC 2158770 (PMI19245+3734)

KIC 2158770 is a known DA white dwarf in the Kepler field. Modeled parameters for KIC 2158770 are T_eff = 12.5 +3.3/−1.9 kK and log g = 8.00 +0.46/−0.50 dex. Figure 8a shows the best fit model for each of the Balmer lines in the optical spectrum of KIC 2158770. The χ² contour map shown in Fig. 8b was used to calculate errors and confidence intervals for the model-fitted parameters. KIC 2158770 is an instability strip candidate because its measurements of effective temperature and gravity, taking the 1σ errors into account, can fall within the instability strip. A recent study by Kleinman et al. (2013) used SDSS spectra and measured T_eff = 9964 ± 41 K and log g = 8.01 ± 0.054 dex. This measurement falls at the edge of the 3σ χ² contour of the modeled parameters of this paper. SDSS spectra have lower resolution than the Palomar spectra, with R = 1800. Kleinman et al. (2013) used a similar set of models and grids, which they fit using an automated process; the fits were checked by eye as well. Both the fit from this work and the closest fit to the literature are shown in the stacked Balmer line and contour plots in Fig. 8. The best fit model from this paper fits all lines but Hβ better by eye than the fit from Kleinman et al. (2013), which suggests that our estimate is more reliable. The periodogram produced from the Kepler data does not show any statistically significant (>3σ) periods within the normal range of pulsation periods. Therefore, this WD is not confirmed to be a pulsator. However, its proximity to the instability strip potentially makes the star a useful constraint on the strip's blue edge.

KIC 6672883 (PMI19010+4208)

KIC 6672883 is a new DA white dwarf in the Kepler field with modeled parameters of T_eff = 16.5 +6.6/−6.8 kK and log g = 7.75 +1.14/−0.75 dex. Fig. 9a shows the best fit model for the fitted Balmer lines. Fig. 9b displays the χ² contour map in T_eff-log g space, showing large errors in the measurement. Taking the large 1σ error bars on the measurements into account potentially places KIC 6672883 within the instability strip, so it is an instability strip candidate. Kepler data was available for this target, but did not show any statistically significant periodicity (>3σ) within the period range of pulsations. We thus confirm KIC 6672883 to be a non-pulsating DA white dwarf. A better spectrum and improved parameters will be necessary to determine if the star can put any useful constraint on the edge of the instability strip.

KIC 10198116 (PMI19099+4717)

KIC 10198116 is a known DA white dwarf in the Kepler field with best fit modeled parameters of T_eff = 13.5 +3.4/−2.1 kK and log g = 8.00 +0.35/−0.31 dex. The nominal temperature appears to be too hot for the star to be in the instability strip, but the errors in the temperature and gravity measurements still make this a possibility, so we identify KIC 10198116 as an instability strip candidate. Fig. 10a shows the best fit model plotted on the Balmer lines. Fig.
10b shows the χ² contour map in T_eff-log g space of the best fit model and 1, 2, and 3σ confidence intervals. KIC 10198116 has been studied by Østensen et al. (2011) and Maoz et al. (2015). The modeled parameters from Østensen et al. (2011) are T_eff = 14.2(5) kK and log g = 7.9(3) dex, which are within the 1σ errors of the measurements made in this paper. The comparison of the literature fit and the fit from this work is shown in both plots of Fig. 10. Østensen et al. (2011) and Maoz et al. (2015) do not predict KIC 10198116 to be a pulsator. The Kepler photometric data does not show any statistically significant periodicity (>3σ) within the normal period range of pulsations. KIC 10198116 is confirmed to be a non-pulsator, and this may help to put a constraint on the blue edge of the instability strip.

KIC 11509531 (PMI19320+4925)

KIC 11509531 is a new DA white dwarf in the Kepler field with best fit modeled parameters of T_eff = 9.25 +5.36/−1.52 kK and log g = 8.25 +0.75/−1.25 dex. Fig. 11a shows the Balmer lines overplotted with the best fit model. Fig. 11b shows the χ² contour map in T_eff-log g space with confidence intervals. It can be seen that there are two separate 1σ contours, each with its own minimum χ² value. The lowest χ² value is actually for the parameters T_eff = 25.0 +5.0/−8.6 kK and log g = 7.00 +1.45/−0.00 dex, but both this and the previously mentioned fit were analyzed by eye, and it was determined that the lower effective temperature value provides a better fit to the line cores (comparison shown in both panels of Fig. 11). The B − V and U − B colors are also not blue enough to be consistent with a 25.0 kK source. For these reasons, we adopt the lower T_eff value. This modeled effective temperature is on the low side for the star to be in the instability strip, but the 1σ error makes this a possibility, which leaves KIC 11509531 as an instability strip candidate. Kepler data was available for this object, but no statistically significant periodicity (>3σ) was found within the normal range of periods for pulsations. A few peaks were found above 3σ significance at longer periods (periods corresponding to variability rather than pulsation), but when the data was phased to each of these periods, there was no clear periodic trend. KIC 11509531 is confirmed to be a non-pulsating DA white dwarf, and one which might place a constraint on the red edge of the instability strip. Better spectra would again be needed in this case.

KIC 10213347 (PMI19362+4714)

KIC 10213347 is a new DA white dwarf in the Kepler field. The modeled parameters for the best fit are T_eff = 10.3 +17.6/−2.0 kK and log g = 8.00 +1.00/−1.00 dex. Fig. 12a displays the best fit model plotted on the Balmer lines of the observed spectrum. Fig. 12b shows the χ² contour map with confidence intervals in T_eff-log g space. The 1σ errors are very large, but the modeled effective temperature does lie close to the instability strip and can lie within it when the errors are taken into account. We therefore classify KIC 10213347 as an instability strip candidate. Kepler data, however, does not show any statistically significant periodicity (>3σ) within the range of pulsations. A single peak is seen with a period corresponding to variability, but when the data is phased to this long period, no plausible periodic trend is apparent (a minimal sketch of this phase-folding check follows below). Therefore, KIC 10213347 is confirmed to be a non-pulsator, but could potentially constrain the red edge of the instability strip.
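The phase-folding check applied to KIC 11509531 and KIC 10213347 above can be illustrated with a short Python sketch. The helper name, the bin count, and the example period are assumptions for illustration, not the actual analysis code.

import numpy as np

def phase_fold(time, flux, period, nbins=25):
    """Phase-fold a light curve at a candidate period and bin it.

    A coherent trend in the binned curve supports real variability;
    a flat, noisy binned curve argues against it.
    """
    phase = (time / period) % 1.0
    edges = np.linspace(0.0, 1.0, nbins + 1)
    idx = np.digitize(phase, edges) - 1
    centers = 0.5 * (edges[:-1] + edges[1:])
    binned = np.array([flux[idx == i].mean() if np.any(idx == i)
                       else np.nan for i in range(nbins)])
    return centers, binned

# Hypothetical usage with a candidate ~4.89 d periodogram peak:
# centers, binned = phase_fold(time, flux, 4.89)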
KIC 8244398 (PMI19423+4407)

KIC 8244398 is a new DA white dwarf in the Kepler field. It has modeled parameters of T_eff = 13.5 +7.3/−3.5 kK and log g = 8.00 +0.76/−0.90 dex. Fig. 13a shows the best fit model overplotted on the Balmer lines of the observed spectrum. Fig. 13b shows the χ² contour map in T_eff-log g space with 1, 2, and 3σ confidence intervals. The model temperature is on the hot side, but the instability strip is within the 1σ error bars, which are relatively large in this case. When these large errors are taken into account, KIC 8244398 is an instability strip candidate. The Kepler photometric data produces a periodogram that does not show any possible periods with greater than 3σ significance within the range of pulsations. We therefore claim KIC 8244398 to be a non-pulsator, but it can help to constrain the blue side of the instability strip.

KIC 9228724 (PMI19430+4538)

KIC 9228724 is a new DA white dwarf in the Kepler field. The best fit modeled parameters are T_eff = 12.8 +6.8/−2.3 kK and log g = 8.25 +0.49/−0.88 dex. Fig. 14a shows the best fit model plotted on the observed spectrum's Balmer lines. Fig. 14b displays the χ² contour map in T_eff-log g space. It is apparent that KIC 9228724 lies very close to the blue edge of the instability strip and can lie within the strip when the 1σ errors are taken into account. KIC 9228724 is an instability strip candidate. The Kepler data do not show any statistically significant periodicity (>3σ) within the likely range of periods for pulsating DA white dwarfs. KIC 9228724 is confirmed to be a non-pulsator. Again, this star may place a constraint on the blue edge of the instability strip.

KIC 10649118 (PMI18553+4755)

KIC 10649118 is a new DA white dwarf in the Kepler field. Modeled parameters produced T_eff = 8.50 +0.82/−0.69 kK and log g = 8.00 +0.64/−0.77 dex. Figure 15a shows the best fit model to each of the Balmer lines. Figure 15b displays the 1-3σ contours of the best fit parameters for log g and T_eff. The χ² contour map in T_eff-log g space shows two 1σ minima. The other minimum has T_eff = 27.0 +3.0/−5.5 kK and log g = 7.00 +0.57/−0.00 dex. Both minima were examined by eye, and the minimum with T_eff = 8.50 kK was visually determined to have the better overall fit of the line cores, even though T_eff = 27.0 kK had the lower χ² value. B − V and U − B colors also suggest an effective temperature consistent with 8.50 kK. The two fits can be seen on the Balmer line and contour plots in Fig. 15. Neither of the above measurements falls within the instability strip (shown in Fig. 15b); therefore, this WD is not an instability strip candidate. There is no Kepler photometry available to confirm or deny this hypothesis.

KIC 4242459 (PMI19002+3922)

KIC 4242459 is confirmed to be a DA white dwarf, and modeled parameters of T_eff = 9.50 +0.22/−0.13 kK and log g = 8.25 +0.09/−0.05 dex were determined. KIC 4242459 was studied in the optical/IR by Zuckerman et al. (2003), who measured T_eff = 9470 K with a fixed value of log g = 8.00 using photometry; this closely matches our measurements and is within our 1σ error contour. Figure 16a shows the best fit model from this work and the closest match to the best fit model from the literature. Figure 16b shows the χ² contour map of KIC 4242459 in the T_eff-log g plane with confidence intervals. Modeled parameters for KIC 4242459 place it below the red edge of the instability strip, even when the 1σ errors are taken into consideration.
This WD is not an instability strip candidate, and photometric data confirms that there is no statistically significant variability (>3σ) on pulsation timescales.

KIC 11604781

KIC 11604781 is a known DA white dwarf with a companion of unknown spectral type. Our best fit modeled parameters are T_eff = 9.50 +0.55/−0.26 kK and log g = 8.50 +0.32/−0.10 dex. Østensen et al. (2011) found this object to be a DA5 with T_eff = 9.1(5) kK and log g = 8.3(3), which is within 3σ of our result. Figure 17a shows the best fit model to the spectrum of KIC 11604781. Figure 17b shows the χ² contour map in T_eff-log g space, with 1, 2, and 3σ contours marked. The fits from this work and the closest match to the fit of Østensen et al. (2011) are compared in both Figs. 17a and 17b. From its modeled parameters, KIC 11604781 is not an instability strip candidate; however, we do detect variability in its light curve. A period of 4.89 days is extracted from the photometric data, which is well outside of the normal period range for pulsations. Figure 4 shows a section of the Kepler light curve, the Lomb-Scargle periodogram of the entire light curve, and the phased and binned light curve for KIC 11604781. A representative point with a typical error bar is located in the top right corner of the light curve. KIC 11604781 has been studied by a few others (Østensen et al. 2011; Maoz et al. 2015), who found the same periodicity (4.89 ± 0.02 days) without a concrete explanation as to its cause. Explanations of possible companions and of the cause of this period are provided in Maoz et al. (2015). The light curve variation is most likely orbital and is determined not to be due to pulsations. No statistically significant (greater than 3σ) periodicity was found within the range of DA white dwarf pulsations.

KIC 8682822 (PMI19173+4452)

KIC 8682822 is a known DA white dwarf and was studied by both Østensen et al. (2010) and Maoz et al. (2015). The best fit modeled parameters from this work yield T_eff = 19.5 +1.3/−1.1 kK and log g = 8.75 +0.14/−0.10 dex. Østensen et al. (2010) studied this WD as a compact pulsator candidate and measured T_eff = 23.1 kK and log g = 8.5 dex, which is hotter than our modeled effective temperature and just outside the 3σ contour of our result. Figure 18a shows the best fit model of T_eff and log g for the Balmer lines in the optical spectrum of KIC 8682822. Figure 18b shows the χ² contour map of KIC 8682822 in T_eff-log g space; it falls outside of the instability strip. The fits from this work and the closest match to the Østensen et al. (2010) fit are compared in both Figs. 18a and 18b. No statistically significant period (greater than 3σ significance) is found in the Kepler data, which agrees with KIC 8682822's modeled effective temperature being too hot to fall within the instability strip. Maoz et al. (2015) measured a mass of ~0.8-1.2 M⊙, with extremely small amplitude variations leading to a period of 4.7 ± 0.3 d. Photometric data confirms that there are no variations on the order of pulsations, and we do not recover the 4.7 day period claimed by Maoz et al. (2015). KIC 8682822 is confirmed to be a non-pulsating DA white dwarf.

KIC 7129927 (PMI19409+4240)

KIC 7129927 is a known DA white dwarf. We determine T_eff = 9.50 +0.45/−0.31 kK and log g = 8.25 +0.30/−0.15 dex. Østensen et al.
(2011) estimate T_eff and log g from observations in two separate years: T_eff = 23314 ± 212 K and log g = 7.280 ± 0.050 (year 1), and T_eff = 24191 ± 332 K and log g = 7.120 ± 0.060 (year 2). Østensen et al. (2011) claim this object is a composite DA2+DA3 white dwarf binary system. Figure 19a shows the best fit model to the Balmer lines of KIC 7129927 and includes the closest fit to both of the Østensen et al. (2011) fits for comparison. Figure 19b shows the χ² contour map in T_eff-log g space of KIC 7129927, which falls outside of the instability strip. The fits from Østensen et al. (2011) are noted on the contour map. Our measurements do not match well with those of Østensen et al. (2011). Balmer line best fit diagrams were examined for the models in our grid closest to the T_eff and log g from Østensen et al. (2011): T_eff = 23.0 kK, log g = 7.25 dex and T_eff = 24.0 kK, log g = 7.00 dex. The best fit we obtained fits the cores much better. It is apparent from Fig. 19b, though, that there is a 2σ and 3σ contour around a central T_eff = 21.5 kK and log g = 7.33 dex; the measurements from Østensen et al. (2011) fall into this secondary 3σ contour of our χ² plot. B − V and U − B colors suggest an effective temperature consistent with the measurements from this work. The discrepancy in the model fits here could result from the fact that KIC 7129927 might be a composite system, in which case our single-WD model fit is not appropriate. The modeled temperature estimate from this work places KIC 7129927 just below the red edge of the instability strip, but this WD is not an instability strip candidate: it does not coincide with the instability strip when the 1σ errors are considered. Kepler data confirms there is no detectable, statistically significant (>3σ) variability within the range of pulsations in the photometric data, as noted by Maoz et al. (2015) as well.

KIC 6042560 (Blue18)

KIC 6042560 is a new DA white dwarf in the Kepler field. The modeled parameters for this WD are T_eff = 6.75 +0.76/−0.75 kK and log g = 7.75 +1.11/−0.75 dex. Fig. 20a displays the best fit model overplotted on the five measured Balmer lines. Fig. 20b shows the χ² contour map in T_eff-log g space with the best fit marked. It is clear from this figure that KIC 6042560 is far too cool to be located within the instability strip; therefore, it is not an instability strip candidate. Photometric data from Kepler shows no statistically significant periodicity (>3σ) within the range of pulsations. KIC 6042560 is confirmed to be a non-pulsating DA white dwarf.

KIC 7346018 (Blue1903)

KIC 7346018 is a new DA white dwarf in the Kepler field. The best fit modeled parameters are T_eff = 30.0 +0.0/−3.5 kK and log g = 8.50 +0.50/−0.82 dex. It is noted here that the positive error of 0.0 kK is not a true error but the upper limit of the model grid; the effective temperature may thus be underestimated, as 30.0 kK is the absolute upper bound of the DA white dwarf models. Fig. 21a displays the best model fit overplotted on the observed spectral Balmer lines. Fig. 21b shows the χ² contour map with the 1, 2, and 3σ confidence intervals in T_eff-log g space. It is clear that KIC 7346018 lies far from the instability strip in effective temperature, and it is not an instability strip candidate. We do note, however, that there is a possibility this white dwarf is a hot pulsating DA white dwarf (or DAV), a new class of DAVs classified by Kurtz et al. (2013), which have temperatures around 30 kK.
The Kepler data available for KIC 7346018, however, does not show any periodicity with greater than 3σ significance within the period range of pulsations. This DA white dwarf is confirmed as a non-pulsator.

KIC 8612751 (PMI19060+4446)

KIC 8612751 is a new DA white dwarf in the Kepler field. The best fit model provided measurements of T_eff = 7.50 +1.35/−1.08 kK and log g = 8.00 +1.00/−1.00 dex. These parameters place KIC 8612751 below the instability strip in temperature, and even taking errors into account, it is not an instability strip candidate. Fig. 22a shows the best fit model plotted on the observed spectrum for all of the Balmer lines. Fig. 22b shows the χ² contour map with the best fit marked in T_eff-log g space. A second set of 1, 2, and 3σ contours can be seen at the upper left corner of the χ² plot. Both 1σ minima were examined by eye, and the fit of T_eff = 30.0 kK and log g = 7.00 dex was determined not to be as good a fit as the previously noted values (see Fig. 22 for comparison). B − V and U − B colors also provide an effective temperature that agrees with T_eff = 7.50 kK. Kepler data are available for this white dwarf, but no periodic modulation with greater than 3σ significance was found from the period analysis. KIC 8612751 is confirmed to be a non-pulsating DA white dwarf.

KIC 4829241 (HSH08)

KIC 4829241 is a known DA white dwarf in the Kepler field with modeled parameters of T_eff = 19.5 +1.5/−1.6 kK and log g = 8.00 +0.29/−0.24 dex. Several studies of KIC 4829241 have been completed, and two of these have measured effective temperatures and gravities. Østensen et al. (2011) measured an effective temperature of 19.4(5) kK and a gravity of 7.8(3) dex. Zhao et al. (2013) found a modeled effective temperature of 20376 ± 345 K and a gravity of 7.93 ± 0.06 dex. Both of the previously determined values are within the 1σ errors of the measurements from this paper. Fig. 23a shows the best fit model plotted on the observed spectrum's Balmer lines. Fig. 23b displays the χ² contour map in T_eff-log g space with 1-3σ confidence intervals. Both plots in Fig. 23 show the comparison of the closest model to the literature fits with the fit from this work; as the model fit from this work and the fit from Østensen et al. (2011) are essentially the same, the alternative fit shown represents the model from Zhao et al. (2013). KIC 4829241 is not an instability strip candidate. Kepler data show no significant periods (greater than 3σ significance) within the range of pulsation periods. A long-period variation of 16.61 days was found, which is well above 3σ significance (Fig. 5); however, taking the errors into account, the data could very well have no variation. Neither Østensen et al. (2011) nor Zhao et al. (2013) found a pulsation period for this white dwarf. KIC 4829241 is confirmed to be a non-pulsating DA white dwarf.

KIC 6212123 (HSH24)

KIC 6212123 is a new DA white dwarf in the Kepler field. Best fit modeled parameters yield T_eff = 6.25 +0.59/−0.25 kK and log g = 8.50 +0.50/−0.82 dex. Fig. 24a displays the best fit model overplotted on the observed spectrum of the measured Balmer lines. Fig. 24b shows the χ² contour map in T_eff-log g space with confidence intervals. The measured effective temperature of KIC 6212123 is too cool to fall within the instability strip, even taking the 1σ errors into account. KIC 6212123 is not an instability strip candidate. Kepler data shows no statistically significant periodicity (>3σ) in the range of pulsations for this white dwarf.
KIC 6212123 is confirmed to be a non-pulsating DA white dwarf.

KIC 3354599 (HSH32)

KIC 3354599 is a new DA white dwarf in the Kepler field with modeled parameters of T_eff = 6.00 +0.92/−0.00 kK and log g = 8.50 +0.50/−1.09 dex. Again, the 0.0 kK error is due to the limit of the model grid. Fig. 25a shows the best fit model plotted on top of the observed spectrum of all the measured Balmer lines (Hα to Hξ). Fig. 25b shows the χ² contour map in T_eff-log g space with the best fit model and confidence intervals marked. The modeled effective temperature of KIC 3354599 is too cool to be within the instability strip, even taking the 1σ errors into account; therefore, it is not an instability strip candidate. Available Kepler photometric data does not show any statistically significant (greater than 3σ) periodicity within the normal range of pulsations. KIC 3354599 is confirmed to be a non-pulsating DA white dwarf.

KIC 10149875 (HSH36)

KIC 10149875 is a new DA white dwarf in the Kepler field. The best fit modeled parameters are T_eff = 30.0 +0.0/−0.8 kK and log g = 9.00 +0.00/−0.29 dex. Both the 0.0 kK and 0.00 dex errors mark the upper bounds of the model grid. Fig. 26a shows the best fit model plotted on the observed spectrum's measured Balmer lines. Fig. 26b shows the χ² contour map in T_eff-log g space with confidence intervals. The minimum χ² values within each 1σ contour were examined by eye, and the best fit was determined to be the fit with T_eff = 30.0 kK, which also had the lowest χ² value. The alternative χ² minimum fit was T_eff = 7.25 +0.49/−0.62 kK and log g = 9.00 +0.00/−0.62 dex. B − V and U − B colors are most consistent with the lower effective temperature value of 9.00 kK. The comparison of the two fits is shown in both plots in Fig. 26. The effective temperature of KIC 10149875 is too hot to be within the instability strip; therefore, it is not an instability strip candidate. This white dwarf is also a possible hot DAV (Kurtz et al. 2013). Kepler photometric data shows no statistically significant (>3σ) periods within the range of pulsations. KIC 10149875 is confirmed to be a non-pulsating DA white dwarf.

Conclusion

We presented modeled effective temperatures and gravities for 23 DA white dwarfs in the Kepler field using ground-based spectroscopic observations. Seven of the twenty-three WDs had been previously studied and are here confirmed to be DA white dwarfs; the remaining 16 are newly classified DA white dwarfs in the Kepler field. Eighteen of the twenty-three DA white dwarfs were supplemented by Kepler photometric data, making it possible to search for modulations from white dwarf pulsation modes. Eleven WDs are found from spectroscopic measurements to be instability strip candidates when their 1σ errors are taken into account, and twelve were determined to be non-pulsators (see Fig. 1). Out of the 11 instability strip candidates, 7 had photometric data, and none of these were seen to have statistically significant periods within the normal period range of DA white dwarf pulsations. Of the 12 non-pulsators, 10 had photometric data from Kepler and were determined to be photometrically stable from both spectroscopic and photometric analysis. These 10 photometrically stable DA white dwarfs can be used as photometric calibrators in the Kepler field. One more non-pulsating DA white dwarf, KIC 11604781, had Kepler data, but has a companion, so it is not photometrically stable. We do not confirm any pulsators in this sample of 23 DA white dwarfs.
2016-10-16T18:15:31.000Z
2016-10-16T00:00:00.000
{ "year": 2017, "sha1": "40039889cfd2bd829aba9acd784780d3a2db785a", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/464/3/3464/18518661/stw2490.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "40039889cfd2bd829aba9acd784780d3a2db785a", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Physics" ] }