text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Molecular Dynamics Study of Pull-In Instability of Nano-Switches
Capacitive nano-switches have been of great interest as replacements for conventional semiconductor switches. Accurate determination of the pull-in voltage is critical in the design process. In the present investigation, the pull-in instability of nano-switches made of two parallel plates subjected to an electrostatic force is studied. For this purpose, two parallel rectangular nanoplates with opposite charges are modeled with the molecular dynamics (MD) technique. Different initial gaps between the nanoplates and their effect on the pull-in phenomenon are studied, and different values of the geometrical and physical parameters are taken into account to evaluate the pull-in voltages. Molecular dynamics simulations, as an atomic-interaction approach, are employed to model the nano-switches and to study the pull-in instability while accounting for atomic interactions and surface tension. The boundary conditions and the van der Waals force are also considered as important parameters, and their effects on the pull-in voltage values are investigated.
Introduction
The potential difference across the conductors develops an electric field across the dielectric, which attracts the positive plate toward the ground plane. If the applied voltage exceeds the pull-in limit, the upper plate snaps down. Accurate determination of the pull-in voltage is therefore critical in the design of capacitive switches, because of the electromechanical coupling effect and the nonlinearity of the electrostatic force [1] [2].
To operate, devices such as capacitors need to be biased with a DC voltage in order to form a surface charge [3] [4].
Aspects such as the drive mode, temperature dependence and dielectric charging have been analyzed and their effects on the pull-in voltage evaluated [5]. There are numerous investigations based on analytical modeling of the pull-in voltage for beams or diaphragms, with the capacitance modified to account for the fringing field and with an equivalent spring constant, to meet the needs of microelectromechanical systems (MEMS) processes during both process development and manufacturing [6]- [11]. Distributed modeling based on well-known beam and plate theories [12] has also been developed. Moreover, analytical approximate solutions for pull-in voltages have been obtained via energy methods [13], the homotopy perturbation method [14], modified couple stress theory [15], and with elastic boundaries [16]. Parallel-plate capacitors have been modeled with equations of motion based on classical elasticity theory and the orthodox theory of Coulomb blockade [17]. Researchers have shown that there is a close match between analytical solutions and numerical simulations of nano-devices. For instance, in [18] the accuracy of a continuum model is compared with atomistic simulation to compute the pull-in voltage of nano-cantilever switches. Atomistic simulation is an applicable approach to studying the properties of nano systems [19]- [21]. Hoshyarmanesh et al. applied molecular dynamics to investigate surface molecular interactions in micro-cantilever biosensors [22]. Molecular dynamics simulation of the pull-in phenomenon has also been carried out for carbon nanotubes to study the effects of Stone-Wales defects, different geometries and boundary conditions on the pull-in charges [23] [24].
Different methods such as molecular mechanics and atomistic modeling are employed to investigate the mechanical behavior of nanostructures. Ansari and Sahmani [25] studied the biaxial buckling behavior of single-layered graphene sheets on the basis of two approaches, nonlocal continuum elasticity and MD simulation. Bahrami et al. used molecular dynamics to simulate the interaction of carbon nanotubes with an external flow [26].
Additionally, various investigations have been conducted on the size effect on the pull-in phenomenon of micro- and nano-switches. Mohammadi et al. [27] investigated the size effect on the pull-in instability of hydrostatically and electrostatically actuated circular microplates. The research revealed that variation of the dimensionless internal length-scale parameter and of the hydrostatic pressure leads to different values of the pull-in voltage. Mousavi et al. [28] studied the small-scale effect on the pull-in instability of nano-switches subjected to electrostatic and intermolecular forces. The problem, formulated with the intermolecular and electrostatic attraction forces using Eringen's nonlocal elasticity theory, is solved numerically by the differential quadrature method. Wang and Wang [29] studied the pull-in instability of a nano-switch under electrostatic and intermolecular Casimir forces with consideration of the surface energy. The analysis is based on the geometrically nonlinear Euler-Bernoulli beam theory with consideration of the surface energy. The research proved that the effect of the intermolecular Casimir force on the pull-in voltage weakens as the initial gap increases.
Many problems in the natural sciences and engineering involve sources of uncertainty. There are various numerical and analytical methods to solve static uncertain systems, such as fuzzy finite element analysis [30] and other trial-and-error approaches; however, these are only approximate solutions and their lack of accuracy is observable. Computer simulations are the most common approach to studying problems in uncertainty quantification. Simulation-based methods like Monte Carlo simulations are among the basic probabilistic approaches for uncertainty propagation. Plates uniformly loaded with an electrical potential and fixed at four edges and at two edges are studied here, and the nano-switch with the plate fixed at four sides is also solved, as an uncertain static system, through the molecular dynamics technique.
In the current investigation, the pull-in instability of electrostatically actuated nano-switches is studied, with the small-scale effect taken into account by conducting molecular dynamics simulations. To study the pull-in phenomenon, nano-switches with different initial gaps and different boundary conditions are simulated and the pull-in voltages are evaluated. In addition, to investigate the effect of the van der Waals force at the small scale, the simulation is run both with and without the van der Waals force, and the results are compared.
Molecular Dynamics Simulation
Two approaches have been used to obtain the maximum deflection of fixed thin rectangular plates under uniform load: the double cosine series and the superposition method as a generalization of Hencky's solution [31]. The problem of the uniformly loaded rectangular plate fixed at all sides was solved by Hencky (1913) and independently by Boobnoff (1902). Boobnoff made exact calculations for several aspect ratios of the plate, while Hencky made refined calculations only for the case of a square plate [32]. Hutchinson used the solution presented in [33]. Molecular dynamics, by contrast, investigates the atomic interactions of the plates in nano-devices from a non-local point of view.
Due to the difficulties in manufacturing nanostructures, experiments on nano-switches have been costly. Computational design tools, when sufficiently precise and accurate, can be far more economical than trial-and-error design cycles based purely on experimentation. In this section, a molecular dynamics approach is proposed for the analysis and design of capacitors. To this end, an MD simulation of a nano-switch, its response to an electrical field, and an estimation of the mechanical deflection of the capacitor are performed. A schematic of the nanoplate-based switch is shown in Figure 1. The capacitor plates are constrained with two different boundary conditions, as displayed in Figure 2: a clamped square diaphragm and fixed-fixed sides, while the lower plate is fixed on its bottom side.
A necessary step in the MD simulation is the equilibration of the capacitor, since it is the starting point of the minimization process [34]. The NAMD execution consists of three equilibration stages and one stage under the applied electrical field [35]. These stages are common to most simulations. The first stage is energy minimization; in the second stage the temperature is increased from zero to 295 K at constant volume. In the next stage the pressure is held constant and the Langevin thermostat is applied. In the last stage the volume is held constant and the desired voltage is applied to the system. The constant-pressure simulation lets the volume change. An equilibrated system is identified by the following conditions: the volume should fluctuate smoothly around a constant value, and the energy should approach a steady value. If the volume is plotted against time, a noticeably decreasing slope is observed.
In order to simulate the mechanical deflection of the capacitor, electrical fields of different magnitudes are applied to the system. In other words, a potential difference applied to the system produces a certain voltage along the z-axis. Electrostatic forces deflect the movable nanoplate, depending on the gap between the movable plate and the fixed one. The van der Waals force also plays an important role in the attraction of the nanoplates. The Lennard-Jones potential has an attractive and a repulsive term. The force-field potential energy function [36] consists of the bonding interactions, the Lennard-Jones potential for the vdW interaction, and the Coulomb potential for the electrostatic actuation; these are used to model the nano-switches as introduced in Equation (1) and expanded in Equation (2). The Lennard-Jones term is E_LJ(r_ij) = 4ε[(σ/r_ij)^12 − (σ/r_ij)^6], where r_ij is the distance between atoms i and j, the parameter ε denotes the strength of the interaction, and σ defines a length scale. The attractive term, proportional to r_ij^−6, dominates at large distances and models the van der Waals dispersion forces caused by dipole-dipole interactions due to fluctuating dipoles. The electrostatic interaction of two atoms with charges q_i and q_j separated by r_ij is given by E_coul = q_i q_j / (4π ε_0 r_ij), where the dielectric constant ε_0 in vacuum is set equal to 1. When the two plates come into contact, the vdW repulsive force makes the collapse harder; the total van der Waals energy can be computed by a pairwise summation over all the atoms.
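The short Python sketch below illustrates how the two non-bonded force-field terms described above can be evaluated for a pair of atoms and summed pairwise over all atoms. The parameter values (epsilon, sigma) are illustrative placeholders, not the silicon force-field constants used in the actual simulation.

```python
import numpy as np

def pair_energy(r_ij, q_i, q_j, epsilon=0.1, sigma=0.38, eps0=1.0):
    """Lennard-Jones plus Coulomb energy for one atom pair.

    r_ij    : distance between atoms i and j
    q_i,q_j : partial charges
    epsilon : LJ well depth (placeholder value)
    sigma   : LJ length scale (placeholder value)
    eps0    : dielectric constant, set to 1 as in the text
    """
    sr6 = (sigma / r_ij) ** 6
    e_lj = 4.0 * epsilon * (sr6 ** 2 - sr6)           # repulsive r^-12 and attractive r^-6 terms
    e_coul = q_i * q_j / (4.0 * np.pi * eps0 * r_ij)  # electrostatic term
    return e_lj + e_coul

def total_energy(positions, charges):
    """Pairwise summation of the vdW and electrostatic energy over all atoms."""
    n = len(positions)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            e += pair_energy(r, charges[i], charges[j])
    return e
```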
The plates are made of silicon: the upper plate is n-type silicon and the lower one is p-type silicon, which here means that the upper plate is positively charged and the lower plate negatively charged. Applying a uniform electric field along the z-axis then produces an attractive force between the plates. The voltage is directly related to the electric field (it equals the electric field multiplied by the size of the system along the z-axis), so the pull-in voltage can be determined.
When an electrical field is applied, the pressure-control settings must be absent: imposing an electrical field while the pressure is controlled disturbs the system and leads to incorrect results. In addition, the Langevin temperature control should be turned on; applying Langevin forces to the ions whose motion is to be measured under the applied electrical field leads only to an infinitesimal deviation in the current. If a high electrical field were applied to the system, the high ionic current would raise the temperature up to 450 K. The simulation tends to become invalid at such temperatures, and this restriction limits the simulation to smaller electrical fields. In the MD simulation, the atomic interactions and the surface tension, which impose a non-negligible force on nano-scale devices, are taken into account. Surface tension plays an important role in the behavior of nano-devices but is neglected in classical mechanics. This makes the simulation more accurate than analytical calculations, such as continuum mechanics, that originate from classical mechanics.
The total number of atoms simulated to form the plates is 66,304, and the plates are made of silicon. The force-field parameters of silicon for the covalent bonds between atoms, including the bond energy, bond radius, angle, dihedral, van der Waals radius and some other decisive factors, are given in the Appendix. The overall simulation time is determined by reaching a stable point in energy and volume; as seen in the following graphs, the simulation exceeded 120,000 steps with the time step fixed at 2 fs/step. The fixed-atom constraint facility made it possible to take the effects of the boundary conditions into account.
The simulation is set up so that the atomic coordinates are recorded once every 100 steps in order to find the pull-in voltage. As illustrated in Figures 3-5, the applied electric field should be large enough to bring the plates into contact.
The capacitor with a gap of 2.719 nm was actuated by electric fields of 7 kcal/(mol·Å·e) and 5 kcal/(mol·Å·e) with and without vdW, respectively. The electric field value is gradually increased for the gap values of 1.543 and 4.3504 nm in the case of the clamped-clamped boundary condition. Thus, absorption occurs over time and the plates come into full contact. The Newtonian equations of motion, dp_i/dt = F_i + q_i E_i together with dr_i/dt = p_i/m_i, are solved for the atoms [37], where E_i is the electric field acting on atom i, m_i the mass, q_i the partial charge, and p_i the laboratory momentum of atom i; F_i represents the total intermolecular force acting on atom i due to all other atoms. The silicon mass and partial charge are respectively equal to 28 u and +4 for n-type and −4 for p-type. The simulation has been executed with and without the van der Waals force to discover the exact effect of surface forces on the behavior of nano-scale switches. The molecular dynamics results indicate that the van der Waals force can play a significant role in determining the pull-in voltage of cantilever switches, especially for smaller gaps. When the gap between the cantilever plates is very small, the vdW force acts as a repelling force that prevents the distance from falling below the atomic van der Waals radius. Under this circumstance the system requires a larger snap-down voltage to overcome the repulsive force.
Results and Discussion
The successful application of the molecular dynamics simulations relies on a nonlinear least-squares fitting procedure to minimize the equilibration energy and obtain a stabilized system. The geometrical parameters of the nano-switches considered in the MD simulations are h = 2.17 nm and a = b = 15.07 nm, as seen below. Capacitors with two different gaps are modeled in Figure 3 and Figure 4, with plates fixed at two sides; in Figure 5 the upper plate is fixed at four sides. The figures show the transient absorption of the plates under the electrical field, so all parts illustrate the snap-down phenomenon. Although for the pull-in phenomenon a small connection between the electrodes would be enough, as shown in Figures 3-5 the connection is not small and a remarkable area of the upper electrode touches the lower electrode. This is because the gap sizes were designed to be in the vicinity of the plate thickness, in addition to a potent atomic interaction. In other words, relatively small gaps, of the order of the plate thickness, cause a large area of the plate surfaces to come into contact.
The values of the pull-in voltages obtained by the MD simulations are tabulated in Table 1 for nano-switches with different initial gaps between the nanoplates. It is clearly seen that by increasing the initial gap of the nano-switch, the pull-in instability occurs at higher applied voltages.
As a necessary condition for equilibration, the volume should fluctuate smoothly around an average value and the stabilized energy should tend to a certain value. Figure 6 and Figure 7 show the system's steady-state energy versus time step for four different initial capacitor gaps and fixed edges. As demonstrated, the system's total energy remains constant during equilibration. If the gap between the upper electrode and the ground plane is smaller than the vdW radius, the van der Waals force acts inversely, applying a repelling force that inhibits the plates from making contact; to obtain precise results the vdW force must be taken into account. The numerical value of the vdW force during equilibration is drawn separately in Figure 8. One can see that the graph reaches its peak at a plate distance of 0.22 nm before dropping and turning into a repulsive force.
To allow a better comparison, the applied-voltage curves for the four various gaps are drawn in one figure. Figure 9 illustrates the applied voltage versus deflection for the different gap values; in addition, the pull-in voltages obtained by the MD simulations are drawn as a separate curve versus the corresponding gap value. It is indicated that the initial distance between the plates, as a geometrical parameter, plays an important role in the pull-in phenomenon of nano-switches.
The pull-in voltage versus gap value, for the investigation of the boundary condition (plates fixed at two sides and at four sides) and of the vdW effects, is illustrated in Figure 10 and Figure 11, respectively, which show a nonlinear relation between them. In Figure 11 one can clearly see that the obtained pull-in voltage is directly influenced by the van der Waals force: for a more intensive contact the repulsive force must be overcome, so when the plates are very close a larger electric field is required to defeat the vdW repulsive force. To emphasize this concept, the nano-switch sensitivity is calculated and shown in Figure 12 for the two cases of considering and ignoring vdW, based on the following formula: output (deflection) divided by input (actuation voltage); the boundary condition is also taken into account in the sensitivity. To increase the sensitivity and decrease the pull-in voltage, it is viable to increase the degrees of freedom of the plates in addition to decreasing the gap value.
Conclusion
In the present work, the pull-in phenomenon in nano-switches is investigated with respect to gap value and boundary condition. MD simulations, as a distinctive approach to simulating the pull-in phenomenon, are performed for capacitor switches with different boundary conditions and gap values. The pull-in voltages of the nano-switches are predicted using MD. It was revealed that the plates' initial gap has a considerable influence on the pull-in phenomenon in nano-switches. The van der Waals force plays a significant role in the plates' absorption, especially in the case of smaller gaps; the results show that the effect of the van der Waals force is intensified in nano-scale switches, acting as an auxiliary force at first, whereas for closer contact its role changes to a repulsive one. Clamped edges impose a resisting force against the electrical field, which requires an extra load for the plates to contact; hence the pull-in voltage is much larger in the case of plates fixed at four sides compared with two sides. The mechanism of the pull-in phenomenon is analyzed and studied without carrying out time-consuming and costly experiments; through molecular dynamics it is possible to study each important parameter thoroughly. That is what makes MD a powerful technique to scrutinize nano-devices and their internal interactions such as the force-field potential, electrostatic forces and van der Waals forces.
Figure 5. Plate absorption in the case of the four-sides-fixed boundary condition, gap = 4.3504 nm.
Figure 3. MD simulation performed for the pull-in phenomenon in a nano-switch, fixed at two sides, gap = 4.3504 nm.
Figure 4. MD simulation performed for the pull-in phenomenon in a nano-switch, fixed at two sides, gap = 1.543 nm.
Figure 9. Applied voltage versus deflection for different gap values, together with the pull-in voltage obtained by the MD simulations for each gap.
Figure 7. Variation of the system's total energy versus step number for various initial gaps, fixed at four sides.
Figure 8. Variation of the system's van der Waals energy versus step number for silicon plates, fixed at two sides.
Figure 10. Variation of pull-in voltage versus gap value regarding boundary conditions.
Figure 11. The pull-in voltage versus gap values considering vdW effects.
Figure 12. Variation of sensitivity versus gap value.
Table 1. The values of pull-in voltages predicted by MD simulations corresponding to various initial gaps and boundary conditions. | 4,228 | 2014-08-11T00:00:00.000 | [
"Engineering",
"Physics"
] |
On the Hermite-Hadamard inequalities for interval-valued coordinated convex functions
In this paper, we establish a Hermite-Hadamard inequality for interval-valued convex functions on the co-ordinates on a rectangle from the plane. We also present a Hermite-Hadamard inequality for the product of interval-valued convex functions on the co-ordinates. Some examples are also given to clarify our new results.
Introduction
The classical Hermite-Hadamard inequality is one of the most well-established inequalities in the theory of convex functions; it has a geometrical interpretation and many applications. The Hermite-Hadamard inequality states that if f : I → R is a convex function on an interval I of real numbers and a, b ∈ I with a < b, then inequality (1.1) below holds. Both inequalities in (1.1) hold in the reversed direction if f is concave. We note that the Hermite-Hadamard inequality may be regarded as a refinement of the concept of convexity, and it follows easily from Jensen's inequality. The Hermite-Hadamard inequality for convex functions has received renewed attention in recent years, and a remarkable variety of refinements and generalizations have been studied. In [7], Dragomir demonstrated the inequality of Hadamard type (1.2) for coordinated convex functions.
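For reference, the classical inequality labeled (1.1) in the text is

$$ f\!\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx \;\le\; \frac{f(a)+f(b)}{2}. \qquad (1.1) $$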
For more results related to (1.2), we refer the readers to [1,9,15] and the references therein.
On the other hand, interval analysis is a notable branch of set-valued analysis, which is the study of sets in the spirit of mathematical analysis and general topology. It was introduced as an attempt to handle the interval uncertainty that appears in many mathematical or computer models of deterministic real-world phenomena. An old example of an interval enclosure is Archimedes' method for computing the circumference of a circle. In 1966, the first book on interval analysis was written by Moore, who is known as the first user of intervals in computational mathematics; see [11]. After his book, several scientists started to investigate the theory and applications of interval arithmetic. Nowadays, because of its applications, interval analysis is a useful tool in various areas that deal intensively with uncertain data. Applications can be found in computer graphics, experimental and computational physics, error analysis, robotics, and many others.
Preliminaries and known results
In this section, we review some basic definitions, results, notions, and properties that are used throughout the paper. The set of all closed intervals of R, the set of all closed positive intervals of R, and the set of all closed negative intervals of R are denoted by R_I, R_I^+, and R_I^-, respectively. The Hausdorff distance between $[\underline{X}, \overline{X}]$ and $[\underline{Y}, \overline{Y}]$ is defined below. The metric space (R_I, d) is a complete metric space. For more in-depth notation on interval-valued functions, see [12,19].
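The standard definition of this distance on closed intervals is

$$ d\big([\underline{X},\overline{X}],[\underline{Y},\overline{Y}]\big) \;=\; \max\big\{\,|\underline{X}-\underline{Y}|,\ |\overline{X}-\overline{Y}|\,\big\}. $$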
In [11], Moore gave the notion of the Riemann integral for interval-valued functions. The sets of all Riemann-integrable interval-valued functions and of all Riemann-integrable real-valued functions on [a, b] are denoted by IR([a,b]) and R([a,b]), respectively. The following theorem gives a relation between (IR)-integrable functions and Riemann-integrable (R-integrable) functions (see [12, p. 131]).
Theorem 2 Let $F : [a, b] \to \mathbb{R}_I$ be an interval-valued function with $F(t) = [\underline{F}(t), \overline{F}(t)]$. Then $F \in IR_{([a,b])}$ if and only if $\underline{F}(t), \overline{F}(t) \in R_{([a,b])}$, and in that case $(IR)\int_a^b F(t)\,dt = \big[(R)\int_a^b \underline{F}(t)\,dt,\ (R)\int_a^b \overline{F}(t)\,dt\big]$.
In [19,21], Zhao et al. introduced a kind of convex interval-valued function as follows.
By SX(h, [a, b], R_I^+) we denote the set of all h-convex interval-valued functions.
The usual notion of convex interval-valued function corresponds to relation (2.1) with h(t) = t (see [18]). Moreover, if we take h(t) = t^s in (2.1), then Definition 1 gives the s-convex interval-valued function defined by Breckner (see [2]).
In [19], Zhao et al. obtained the following Hermite-Hadamard inequality for interval-valued functions by using h-convexity.
Inequality (2.2) reduces to the following result, which was obtained by Sadowska in [18].
Inequality (2.2) also reduces to the following result, which was obtained by Osuna-Gómez et al. in [14].
Remark 2 If h(t) = t, then (2.4) reduces to the following result; similarly, (2.5) reduces to a corresponding result. We call S(F, P, δ, Δ) an integral sum of F associated with P ∈ P(δ, Δ). Now, we review the concepts and notation of the interval-valued double integral given by Zhao et al. in [20]. Definition 2 A function F : Δ → R_I^+ is said to be an interval-valued coordinated convex function if the following inequality holds: for all (x, y), (u, w) ∈ Δ and s, t ∈ [0, 1].
Lemma 1 A function F : Δ → R_I^+ is interval-valued convex on the coordinates if and only if the partial mappings F_x : [c, d] → R_I^+, F_x(y) = F(x, y), and F_y : [a, b] → R_I^+, F_y(x) = F(x, y), are interval-valued convex.
Proof The proof of this lemma follows immediately from the definition of an interval-valued coordinated convex function.
It is easy to prove that an interval-valued convex function is interval-valued coordinated convex, but the converse may not be true. For this, we can see the following example. In what follows, without causing confusion, we will omit the notations (R), (IR), and (ID). We start with the following theorem.
Theorem 7 If F : Δ → R_I^+ is an interval-valued coordinated convex function on Δ such that $F(t) = [\underline{F}(t), \overline{F}(t)]$, then the following inequalities (4.1) hold:
This can be written as (4.2). Integrating (4.2) with respect to x over [a, b] and dividing both sides by b − a, we obtain (4.3) and (4.4). By adding (4.3) and (4.4) and using Theorem 2, we obtain the second and third inequalities in (4.1). From (2.3) we also obtain (4.5) and (4.6). By adding (4.5) and (4.6) and using Theorem 2, we obtain the first inequality in (4.1). Finally, again from (2.2) and Theorem 2, we obtain the last inequality, and the proof is completed.
Remark 4 If $\underline{F} = \overline{F}$, then Theorem 7 reduces to Theorem 1.
Proof Since F and G are interval-valued coordinated convex functions on Δ, we have an inequality which can be rewritten as follows. Integrating this inequality with respect to x over [a, b] and dividing both sides by b − a, we obtain (4.8). Now, using inequality (2.6) for each integral on the right-hand side of (4.8), we obtain (4.13), where P(a, b, c, d), M(a, b, c, d), and N(a, b, c, d) are defined in Theorem 8.
Proof Since F and G are interval-valued coordinated convex functions, from (2.7) we obtain (4.14) and (4.15). Adding (4.14) and (4.15) and then multiplying both sides of the result by 2, we get an inequality involving F(a, c)G(b, c) + F(b, c)G(a, c). Using (4.17)-(4.24) in (4.16), we have | 1,512 | 2019-12-27T00:00:00.000 | [
"Mathematics"
] |
Decoupling the effects of clear atmosphere and clouds to simplify calculations of the broadband solar irradiance at ground level
In the case of infinite plane-parallel single- and double-layered clouds, the solar irradiance at ground level computed by a radiative transfer model can be approximated by the product of the irradiance under a clear atmosphere and a modification factor that depends on the cloud properties and the ground albedo only. Changes in clear-atmosphere properties have a negligible effect on the latter, so both terms can be calculated independently. The error made in using this approximation depends mostly on the solar zenith angle, the ground albedo and the cloud optical depth. In most cases, the maximum errors (95th percentile) on the global and direct surface irradiances are less than 15 W m−2 and less than 2-5 % in relative value. These values are similar to those recommended by the World Meteorological Organization for high-quality measurements of the solar irradiance.
Introduction
Solar radiation drives weather and climate and takes part in the control of atmospheric chemistry. The surface solar irradiance (SSI) is defined as the power received from the sun on a horizontal surface at ground level. Of concern here is the SSI integrated over the whole spectrum, i.e. between 0.3 and 4 µm, called total or broadband SSI.
Numerical radiative transfer models (RTMs) simulate the propagation of radiation through the atmosphere and are used to calculate the SSI for given atmospheric and surface conditions. RTMs are demanding in computer time and in this respect are not appropriate for operational computations of the SSI such as those performed at Deutscher Wetterdienst (Mueller et al., 2009), the Royal Netherlands Meteorological Institute (KNMI) (Deneke et al., 2008; Greuell et al., 2013), MINES ParisTech, or prepared within the MACC European project (Granier et al., 2010). Several solutions have been proposed in order to speed up calculations of the SSI, such as abaci, also known as look-up tables (LUTs; Deneke et al., 2008; Huang et al., 2011; Mueller et al., 2009; Schulz et al., 2009).
The present work contributes to the research on fast calculations of the SSI under all sky conditions. It does not propose a new model but an approximation that can be adopted by models for calculations of the SSI. More precisely, it examines whether, in the case of infinite plane-parallel single- and double-layered clouds, the SSI computed by an RTM can be approximated by the product of the SSI under clear sky and a modification factor due to cloud properties and ground albedo only. If this approximation were accurate enough, i.e. if the modification factor did not significantly change with clear-atmosphere properties, it would be possible to construct two independent models, possibly LUT-based models, for example one for clear-sky conditions and the other for cloudy conditions. Recently, for example, Huang et al. (2011) used such an approximation with a very limited justification. This Technical Note aims at providing this justification by (1) exploring the influence of the properties of the clear atmosphere on the SSI in a cloudy atmosphere, (2) proposing a general equation that decouples the effects of the clear atmosphere from those due to the clouds, and (3) computing the errors made with this approximation.
Objective
Let G denote the SSI for any sky. G is the sum of the beam component B of the SSI, also known as the direct component, and of the diffuse component D, both received on a horizontal surface. In the present article, following the RTM way of doing, B does not comprise the circumsolar radiation. Let G_c, B_c and D_c denote the same quantities but for clear sky. The ratios K_c = G / G_c and K_cb = B / B_c are called clear-sky indices (Beyer et al., 1996) (Eq. 1). K_c is also called the cloud modification factor in studies on UV or photosynthetically active radiation (Calbo et al., 2005; den Outer et al., 2010).
The indices K_c and K_cb concentrate on the cloud influence on the downwelling radiation and are expected to change with the clear-atmosphere properties P_c, since the clouds and the other atmospheric constituents are mixed in the atmosphere. Equation (1) can be expanded as Eq. (2) below, where θ_S is the solar zenith angle, ρ_g the ground albedo, and P_c a set of seven variables governing the optical state of the atmosphere in clear sky: (i) total column content in ozone and (ii) in water vapour; (iii) elevation of the ground above mean sea level; (iv) vertical profile of temperature, pressure, density, and volume mixing ratio for gases as a function of altitude; (v) aerosol optical depth at 550 nm; (vi) Ångström coefficient; and (vii) aerosol type. P_cloud is a set of variables governing the optical state of the cloudy atmosphere: (i) cloud optical depth (τ_c), (ii) cloud phase, (iii) cloud liquid water content, (iv) droplet effective radius, and (v) the vertical position of the cloud. The objective of this article is to quantify the error made in decoupling the effects of the clear atmosphere from those due to the clouds in cloudy sky, i.e. if changes in P_c are neglected in K_c, respectively K_cb, in Eq. (2). This is equivalent to saying that the first derivative ∂K_c / ∂P_c, respectively ∂K_cb / ∂P_c, is close to 0. In that case, Eq. (2) may be replaced by the approximation (3) below, where P_c0 is an arbitrarily chosen but typical set P_c. The objective is now to quantify the error made when using Eq. (3) instead of Eq. (2).
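Written out from the definitions above, the expansion (2) and the approximation (3) take the following form (a sketch consistent with the description in the text):

$$ K_c = K_c(\theta_S, \rho_g, P_c, P_{\mathrm{cloud}}), \qquad K_{cb} = K_{cb}(\theta_S, \rho_g, P_c, P_{\mathrm{cloud}}) \qquad (2) $$

$$ G \approx G_c(\theta_S, \rho_g, P_c)\, K_c(\theta_S, \rho_g, P_{c0}, P_{\mathrm{cloud}}), \qquad B \approx B_c(\theta_S, \rho_g, P_c)\, K_{cb}(\theta_S, \rho_g, P_{c0}, P_{\mathrm{cloud}}) \qquad (3) $$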
Method
The methodology used for assessing Eq. (3) is of statistical nature. For a given condition related to the position of the sun, the ground albedo and the clouds (θ_S, ρ_g, P_cloud), several sets of clear-sky properties P_c are randomly built. Each quadruple (θ_S, ρ_g, P_cloud, P_c) is input to an RTM to compute G, B, D, K_c and K_cb. The variances of the K_c and K_cb series are then computed. The lower the variance, the smaller the changes in K_c or K_cb with respect to the changes in P_c and the more accurate the approximation given by Eq. (3). The errors made on the retrieved G and B when using Eq. (3) are quantified. The RTM libRadtran version 1.7 (Mayer and Kylling, 2005) is used with the DISORT (discrete ordinate technique) algorithm (Stamnes et al., 1988) to solve the radiative transfer equation. libRadtran needs input data on the atmosphere and surface properties. When not provided, data are replaced by standard assumptions. The atmosphere and clouds are assumed to be infinite plane-parallel. Table 1 reports the range of the 10 values taken respectively by θ_S and ρ_g. For computational reasons, θ_S is set to 0.01° and 89° in place of 0° and 90°.
Cloud properties input to libRadtran are τ_c, phase (water or ice clouds), heights of the base and top of the cloud, the cloud liquid content and the effective radius of the droplets. Default values in libRadtran for the cloud liquid content and droplet effective radius are used: 1.0 g m−3 and 10 µm for water cloud, and 0.005 g m−3 and 20 µm for ice cloud. In a preliminary study (Oumbe, 2009, Fig. 4.6, p. 53), the influence of changes in the effective radius, from 3 to 50 µm, was found negligible for ice clouds. For water clouds, the smaller the radius, the greater the influence, though this influence is still negligible with respect to other variables.
The cloud properties are linked together. Table 2 presents the typical height of the base of cloud, geometrical thickness, and τ c for the different cloud types and is established after Liou (1976) and Rossow and Schiffer (1999).
A total of 10 values of τ_c are selected in this study for water clouds and 10 others for ice clouds (Table 3, left column). Ranges of τ_c are related to types of clouds to reproduce realistic conditions. Each τ_c defines a series of seven couples (cloud base height, thickness) for water clouds and three for ice clouds (Table 3).
According to Tselioudis et al. (1992), 58 % of clouds are single-layered and 28 % are double-layered. The results presented hereafter are for a single layer; the case of double-layered clouds is briefly discussed at the end of Sect. 4, as results are similar in both cases. For a given cloud phase, there are 1000 (10×10×10) possible combinations of θ_S, ρ_g (Table 1) and τ_c (Table 3). The selection of a given τ_c leads to the additional selection of a series of cloud base heights and thicknesses as shown in Table 3, i.e. seven for water clouds and three for ice clouds. At that stage, there are 7000 triplets (θ_S, ρ_g, P_cloud) for water clouds and 3000 for ice clouds.
Each triplet (θ_S, ρ_g, P_cloud) gives rise to 20 quadruples (θ_S, ρ_g, P_cloud, P_c) by adding 20 sets P_c randomly selected from Table 4. Similarly to what was done by Lefèvre et al. (2013) and Oumbe et al. (2011), the selection takes into account the modelled marginal distributions established from observations. More precisely, the uniform distribution is chosen as a model for the marginal probability of all parameters except the aerosol optical thickness, the Ångström coefficient, and the total column ozone. The chi-square law for the aerosol optical thickness, the normal law for the Ångström coefficient, and the beta law for the total column ozone have been selected. These parametric probability density functions and their corresponding parameters have been empirically determined from analyses of the observations made in the AERONET network for aerosol properties and from meteorological satellite-based ozone products (cf. Table 4). For the sake of avoiding non-realistic cases, the allocation of the aerosol types is empirically linked to the ground albedo (Table 5).
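A minimal Python sketch of this random drawing of clear-atmosphere sets P_c is given below; the distribution parameters and category names are illustrative placeholders, not the values fitted to the AERONET observations and satellite ozone products used in the study.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def draw_clear_sky_set():
    """Draw one random clear-atmosphere set P_c (illustrative parameters only)."""
    return {
        # chi-square law for the aerosol optical depth at 550 nm
        "aod_550": 0.1 * rng.chisquare(df=2),
        # normal law for the Angstrom coefficient
        "angstrom": rng.normal(loc=1.3, scale=0.5),
        # beta law for the total column ozone, rescaled to Dobson units
        "ozone_du": 200.0 + 300.0 * rng.beta(a=2.0, b=2.0),
        # uniform laws for the remaining variables
        "water_vapour": rng.uniform(0.0, 70.0),   # kg m-2
        "elevation": rng.uniform(0.0, 3000.0),    # m
        "aerosol_type": rng.choice(["continental", "urban", "maritime", "desert"]),
        "profile": rng.choice(["midlatitude_summer", "midlatitude_winter", "tropical"]),
    }

# 20 sets P_c for one triplet (theta_S, rho_g, P_cloud)
p_c_samples = [draw_clear_sky_set() for _ in range(20)]
```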
Each combination (θ S , ρ g , P cloud , P c ) is input to libRadtran, yielding G, B, and D. In addition, G c and B c are obtained by libRadtran using θ S , ρ g , and P c as inputs. K c and K cb are obtained using Eq. (1). A series of 140 000 values for G, B, D, K c and K cb is thus obtained for water clouds and 60 000 for ice clouds. For each triplet (θ S , ρ g , P cloud ), the variances v(K c ) and v(K cb ) are computed over the 20 values K c and K cb . Since the clouds and other atmospheric constituents are mixed up in the atmosphere, K c , or K cb , is expected to change with varying P c . It is observed that v(K c ) and v(K cb ) are very small with respect to the squared mean values of K c and K cb for each triplet (θ S , ρ g , P cloud ), meaning that changes in K c and K cb with varying P c are small, thus supporting Eq. (3).
In order to illustrate this and to present this vast amount of results in a synthetic manner, it is firstly observed that these quantities v(K_c) and v(K_cb) do not vary noticeably with the cloud geometry for a given triplet (θ_S, ρ_g, τ_c): among the cloud properties for a given phase, the cloud optical depth τ_c is the most prominent one. As a consequence, it is possible to illustrate the findings by averaging v(K_c) and v(K_cb) over the cloud geometry properties for each triplet (θ_S, ρ_g, τ_c). One obtains mean(v(K_c)) and mean(v(K_cb)). The positive root means of these averages are denoted RM(v(K_c)) and RM(v(K_cb)): RM(v(K_c)) gives at a glance the influence of P_c on K_c for all cloud geometries. A small RM(v(K_c)) means that the mean of v(K_c) is small. The variance v(K_c) and consequently RM(v(K_c)) are linked to ∂K_c / ∂P_c. The lower RM(v(K_c)) is, the lower the mean of v(K_c), the lower the change in K_c with P_c, and finally the lower the error made when using Eq. (3). The same reasoning holds for RM(v(K_cb)). RM(v(K_c)), respectively RM(v(K_cb)), can be considered as a measure of the error made on K_c, or K_cb, when using Eq. (3). These quantities are also expressed relative to the mean K_c and K_cb for a given triplet, yielding relative values, noted rRM(v(K_c)) and rRM(v(K_cb)).
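The pooling of the variances described above can be summarised in a few lines of numpy; this is a sketch, where `kc` is assumed to hold the 20 clear-sky indices for each cloud geometry of a given triplet (θ_S, ρ_g, τ_c).

```python
import numpy as np

def rm_of_variance(kc):
    """kc: array of shape (n_geometries, 20) holding K_c for one (theta_S, rho_g, tau_c).

    Returns RM(v(K_c)) and its value relative to the mean K_c, i.e. rRM(v(K_c)).
    """
    v = kc.var(axis=1, ddof=1)   # v(K_c) over the 20 random P_c, per cloud geometry
    rm = np.sqrt(v.mean())       # positive root of the mean variance over geometries
    r_rm = rm / kc.mean()        # relative value rRM(v(K_c))
    return rm, r_rm
```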
Influence of P c on clear-sky indices
Relative quantities rRM(v(K_c)) and rRM(v(K_cb)) depend on G and B. A large rRM(v(K_c)) may not be important if G is very small. To better understand the results, Fig. 1 displays the averages of G and D for θ_S = 40° as a function of ρ_g and τ_c for water cloud. The beam irradiance B is not drawn as it does not depend on ρ_g; it decreases rapidly as τ_c increases, and the diffuse irradiance D tends towards G as τ_c increases. Fig. 1 shows that D increases with ρ_g, and that both G and D decrease as τ_c increases. Figure 2 exhibits rRM(v(K_c)) for each couple (ρ_g, τ_c) for θ_S = 40° for water cloud (left) and ice cloud (right). Each cell represents the changes in K_c obtained for this θ_S and this couple (ρ_g, τ_c) when the geometrical parameters of the cloud and the other variables in P_c vary. For both cloud phases, rRM(v(K_c)) increases with τ_c and ground albedo. As a whole, it is small. It is less than 2 % for the most frequent cases, i.e. τ_c ≤ 20 and ρ_g ≤ 0.7. It can be compared to the maximum relative errors (at the 66 % uncertainty level) recommended by the World Meteorological Organization (WMO, 2008) for measurements of hourly means of G or D, which are 2 % for high quality, 8 % for good quality, and 20 % for moderate quality.
rRM(v(K_c)) reaches a maximum of 8.5 % for τ_c = 70 and ρ_g = 0.9 for water cloud (7.5 % for τ_c = 20 for ice cloud). Large ρ_g and large τ_c mean more radiation reflected by the ground and more radiation backscattered by clouds. This increases the path of the radiation in the atmosphere and, therefore, increases the influence of P_c on K_c. As G is small for (τ_c = 70, ρ_g = 0.9) (Fig. 1), a maximum of 8.5 % is not important in absolute value since it corresponds to approximately 30 W m−2. This high relative deviation happens only for the very high ρ_g = 0.9. When ρ_g ≤ 0.7, the corresponding error on G is less than 10 W m−2.
The median and the 5th (P5) and 95th (P95) percentiles of RM(v(K_c)) for all corresponding couples (ρ_g, τ_c) for a given θ_S are computed and drawn in Fig. 3 for water cloud (left) and ice cloud (right) as a function of θ_S. They are also expressed relative to the corresponding mean K_c (Fig. 4) and are called the relative median and relative P95. For both phases and for θ_S from 0° to 60°, the relative median is less than 2 %, and the relative P95 ranges between 3.5 % and 5 %.
All three quantities increase sharply for θ_S > 60°. The relative median, respectively P95, reaches a maximum of approximately 8-9 %, respectively 11-12 %, for θ_S = 80°. Then, a decrease is observed for θ_S > 80°. Further computations show that the increase in relative influence at large θ_S is mostly due to the increase of the optical path in the atmosphere with greater θ_S and therefore a greater influence of P_c, notably of the aerosols.
Overall, an increase in τ_c or θ_S increases the path of the sun rays in the atmosphere, and therefore the influence of changes in P_c on K_c increases along with τ_c and θ_S. This increase is compensated by a corresponding decrease in G. Since G_c rarely reaches 120 W m−2 for θ_S = 80°, the error in G corresponding to P95 is less than 15 W m−2. The diffuse irradiance D, and therefore G, is strongly influenced by ρ_g. The influence of changes in P_c on K_c increases with ρ_g. Deserts such as northern Africa and Arabia exhibit large ground albedos up to approximately 0.5 (Tsvetsinskaya et al., 2002; Wendler and Eaton, 1983); the error (P95) on G is of the order of 10 W m−2. Areas covered by fresh snow or ice may exhibit very large ρ_g. For ρ_g = 0.9, the error on G can be large for small θ_S, i.e. 30 W m−2. One has to be cautious in using Eq. (3) in such extreme cases.
Similar calculations are made for K_cb. As expected with an RTM code, K_cb changes neither with ground albedo nor with cloud phase; the cloud optical depth is the most prominent variable. Figure 5 exhibits the median and the 5th and 95th percentiles of RM(v(K_cb)) for all couples (ρ_g, τ_c) as a function of θ_S, and their values relative to the corresponding mean K_cb. The relative median, respectively relative P95, is less than 2 %, respectively 3 %, for θ_S ≤ 60°. Then it rises sharply. The relative median, respectively P95, reaches a maximum of approximately 8 %, respectively 17 %, for θ_S = 80°. Then, a decrease is observed for θ_S > 80°. Large θ_S values correspond to low irradiances. The clear-sky B_c equals 53 W m−2 for θ_S = 80°, and therefore the corresponding median and P95 errors in B are approximately 4 W m−2 and 9 W m−2. rRM(v(K_cb)) has a tendency to increase as θ_S increases. This increase is compensated by a corresponding decrease in B. The clear-sky irradiance B_c rarely reaches 90 W m−2 for θ_S = 80°, and the maximum error in B is less than 16 W m−2.
If cases of large θ_S and τ_c, for which the radiation is greatly attenuated, are removed by considering only cases for which G > 100 W m−2, the obtained rRM(v(K_c)) and rRM(v(K_cb)) are very small, even for large θ_S. For θ_S equal to 70° and 80°, the medians are approximately 3 % of K_c and K_cb, and the P95 values are 5 % and 7 %, respectively.
It is concluded that, for all considered cloud properties and θ_S, and for ρ_g ≤ 0.7, the influence of changes in P_c on K_c and K_cb can be neglected. In these cases, Eq. (3) may be adopted with an error (P95) on G and B less than 15 W m−2 and most often less than 2-5 % in relative value. These results match the WMO requirements for high-quality measurements. However, in applications as discussed in the following section, there will be other sources of uncertainties, and the total uncertainty of any model using Eq. (3) will be greater and probably exceed these WMO requirements.
A similar analysis has been made for double-layered clouds with an ice cloud topping a water cloud. The water and ice cloud properties have been taken from Table 3, where only water clouds with a top height less than or equal to 5 km were considered, since the minimum height of the ice cloud base is 6 km. Accordingly, there were 5 (water cloud) × 3 (ice cloud) cases. Results and conclusions are similar to those for single-layered clouds.
Practical implications
A first practical advantage in adopting Eq. (3) instead of Eq. (2) is that two independent models, one for modelling G_c and B_c, the other for modelling the effects of clouds, can be used. If the approach selected to assess the SSI is based on a LUT-based model, using Eq. (3) means that two LUT-based models for K_c and K_cb can be computed with only one typical set P_c0, therefore strongly reducing the number of runs of the RTM. One may select the following P_c0:
-The middle-latitude summer profile from the USA Air Force Geophysics Laboratory (AFGL) data sets is taken for the vertical profile of temperature, pressure, density, and volume mixing ratio for gases as a function of altitude.
-Aerosol properties are as follows: optical depth at 550 nm is set to 0.20, Ångström coefficient is set to 1.3, and type is continental average.
-Total column content in water vapour is set to 35 kg m−2.
-Total column content in ozone is set to 300 Dobson units.
-Elevation above sea level is 0 m.
It has been checked that the difference in K c and K cb using different typical sets P c0 was negligible, provided that the selected P c0 does not include extreme values.
As an example, this approach is the one used in the MACC/MACC-II (Monitoring Atmosphere Composition and Climate) projects to develop the new Heliosat-4 method for a fast assessment of G, D and B (Qu, 2013; Qu et al., 2014). For cloudy conditions, the computation time for abaci combining the dimensions of the McClear abaci and those for K_c and K_cb would have amounted to years. The immense gain in time justifies the slight loss in accuracy. Except for θ_S and ρ_g, the inputs to both models are independent. This is another practical advantage of Eq. (3), since it allows efficiently coping with the fact that P_c and P_cloud may not be available at the same spatial and temporal resolutions. This is exactly the case in the MACC/MACC-II projects. On the one hand, these projects are preparing the operational provision of global aerosol property forecasts together with physically consistent total column contents in water vapour and ozone (Kaiser et al., 2012; Peuch et al., 2009). These data are available every 3 h with a spatial resolution of approximately 100 km. They are inputs to the McClear model, yielding G_c and B_c. On the other hand, these projects are preparing the provision of P_cloud at high temporal (15 min) and spatial (3 km at nadir) resolutions from an appropriate processing of images taken by the Meteosat Second Generation satellites. P_cloud will be input to the K_c and K_cb models. Using Eq. (3) implies that the SSI may be computed at the best available time and space resolutions by resampling G_c and B_c, instead of resampling all variables contained in P_c.
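A sketch of how Eq. (3) lets two independent models be combined in practice is given below. The function names `clear_sky_model` and `cloud_lut` are hypothetical stand-ins for, respectively, a clear-sky model of the McClear type and an interpolator over abaci of K_c and K_cb; they are not actual library calls.

```python
def surface_irradiance(theta_s, rho_g, clear_sky_inputs, cloud_inputs,
                       clear_sky_model, cloud_lut):
    """Global and beam SSI obtained by decoupling clear-sky and cloud effects (Eq. 3)."""
    # clear-sky irradiances from the clear-atmosphere properties P_c
    g_clear, b_clear = clear_sky_model(theta_s, rho_g, clear_sky_inputs)
    # clear-sky indices interpolated from abaci computed with a single typical set P_c0
    k_c, k_cb = cloud_lut(theta_s, rho_g, cloud_inputs)
    return g_clear * k_c, b_clear * k_cb
```

The two inputs can be provided at different spatial and temporal resolutions; only the clear-sky pair (g_clear, b_clear) needs to be resampled to the cloud-product grid.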
Conclusions
This Technical Note analyses the influence of the prominent atmospheric parameters on the SSI, with the objective of finding a practical way to speed up the calculations with an RTM. The presented results have been obtained by the RTM libRadtran. It has been checked that the results and conclusions do not depend on this model by obtaining similar results with the streamer RTM (Key and Schweiger, 1998).
It was found that -for all considered cloud properties, solar zenith angles and ground albedos -the influence of changes in clear-atmosphere properties on K c and K cb is generally less than 2-5 %, provided that the ground albedo is less than 0.7. This variation is similar to the typical uncertainty associated with the most accurate pyranometers. In these cases, Eq. (3) may be adopted with an error (P95) on G and B less than 15 W m −2 .
The longer the path of the sun rays in the atmosphere, the greater this variation and the greater the influence of clear-atmosphere properties on the clear-sky indices. The error made when using Eq. (3) on the global and direct irradiances, expressed as the 95th percentile (P95), is less than 15 W m−2. The P95 can be greater than 15 W m−2 when the ground albedo is greater than 0.7. In that case, one should be cautious in using Eq. (3). Such high albedos are rarely found; they may happen in case of fresh snow. As in other RTMs, the beam irradiances are modelled by libRadtran as if the sun were a point source. On the contrary, pyrheliometers measure the radiation coming from the sun direction within a half-aperture angle equal to 2.5° according to WMO standards. The diffuse irradiance in this angular region is called the circumsolar irradiance (CSI). If they were to be compared to measurements, the irradiances estimated in this work would have to be corrected by adding the CSI to B and removing the CSI from D. In clear sky, the CSI correction to B is approximately 1 % of B (Gueymard, 1995; Oumbe et al., 2012). Under cloudy skies, and especially thin clouds, the CSI can be greater than 50 % of B. A CSI correction needs to be applied only in cloudy skies. Therefore the CSI can be taken into account a posteriori by correcting K_c and K_cb obtained by Eq. (4) with a specific model.
The presented work has demonstrated that computations of the SSI can be made by considering independently the clear-sky conditions and the cloudy conditions as shown in Eq. (3). A first practical advantage is that two independent models may be developed and used: one for clear-sky conditions and the other for cloudy conditions with their own set of inputs. Another practical advantage is that it allows efficiently coping with cloud and clear-sky variables available at different spatial and temporal resolutions.
These results are important in the view of an operational system as it permits separating the whole processing into two distinct and independent models, whose input variable types and resolutions may be different. The benefit of this separation is not limited to LUT-based models. For example, one may combine LUT-based models for K c and K cb with an analytical model predicting G c and B c such as the ESRA model (Rigollier et al., 2000) or the SOLIS model (Mueller et al., 2004). When both models are LUT-based, using Eq. (3) means two ensembles of abaci: one for clear-sky and the other for cloudy skies. In doing so, the number of entries for each ensemble is reduced leading to the reduction of (i) the size of the abaci, (ii) the number of combination between parameters, and (iii) the total number of interpolations between nodes, thus increasing the speed in computation. | 6,483.4 | 2014-08-14T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
(Non)-penalized Multilevel methods for non-uniformly log-concave distributions
We study and develop multilevel methods for the numerical approximation of a log-concave probability $\pi$ on $\mathbb{R}^d$, based on (over-damped) Langevin diffusion. In the continuity of \cite{art:egeapanloup2021multilevel} concentrated on the uniformly log-concave setting, we here study the procedure in the absence of the uniformity assumption. More precisely, we first adapt an idea of \cite{art:DalalyanRiouKaragulyan} by adding a penalization term to the potential to recover the uniformly convex setting. Such approach leads to an \textit{$\varepsilon$-complexity} of the order $\varepsilon^{-5} \pi(|.|^2)^{3} d$ (up to logarithmic terms). Then, in the spirit of \cite{art:gadat2020cost}, we propose to explore the robustness of the method in a weakly convex parametric setting where the lowest eigenvalue of the Hessian of the potential $U$ is controlled by the function $U(x)^{-r}$ for $r \in (0,1)$. In this intermediary framework between the strongly convex setting ($r=0$) and the ``Laplace case'' ($r=1$), we show that with the help of the control of exponential moments of the Euler scheme, we can adapt some fundamental properties for the efficiency of the method. In the ``best'' setting where $U$ is ${\mathcal{C}}^3$ and $U(x)^{-r}$ control the largest eigenvalue of the Hessian, we obtain an $\varepsilon$-complexity of the order $c_{\rho,\delta}\varepsilon^{-2-\rho} d^{1+\frac{\rho}{2}+(4-\rho+\delta) r}$ for any $\rho>0$ (but with a constant $c_{\rho,\delta}$ which increases when $\rho$ and $\delta$ go to $0$).
Introduction
In this paper, we are interested in the sampling of a probability distribution named Gibbs measure, whose density is $\pi(dx) = \frac{1}{Z} e^{-\frac{2U(x)}{\sigma^2}} \lambda(dx)$, where $\lambda$ is the Lebesgue measure, $Z = \int_{\mathbb{R}^d} e^{-\frac{2U(x)}{\sigma^2}} \lambda(dx)$ and $U : \mathbb{R}^d \to \mathbb{R}$ is a coercive function. Many applications require the computation of these measures in high-dimensional state spaces, including for example machine learning, Bayesian estimation and statistical physics. The methods studied in this paper are based on the discretization of the over-damped Langevin stochastic differential equation (SDE)
$$ dX_t = -\nabla U(X_t)\,dt + \sigma\, dB_t, \qquad (1) $$
where $(B_t)_{t\ge 0}$ is a $d$-dimensional Brownian motion and $\sigma \in \mathbb{R}_+^*$. These methods received a lot of attention in the last few years, in particular when $U$ is strongly convex (in the sense that, on the whole space, the smallest eigenvalue of its Hessian is bounded from below by a positive $\alpha$). This assumption may certainly be constraining in view of applications. It is the reason why, in this paper, we suppose that $U$ is not strongly convex but only weakly convex. More precisely, we will assume that the potential $U$ is a convex, twice differentiable function with Lipschitz gradient. Under these assumptions, strong existence and uniqueness of a solution $(X_t)_{t\ge 0}$ classically hold, and the solution to (1) is an ergodic Markov process whose invariant distribution is exactly the Gibbs distribution $\pi \propto e^{-2U/\sigma^2}\, d\lambda$ (for background, see e.g. [MT93], [KS91], [Kha12], [Hai10]).
We respectively denote by $(P_t)_{t\ge 0}$ and $\mathcal{L}$ the related semi-group and infinitesimal generator. We recall that for a twice differentiable function $f : \mathbb{R}^d \to \mathbb{R}$, the generator is given by $\mathcal{L}f = -\langle \nabla U, \nabla f\rangle + \frac{\sigma^2}{2}\Delta f$. It is also well known that in this log-concave setting, the distribution $\pi$ satisfies the Poincaré inequality (see e.g. [BBCG08]) and that convergence to equilibrium holds in distribution and in "pathwise average": for any starting point $x \in \mathbb{R}^d$, the occupation measure converges to $\pi$ in the following sense: for all continuous functions $f \in L^2(\pi)$,
$$ \frac{1}{t}\int_0^t f(X_s)\,ds \xrightarrow[t \to +\infty]{} \pi(f). \qquad (2) $$
In the continuity of [EP21], our multilevel methods will be based on discretized adaptations of (2). More precisely, we first choose to approximate the stochastic process $(X_t)_{t\ge 0}$ by the classical Euler-Maruyama scheme. When the related step size $\gamma$ is constant, this discretization scheme is defined by $\bar X_0 = x \in \mathbb{R}^d$ and
$$ \bar X_{(n+1)\gamma} = \bar X_{n\gamma} - \gamma \nabla U(\bar X_{n\gamma}) + \sigma \sqrt{\gamma}\, Z_{n+1}, $$
where $(Z_n)_{n\in\mathbb{N}}$ is an i.i.d. sequence of $d$-dimensional standard Gaussian random variables. In the long-time setting, these schemes and their convergence properties to equilibrium were first studied in the nineties by [Tal90] and [RT96]. Then, some decreasing-step Euler schemes were investigated by [LP03] (see also [Lem05]) in order to manage, at the same time, the discretization and long-time errors. Here, we choose to keep the constant step size point of view in order to avoid some additional technicalities, but our ideas could probably be adapted to this setting.
This "pseudo-diffusion" form is usually convenient for proofs but it is worth noting that the procedure is only based on the discrete-time Euler scheme.If no confusion arises, we will sometimes write t instead of t γ , and Xt or Xγ t instead of Xγ,x0 t to alleviate the notations.We now mimic (2) with the Euler scheme to approximate the target measure π.Thus, consider the following occupation measure (for background see [Tal90]), for N ∈ N
Multilevel methods
Multilevel methods were introduced by M. Giles in [Gil08]. These methods, initially used for the approximation of $\mathbb{E}[f(X_T)]$, are now widely exploited in many settings. The rough idea is the following: assume that the target $\mathbb{E}[X]$ is the expectation of a random variable that cannot be sampled (with a reasonable cost) and consider a family of random variables $(X_j)_j$ approximating $X$, with a cost of simulation and a precision which typically increase with $j$. The principle of the multilevel approach is to stack correcting layers with low variance onto a coarse approximation $X_0$ of the target. More precisely, writing
$$ \mathbb{E}[X_J] = \mathbb{E}[X_0] + \sum_{j=1}^{J} \mathbb{E}[X_j - X_{j-1}], \qquad (4) $$
the multilevel method consists in building a procedure based on the addition of Monte-Carlo approximations of $\mathbb{E}[X_0]$ and of $\mathbb{E}[X_j - X_{j-1}]$, $j = 1, \ldots, J$. Then, if the random variables $X_j - X_{j-1}$ have low variance, the approximation of $\mathbb{E}[X_j - X_{j-1}]$ requires few simulations and, in view of (4), we can obtain a procedure which has the bias related to $X_J$ but with a cost which may be much lower than the one generated by a standard Monte-Carlo method applied to estimate $\mathbb{E}[X_J]$.
In the discretization setting, the family of random variables (X_j)_j is a sequence of Euler schemes (X̄^{γ_j})_j, where (γ_j)_j is a family of decreasing time steps. Following the heuristic (4), the (independent) correcting layers are built by coupling Euler schemes with steps γ_{j−1} and γ_j. Note that, in view of the simulation of the (synchronous) coupling, we need γ_{j−1} to be a multiple of γ_j (in this paper, we will assume that γ_j = γ_0 2^{−j}).
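Below is a sketch of the synchronous coupling used for one correcting layer, assuming γ_{j−1} = 2γ_j so that each coarse step reuses the sum of two fine Brownian increments; the function name and return values are illustrative.

```python
import numpy as np

def coupled_euler_pair(grad_U, x0, gamma_fine, n_coarse_steps, sigma, rng=None):
    """Synchronously coupled Euler schemes with steps gamma_fine and 2*gamma_fine,
    driven by the same Brownian increments; returns the two terminal states."""
    rng = np.random.default_rng() if rng is None else rng
    x_f = np.array(x0, dtype=float)   # fine scheme, step gamma_j
    x_c = np.array(x0, dtype=float)   # coarse scheme, step gamma_{j-1} = 2 gamma_j
    for _ in range(n_coarse_steps):
        dB1 = np.sqrt(gamma_fine) * rng.standard_normal(x_f.shape)
        dB2 = np.sqrt(gamma_fine) * rng.standard_normal(x_f.shape)
        # two fine steps
        x_f = x_f - gamma_fine * grad_U(x_f) + sigma * dB1
        x_f = x_f - gamma_fine * grad_U(x_f) + sigma * dB2
        # one coarse step, reusing the aggregated Brownian increment
        x_c = x_c - 2 * gamma_fine * grad_U(x_c) + sigma * (dB1 + dB2)
    return x_f, x_c
```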
Multilevel methods have already been studied in the literature for the approximation of the invariant distribution of the Langevin diffusion. In [GMS+20], the authors take advantage of the convergence in distribution to equilibrium; the classical Monte Carlo point of view is thus adopted: the approximation of π(f) is obtained by sampling a large number of Euler schemes for each level. In [PP18] and [EP21], the point of view is to take advantage of the convergence of the occupation measure. Thus, each level is based on only one path of the Euler scheme, or of the couple of Euler schemes, whose length decreases (since the variance of the correcting layers decreases) and whose discretization step increases. All these papers show that, in the uniformly strongly convex setting, the invariant distribution can be approximated (along Lipschitz continuous functions) with a precision ε (in an L²-sense) using a multilevel procedure whose complexity is of order ε^{-2} or ε^{-2} log^p(ε) with p ∈ [1, 3]. Moreover, in [EP21], particular attention is paid to the dependency in the dimension. In this case, it is shown that one can build a multilevel procedure that produces an ε-approximation of the target for a complexity cost proportional to dε^{-2} (with an explicit expression of the dependence in the Lipschitz constant L and the contraction parameter α).
The more involved weakly convex case seems to be less explored in the multilevel paradigm but, in view of applications (for instance for Bayesian Lasso), it is natural to ask about the robustness of these methods when one relaxes the contraction assumption.
Contributions and plan of the paper
The main goal of this paper is thus to extend the multilevel Langevin algorithm for the Gibbs sampling to the weakly convex setting, and if possible to obtain some quantitative bounds for the complexity related to the computation of an ε-approximation of the target (see Section 1.4 for a definition of ε-approximation).
We first investigate the penalized multilevel method: in the continuity of [DK19] and [DKRD22], we build a multilevel procedure based on the following observation: consider a new equation with another potential U_α(x) := U(x) + (α/2)|x|²; this new equation has an invariant distribution, denoted π_α, which converges to π in Wasserstein distance when α tends to 0. The idea is that this new invariant distribution is easier to sample because of the uniform convexity of the potential U_α. In Section 2.1, Theorem 2.1 combines the benefits of the penalized approach and of the multilevel methods. For a Lipschitz-continuous function f : R^d → R and a C²-potential U, the multilevel procedure performs an ε-approximation of π(f) with a complexity cost proportional to π(|.|²)³ d ε^{-5}. As in [DKRD22], our result depends on the generally unknown constant π(|.|²), which is at least proportional to d (see Remark 2.1 for details and comparisons with [DKRD22]).
Because of the above remarks, we chose in a second part to try to develop some tools which tackle the weakly convex setting from a dynamic point of view and which can improve the complexity in terms of ε. More precisely, in the spirit of [GPP20], we study an intermediary framework (called the parametric weakly convex setting in the sequel). We assume that the eigenvalues of the Hessian matrix of U vanish when |x| goes to +∞, but with a rate controlled by the function x → U^{-r}(x) with r ∈ [0, 1) (see Assumption (H¹_r)). The parameter r characterizes the "lack of strong convexity", the case r = 1 referring to the "Laplace case" whereas r = 0 corresponds to the uniformly convex setting. Under such an assumption, one can get some bounds for the exponential moments of the Euler scheme (at the price of technicalities). One is also able to preserve some confluence properties, i.e. two paths of the Euler scheme tend to stick together in long time. Finally, in this setting, it is also possible to control the distance between diffusion paths and the related Euler schemes. These three ingredients (obtained with lower quality than in the strongly convex setting) allow us to tackle the multilevel procedure in this framework.
The related main contribution is Theorem 2.2. In this result, we provide a series of statements under different sets of assumptions: when U is only C² or when U is C³, and under (H¹_r) only or under (H¹_r) and (H²_r), where (H²_r) denotes an additional assumption requiring the highest eigenvalue to also be controlled by the function x → U^{-r}(x) (we could roughly say that, under (H¹_r) and (H²_r), the potential is uniformly weakly convex in the sense that "the decrease of the contraction is uniform"). In each statement, we provide a multilevel procedure adapted to the assumptions. The related complexity is exhibited in terms of d and ε, but also in terms of the contraction parameter and the Lipschitz constant. Without going too much into the details, when U is only C², the complexity is of the order ε^{-3}, whereas when U is C³, we can obtain a rate of the order ε^{-2-ρ} for any ρ > 0 and thus approach the "optimal" complexity ε^{-2}. Now, in terms of the dimension, when only (H¹_r) holds, the dependence in the dimension of the complexity is bounded by an explicit power of d (given in Theorem 2.2) for any ρ > 0 when U is C³. When U is C³ and (H¹_r) and (H²_r) hold, the complexity is of the order ε^{-2-ρ} d^{1+ρ/2+4r} for any ρ > 0. With respect to the paper [GPP20], our multilevel procedure improves the dependence in ε and is roughly comparable in terms of the dimension. Note that, when only (H¹_r) holds, the dependency in the dimension dramatically increases with r, whereas, when the potential is uniformly weakly convex, the dependence in the dimension does not explode when r → 1 (see Theorem 2.2 for more details).
Plan of the paper. As detailed in the previous paragraphs, Sections 2.1 and 2.2 are respectively devoted to the statements of the main theorems for the penalized multilevel method and in the parametric weakly convex case. Then, Section 3 is dedicated to the proof of the first main theorem (Theorem 2.1). In Proposition 3.2, we obtain a Wasserstein bound related to the bias induced by the penalization on the invariant distribution. The proof of Theorem 2.1 is then an adaptation of [EP21, Theo 2.2]. From Section 4 on, we focus on the proof of Theorem 2.2. In Section 4, we prove some preliminary results on the diffusion and its Euler scheme under (H¹_r): we begin with some controls of the exponential moments (Proposition 4.1 and Proposition 4.2), which in turn lead to some bounds on the polynomial moments (Proposition 4.3). In this section, we also show that the discretization error can be controlled in long time (Proposition 4.4) and finally obtain an integrable rate of convergence to equilibrium for the Euler scheme (Proposition 4.5). With the help of these fundamental bricks, in Section 5 we obtain some bounds on the bias (Proposition 4.7) and on the variance of the procedure (Proposition 5.2), which in turn allow us to finally provide the proof of Theorem 2.2.
Design of the algorithm
We now build the multilevel procedure. Let x ∈ R^d be the initialization of the procedure, J ∈ N be the number of levels, (γ_j)_{0≤j≤J} be a sequence of time steps and (T_j)_{0≤j≤J} be a sequence of final times. Denote by Y(J, (γ_j)_j, τ, (T_j)_j, x, ·) the multilevel occupation measure: for all f : R^d → R, Y(J, (γ_j)_j, τ, (T_j)_j, x, f) is obtained by adding to the occupation measure of the coarsest Euler scheme the correcting layers built from differences of occupation measures of coupled Euler schemes, where the trajectories on each level are coupled with the same Brownian motion. To ease notation, we write Y(f) for Y(J, (γ_j)_j, τ, (T_j)_j, x, f). The parameter τ ≥ 0 is the time at which we begin the average. Indeed, this "warm-start trick" may improve the precision of the estimation in some cases (we refer to [EP21] for more details), but it can be set to 0 when the gain is not significant, for example in the second part of our main result.
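The sketch below assembles the earlier pieces into a hypothetical version of the multilevel occupation measure Y(f): level 0 averages f along a single Euler path, and each level j ≥ 1 averages f(fine) − f(coarse) along a synchronously coupled pair with steps γ_j = γ_0 2^{−j} and γ_{j−1}. The equal-weight averaging, the application of the warm-start τ to every level and all names are assumptions made for illustration; the exact definition is the displayed formula of the paper.

```python
import numpy as np

def multilevel_occupation(grad_U, x0, gamma0, levels_T, tau, sigma, f, rng=None):
    """Illustrative multilevel occupation-measure estimator of pi(f).
    levels_T = [T_0, ..., T_J] are the (decreasing) horizons of the levels."""
    rng = np.random.default_rng() if rng is None else rng
    d = np.size(x0)
    estimate = 0.0
    for j, T_j in enumerate(levels_T):
        gamma_f = gamma0 * 2.0 ** (-j)        # fine step gamma_j
        n_fine = int(T_j / gamma_f)           # assumes T_j > tau
        x_f = np.array(x0, dtype=float)
        x_c = np.array(x0, dtype=float)       # coarse scheme (unused at level 0)
        vals = []
        for n in range(n_fine):
            dB = sigma * np.sqrt(gamma_f) * rng.standard_normal(d)
            x_f = x_f - gamma_f * grad_U(x_f) + dB
            if j == 0:
                if n * gamma_f >= tau:
                    vals.append(f(x_f))
            else:
                if n % 2 == 0:
                    dB_stored = dB            # first half of the coarse increment
                else:
                    # coarse step reuses the two fine Brownian increments
                    x_c = x_c - 2 * gamma_f * grad_U(x_c) + dB_stored + dB
                if n * gamma_f >= tau:
                    vals.append(f(x_f) - f(x_c))
        estimate += float(np.mean(vals))
    return estimate
```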
Notations
The usual scalar product on R^d and the induced Euclidean norm are respectively denoted by ⟨·, ·⟩ and |·|. The set M_{d,d} refers to the set of d × d real matrices, and we denote by ‖·‖ the operator norm associated with the Euclidean norm. For a symmetric matrix A, we denote respectively by λ_A and λ̄_A its lowest and highest eigenvalues; the Frobenius norm of A ∈ M_{d,d} is denoted by ‖A‖_F. A function f is said to be C^k if all its partial derivatives are well-defined and continuous up to order k. The gradient and the Hessian matrix of f are respectively denoted by ∇f and D²f. The probability space is denoted by (Ω, F, P). The Laplace operator is denoted by ∆: ∆f = Σ_{i=1}^d ∂²_{i,i} f. The L^p-norm on (Ω, F, P) is denoted by ‖·‖_p. For two probability measures µ and ν, we define the Wasserstein distance of order p by W_p(µ, ν) = (inf_{π∈Π(µ,ν)} ∫ |x − y|^p π(dx, dy))^{1/p}, where Π(µ, ν) is the set of couplings of µ and ν.
• ≲_P and ≲_uc: For two positive real numbers a and b and a set of parameters P, one writes a ≲_P b if a ≤ C_P b, where C_P is a positive constant which only depends on the parameters P. When a ≤ Cb where C is a universal constant, we write a ≲_uc b.
• ε-approximation: We say that Y is an ε-approximation of a quantity a if ‖Y − a‖_2 ≤ ε; equivalently, Y is said to be an ε-approximation of a if the related Mean-Squared Error (MSE) is at most ε².
• Complexity/ε-complexity: For a random variable Y built with some iterations of a standard Euler scheme, we denote by C(Y) the number of iterations of the Euler scheme which is needed to compute Y. For instance, C(X̄^γ_{nγ}) = n. We sometimes call the ε-complexity of the algorithm the complexity of the algorithm which produces an ε-approximation.
The penalized approach
In this section, we develop a penalized multilevel method to sample a non-strongly log-concave probability distribution π. The idea is based on [DKRD22] and [DK20], where the authors consider the potential U_α(x) := U(x) + (α/2)|x|² with α > 0, which is called the penalized version of U. We here assume that U satisfies the following assumption: WC_L: U is a non-zero C²-function and there exists L > 0 such that 0 ≤ D²U(x) ≤ L I_d for all x ∈ R^d, the inequalities being taken in the sense of symmetric matrices. Denote by π_α the invariant measure of the diffusion process (X^α_t)_{t≥0}, solution of the stochastic differential equation obtained by replacing U with U_α in (1). It appears that π_α satisfies the Bakry-Émery criterion, so we can apply our multilevel method, which requires strong convexity, to approximate π_α. But our target is π, so we have to control the distance (in a Wasserstein sense) between π and π_α. To this end, the results of [BV05] and [DKRD22] ensure the convergence of π_α when α goes to 0, with a bound in terms of the Kullback-Leibler divergence. This leads to the following theorem. Theorem 2.1. Assume that WC_L holds. Let ε > 0, let f : R^d → R be a Lipschitz function and let x ∈ R^d. Then, for an appropriate choice of the penalization parameter α and of the multilevel parameters, Y(f) is an ε-approximation of π(f) with a complexity proportional to π(|.|²)³ d ε^{-5}; this implies that the complexity is of the order d⁴ ε^{-5}.
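In practice, sampling π_α only requires shifting the gradient, so the Euler sketch given earlier can be reused; alpha below is an illustrative value, and grad_U, x0, gamma, n_steps, sigma refer to the variables of that sketch.

```python
# Penalized potential U_alpha(x) = U(x) + (alpha/2)|x|^2: its gradient adds alpha * x,
# so the same Euler-Maruyama routine can be used to target pi_alpha.
alpha = 0.1                                    # illustrative penalization level
grad_U_alpha = lambda x: grad_U(x) + alpha * x
# path_alpha = euler_maruyama(grad_U_alpha, x0, gamma, n_steps, sigma)
```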
Let us now compare with [DKRD22]: note that, in this paper, the cost is not explicitly written. As usual in the Monte Carlo literature, the authors control the number of iterations of the Euler scheme which is necessary to draw a random variable whose distance to the target is lower than ε, instead of giving the real cost. In the Langevin Monte Carlo case, when U is C², they then obtain a number of iterations which is, up to logarithmic terms, of order π(|.|²)dε^{-4}. Normalizing ε (i.e. replacing ε by ε/√(π(|.|²))) leads to a number of iterations of order π(|.|²)³dε^{-4}. But to compare with our work, we need to include the Monte Carlo cost, i.e. the number of simulations which is necessary to make the variance lower than ε². Then, we have to multiply the previous number of iterations by Var_π(f)ε^{-2}, which can be reasonably bounded by π(|.|²)ε^{-2} (when f is Lipschitz). This means that the complexity of the penalized Langevin Monte Carlo in [DKRD22] is of the order π(|.|²)⁴dε^{-6}. In consequence, the multilevel method allows us to improve the result of [DKRD22]. Note that the authors also provide other algorithms such as the Kinetic-LMC, where the bound in ε is improved (it seems that our result matches the complexity given for this algorithm).
About decreasing penalization. In the above result, we propose a multilevel strategy based on a fixed penalization. A natural question arises: could we take advantage of the multilevel strategy by imposing the small penalization needed for the bias only at the highest level and allowing larger penalizations on the lower layers? Indeed, this is precisely what we do with the discretization bias, so we can wonder about the effect of such a strategy for the penalization. To this end, let us introduce a decreasing sequence (α_j)_{0≤j≤J} of penalization levels such that the bias induced by the distance between π_{α_J} and π is small with respect to the required precision ε.
More specifically, we want to replace (4) with a telescoping series in which both the discretization step and the penalization level vary across the layers. However, such a decomposition requires several long-time bounds on the underlying dynamics in order to yield an efficient multilevel procedure. In particular, for the control of the variance generated by each level, we need to control the pathwise distance between two paths of the dynamics related to penalization levels α_j and α_{j−1}. But, contrary to the constant-penalization case where we can obtain some confluence properties, we can observe that, for two trajectories computed with two different degrees of penalization α and α̃, there is a "lack of confluence" quantified by an inequality whose bound dramatically depends on α̃. In this inequality, we voluntarily treat the continuous case to ease readability, but an analogous discrete result can be shown. We refer to Section 3 for the proof of this result. This inequality is certainly related to the shape of the penalization sequence (α_j)_{0≤j≤J}.
In fact, this result must be considered in addition to the error induced by the difference between two Euler schemes with different time steps, for which, up to a universal constant, we have the bound of [EP21, Prop 5.1]. Then, so that the additional variance generated by the decrease of the penalization does not have an impact on the results, we have to impose a relation of the type γdσ²/α_j² ≳ (α_j − α_{j−1})dσ²/α_{j−1}. In particular, the sequence (α_j)_{0≤j≤J} cannot decrease too fast. Going further into the computations, it seems that we cannot expect a significant gain with this approach.
Parametric weakly convex setting
The purpose of this section is to study the non-penalized multilevel procedure in the weakly convex setting.Instead of penalizing the dynamics, it is actually natural to ask about the robustness of the "standard" multilevel method in this case.To answer this question, we have to prove a series of properties in the spirit of the assumptions H i (i ∈ {1, 2, 3, 4}) in [EP21].These assumptions include the convergence to equilibrium of the Euler scheme with a quantitative rate, the long-time control of the L 2 -distance between the Euler scheme and the diffusion, the control of the Wasserstein distance between π γ and π and the control of the moments.Some of these properties (especially the long-time control of the L 2 -distance) seem hard to check in a general convex setting.We thus propose to work in the parametric weakly convex setting used in [EP21] by introducing (H 1 r ) (see below) where we assume that the contraction vanishes at ∞ but with a rate controlled by U −r .
Let us now introduce our assumptions depending on a parameter r ∈ [0, 1): (H¹_r): U is positive and there exist L and c > 0 such that ∇U is L-Lipschitz and the lowest eigenvalue of D²U(x) is bounded from below by c U^{-r}(x) for all x ∈ R^d. The lower bound can be seen as the "lack of uniform strong convexity" of the potential. Indeed, if r = 0 we recover strong convexity, whereas r = 1 corresponds to the weakest convexity case where the gradient is flat at infinity. The fact that ∇U is L-Lipschitz implies that x → λ̄_{D²U(x)} is upper-bounded by L. In order to improve the dependence in the dimension, we also introduce an additional assumption that deals with the case where the largest eigenvalue decreases at infinity with an intensity of the same order as the lowest eigenvalue: (H²_r): There exists a positive c̄ such that for all x ∈ R^d, λ̄_{D²U(x)} ≤ c̄ U^{-r}(x).
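Written out, the two assumptions take the following form (a reconstruction from the surrounding discussion; the exact names of the constants may differ in the original):

```latex
(H^1_r):\quad U>0,\ \ \nabla U \ \text{is } L\text{-Lipschitz and }\ \forall x\in\mathbb{R}^d,\ \
\underline{\lambda}_{D^2U(x)} \;\ge\; c\,U^{-r}(x);
\qquad
(H^2_r):\quad \exists\,\bar c>0,\ \forall x\in\mathbb{R}^d,\ \
\overline{\lambda}_{D^2U(x)} \;\le\; \bar c\,U^{-r}(x).
```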
Explicit examples of potentials satisfying these assumptions can be exhibited. Let us finally define the couple (γ⋆, Ψ) by (11), where c_r is a constant which only depends on r: γ⋆ will denote the largest admissible value for γ_0, whereas Ψ controls the moments of U(X̄_{nγ}) (see Proposition 4.3 for details). It is worth noting that, on the one hand, γ⋆ does not depend on d and, on the other hand, that Ψ ∝ d^{1/(1−r)} when only (H¹_r) holds and Ψ ∝ d when (H¹_r) and (H²_r) hold true. This means that in the first case the dependence in d dramatically increases with r, whereas in the second case it does not depend on r. In the next result, the reader will have to keep in mind that the definition of these parameters depends on the assumptions. In particular, even if (H²_r) does not appear in the statement, it is hidden in the value of the parameters γ⋆ and Ψ.
We are now ready to state our main theorem in this setting. Theorem 2.2. Assume (H¹_r) and let x ∈ R^d be such that U(x) ≲_r Ψ, γ_0 ∈ (0, γ⋆], δ ∈ (0, 1/4], and let f be a Lipschitz-continuous function. For an integer J ≥ 1, set γ_j = γ_0 2^{−j} for all j ∈ {1, . . ., J}, together with suitable final times (T_j)_j and warm-start τ. Then, for δ small enough, Y(J, (γ_j)_j, τ, (T_j)_j, x, f) produces an ε-approximation of π(f) with a complexity cost whose dependence in ε, d, L, c and c̄ is detailed in Remark 2.2 below. Remark 2.2. This technical result deserves several comments: ⊲ Complexity in terms of ε. If we only consider the dependence in ε, we obtain ε^{-3} when U is only C² and ε^{-2-ρ} for any ρ > 0 when U is C³ and an additional (but reasonable; see Remark 4.8 for details) assumption on ∆(∇U) is satisfied. We can thus theoretically approach the complexity in ε^{-2}. However, it is worth noting that the non-explicit constants depending on ρ and δ go to ∞ (independently of the other parameters) when ρ and δ go to 0. The fact that we "only" obtain a complexity in ε^{-3} when U is C² is due to the fact that, in this case, our bound of the 1-Wasserstein distance between π_γ and π is of the order √γ. When U is C³, the bound on the 1-Wasserstein distance between π_γ and π is of order γ. This allows us to clearly improve the complexity, but it can be noted that we do not retrieve the ε^{-2}-bound of the uniformly convex case. This is due to the rate of convergence to equilibrium: our rate is polynomial and not exponential, which in turn implies a "slight cost" on the dependence in ε. In fact, we could get some (sub)-exponential rates, but without controlling the dependence in the dimension, which is of first importance for applications.
⊲ Complexity in terms of the dimension. The dependence in the dimension strongly varies with the assumptions. In the "worst" case, where U is only C² and only (H¹_r) holds, the complexity is of the order of a power of d which deteriorates as r increases (see Theorem 2.2). Unfortunately, when r is close to 1, this dependence seriously worsens. We retrieve the same phenomenon when U is C³ and only (H¹_r) holds, but with a better power of d, for any positive ρ and δ. This bad behavior when r goes to 1 is due to the fact that, when only (H¹_r) holds, the bounds on the exponential moments sup_{t≥0} E[e^{U(X̄_t)}] are of the order e^Ψ with Ψ ∝ d^{1/(1−r)}; adding (H²_r) dramatically improves this exponential bound since, in this case, we are able to prove that it is of the order e^d (this implies that sup_{t≥0} E[U^p(X̄_t)] is of the order d^p, see Propositions 4.3 and 4.2 for details). It is worth noting that in this case the dependence in the dimension does not explode when r goes to 1, being of the order d^{1+ρ/2+(4−ρ+δ)r} for any ρ > 0. Remark that when r = 0, we formally approach the rate dε^{-2} of the uniformly convex case obtained in [EP21].
⊲ Comparison with the literature: In this setting, the only paper with which we may reasonably compare is [GPP20], since we use similar assumptions. Compared with this paper, our multilevel procedure certainly improves the dependence in ε, replacing ε^{-4} by ε^{-3} when U is C² and ε^{-3} by ε^{-2-ρ} when U is C³. In terms of the dimension, our approach slightly increases the dependence. For instance, when (H¹_r) and (H²_r) hold, [GPP20] obtain a bound in d^{1+4r} when U is C² or C³. We here retrieve a somewhat similar dependence when U is C³, but when U is C², our bound in d^{3/2+(9/2+δ)r} is clearly worse.
⊲ About the parameters. In applications, the dependence in the parameters L, c and c̄ may be of importance (think for instance of applications to Bayesian estimation, where these parameters can strongly depend on the number of observations). This is why, here, we chose to keep all these dependencies in the main result, even if it sometimes adds many technicalities in the proof.
Proof of Theorem 2.1
This section is devoted to the proof of the first main result.We first quantify the bias induced by the approximation of π by π α .To this end, we use the Talagrand concentration inequality that estimates the Wasserstein distance between these two measures by their Kullback-Leibler divergence.
Proposition 3.1. Assume that WC_L holds. Then for all α ≥ 0, there is a constant C such that the squared 2-Wasserstein distance between π and π_α is bounded by C times their Kullback-Leibler divergence. We refer to [BV05, Cor 2.4] for a proof of this result. In addition, in [DKRD22] the authors show that C ≤ 2E_π[|X|²] (page 24). It remains to compute the Kullback-Leibler divergence of π from π_α to bound the bias induced by the penalization. Proposition 3.2. Assume that WC_L holds. Then for all α ≥ 0, the Kullback-Leibler divergence between π_α and π is controlled explicitly in terms of α and the moments of π. Proof. The Kullback-Leibler divergence is computed from the definitions of π and π_α; using the inequality e^{-x} ≤ 1 − x + x²/2 for x ≥ 0 and then the inequality log(1 − x) ≤ −x for x ≤ 1, we get the announced bound, and Proposition 3.1 applied to π_α implies the result. Now we switch to the proof of the main theorem. With the two previous propositions, we control the bias induced by the penalization, so it remains to compute the error and the complexity of a multilevel procedure in a uniformly convex setting. To this end, we use [EP21, Theorem 2.2], which gives parameters to perform an ε-approximation of the invariant distribution with an explicit complexity in terms of the parameters, especially in terms of the contraction parameter. Here, this is exactly our penalization parameter α, and we will thus optimize its choice in the proof.
Proof (of Theorem 2.1). Let ε be a positive number and f : R^d → R be a Lipschitz continuous function. By the bias/variance decomposition, the triangular inequality and the Monge-Kantorovich duality, the mean squared error splits into a penalization bias term and a second term, denoted by P_2, which is the mean squared error of a multilevel procedure for the approximation of π_α(f). This penalized measure is invariant for the diffusion process defined with the potential U_α. By assumption WC_L, U_α satisfies the strong convexity property required in [EP21, Theorem 2.2]; applying this result with the corresponding parameters, we obtain an ε-approximation of π_α(f) with an explicit complexity. It remains to calibrate the penalization parameter α: Proposition 3.2 provides the bound on the penalization bias, and plugging the resulting choice of α into (15) and (16) yields the announced complexity. Precisions about the "decreasing penalization": for α > α̃ > 0 and x ∈ R^d, consider the couple (X^{x,α}_t, X^{x,α̃}_t)_{t≥0} driven by the same Brownian motion. Proof. By the Itô formula, an elementary inequality valid for all x ∈ R and the strong convexity of the penalized potentials, we control the squared distance between the two trajectories. Since, up to a universal constant, the second-order moment of the diffusion process under the strong convexity hypothesis is bounded by σ²d/α̃ (see [EP21, Lem 5.1]), we get the announced bound. The result follows.
4 Preliminary bounds under (H¹_r) and (H²_r)
From now on, we switch to the proof of the second part of the main results, i.e. we consider the weakly convex case under the parametric assumptions (H¹_r) and (H²_r). As mentioned before, these hypotheses deal with the behavior of the lowest and highest eigenvalues of the Hessian matrix of U. In some sense, (H¹_r) quantifies the strict convexity of the potential, which in turn implies the contraction of the dynamics. Note that such an assumption also appears in [CFG22], where the authors obtain exponential rates to equilibrium under this parametric assumption.
In this preliminary section we state a series of results related to the diffusion and its Euler scheme under Assumption (H¹_r). For the upper bounds of the eigenvalues of D²U, we distinguish two cases: in the first one, we assume that we have a uniform upper bound by L (in other words, that ∇U is L-Lipschitz); in the second one, we add Assumption (H²_r), where the largest eigenvalues also decrease at infinity with a rate comparable to that of the lowest eigenvalues. In fact, in the second case, we will see that we are able to preserve a dependency of the moments in the dimension which is linear, whereas, without this assumption, the dependency is of order d^{1/(1−r)}. In the second part, we state a result about the long-time pathwise control of the distance between the diffusion and its Euler scheme. Third, we study the convergence to equilibrium of the Euler scheme. Finally, we quantify the bias induced by the discretization with some results on the 1-Wasserstein distance between π and π_γ (the invariant measure of the Euler scheme).
Bounds on the exponential moment
In order to study the confluence between the continuous-time process and its Euler scheme, let us start this section with a control of the moments of the continuous-time and discrete-time processes when the potential U is convex. We first state a result on the control of the exponential moments of the continuous-time process. Proposition 4.1. For all x ∈ R^d and t > 0, an explicit bound on the exponential moment of U(X^x_t) holds, with a refined constant under (H¹_r) and (H²_r). We preface the proof with a technical lemma.
In particular, C M is a compact set (since it is included in a level set of a coercive function).
Proof. Denote by y the solution of the ordinary differential equation y′(t) = −∇U(y(t)) starting from y(0) = x. Define the function f : t → |∇U(y(t))|², whose derivative is computed by the chain rule for all t ∈ R_+. Since lim_{t→+∞} y(t) = x⋆, we get the first estimate by integration. Therefore, since ∆U = Tr(D²U) ≤ d λ̄_{D²U} ≤ dL (where λ̄_A stands for the largest eigenvalue of a symmetric matrix A) and U(x⋆) = 1, the announced lower bound follows. If we now consider the case where (H²_r) also holds, we use that ∆U(x) ≤ c̄ d U^{-r}(x) to obtain a refined bound. To ensure that the right-hand member is lower-bounded by M, it is enough to take the level set large enough. This concludes the proof.
Proof (of Proposition 4.1). Let θ ∈ (0, 1) (to be chosen later) and, for all x ∈ R^d, define f_θ; we show that f_θ is a Lyapunov function for the generator L, where C_M is defined in Lemma 4.1. In the proof of this lemma we showed that C_M is included in a level set of U. Finally, f_θ is a Lyapunov function for the dynamics, i.e. a drift inequality holds for all x ∈ R^d. Hence, by a Gronwall argument, we get the stated bound; using Lemma 4.1 under (H¹_r) and (H²_r) leads to the result. We now state an analogous result for the Euler scheme.
(ii) Assume (H¹_r) and (H²_r). If γ ∈ (0, (1−r)/(4c ∨ L)] and θ ∈ [0, 1/(8σ²) ∧ 1], then for all x ∈ R^d and n ∈ N the stated bound holds, where c denotes a constant independent of the parameters and c_r a constant which only depends on r.
The proof of this proposition is postponed to Appendix A.
Remark 4.1. The reader will find more explicit (but more technical) bounds in the proof of the second case. It is worth noting that we can preserve a condition on γ which does not depend on d (as in the strongly convex setting). This is of first importance in our multilevel setting, where it is much more efficient if the rough layers of the method can be implemented with large step sizes. The proof is very close to [GPP20], but the bounds are refined. In particular, compared to that paper, we precisely do not require that the step size decreases with d.
Thanks to the two previous results, we are now able to control the moments of the continuous-time and discrete-time processes (Proposition 4.3): (i) a bound holds for the continuous-time process, with a refined constant under (H¹_r) and (H²_r); (ii) let γ ∈ (0, γ⋆] with γ⋆ defined by (11); then an analogous bound holds for the Euler scheme, where Ψ is defined by (11); (iii) in particular, the polynomial moments of U along both processes are controlled in terms of Ψ. Proof. The proof, similar to [GPP20, Prop B.4], is postponed to Appendix B.
Remark 4.2. In order to avoid the distinction between cases in all the proofs, we choose to adopt only one notation for the two quantities Ψ and Ψ̃ appearing in Proposition 4.3, but the reader has to keep in mind that the definition of these quantities depends on whether (H²_r) is satisfied or not. Let us also recall that the notation ≲_{r,p} means that the underlying constant only depends on r and p. These constants are certainly locally bounded: for any compact subset K of [0, 1) × [1, +∞), there exists a universal constant c such that, for any (r, p) ∈ K, the underlying constant c_{r,p} related to ≲_{r,p} is bounded by c. Finally, note that we chose to keep all the dependencies in the other parameters.
Remark 4.3.The control of the L 2 -distance between the diffusion and its Euler scheme is a fundamental property for the efficiency of the multilevel method.Actually, it allows us to control the variance of each level.The fact that we are able to obtain such a property in this (semi)-weakly convex setting is new.
We start with two technical lemmas.
Lemma 4.2. Assume (H¹_r); then the stated bound holds for all x ∈ R^d. Proof. First, one can check that, for all x ∈ R^d and for every eigenvalue of the Hessian, a lower bound holds, where in the last inequality we have used Assumption (H¹_r). By the Taylor formula applied between x and x⋆, with intermediate point ξ_λ = λx + (1 − λ)x⋆, and by (23) together with the fact that ∇(U^{1+r})(x⋆) = 0, we get the result. This concludes the proof.
The next lemma is a bound on the moments of the increment of the Euler scheme (with the notations γ ⋆ and Ψ introduced in Proposition 4.3).
Then, for all t > 0 and k ∈ N*, the stated moment bound holds. Proof. By the definition of the Euler scheme and the Minkowski inequality, we bound the increment, where in the last line we used the L-Lipschitz continuity of ∇U and the fact that B_t − B_{t_γ} ∼ √(t − t_γ) Z with Z ∼ N(0, I_d). Finally, by Lemma 4.2 and Proposition 4.3, we obtain the result. We are now ready to prove Proposition 4.4.
Proof (of Proposition 4.4). Let x ∈ R^d and consider the difference process between the diffusion and its Euler scheme. To control the first term E_1, we use a Taylor expansion. For the second term E_2, using the inequality ⟨a, b⟩ ≤ (ξ_t/2)|a|² + (1/(2ξ_t))|b|² and the fact that ∇U is L-Lipschitz, we get a second estimate. Combining both bounds, a Gronwall argument, taking the expectation and Fubini's theorem yield an integral inequality involving the integrand A_s.
Now let φ be a real non-negative function. Using (H¹_r), the convexity of x → U(x) and of t → t^{-r}, and the Jensen inequality, we obtain a first bound. Thus, by the inequality (a + b)^p ≤ a^p + b^p for a, b ≥ 0 and p ∈ [0, 1], and then by the Cauchy-Schwarz inequality, Proposition 4.3(iii) and Lemma 4.3, we get the next estimate, where in the last line we used that dσ² ≤ Ψ and γL ≤ 1.
For the second term, we use the Cauchy-Schwarz inequality.
With the help of Inequality (25) and Proposition 4.3, we bound the first two factors. For the third term of this product, let κ be a positive number and use the Markov inequality; the function x → x^{-κ} being convex on (0, +∞), it follows from the Jensen inequality, inequality (25) and Proposition 4.3 that the third factor is also controlled. Now let φ(s) = (t − s)/(t + 1 − s)^{1−δ}, with δ ∈ (0, 1) and κ = 2(1 + δ)/(1 − δ). As a consequence, since γL ≤ 1 and dσ² ≤ Ψ, we obtain the announced estimate. Back to (24), we deduce the conclusion from (26) and from the above inequality. The result follows.
Convergence to equilibrium for the Euler Scheme under (H 1 r ):
We now proceed to establish the weak error between the discrete semi-group and its invariant measure, denoted by π_γ. The proof of this result is based on the control of the so-called tangent process T^x_t := ∇_x X^x_t.
Remark 4.4. ⊲ In order to lighten the presentation, the result is stated under the assumption that the initial condition x satisfies U(x) ≲_r Ψ, but the reader will find some bounds without this assumption in the proofs.
⊲ The function h_{φ,κ} plays the role of the convergence rate to equilibrium. In this setting, where the Hessian is not uniformly lower-bounded, we adopt a strategy which consists in separating the space into two parts. In the first one, we assume that we have some good contraction properties parametrized by the function φ and, in the other one, we simply try to control the probability that such a good contraction does not occur. This leads to a balance between two terms depending on φ and κ. In the following, we will choose φ and κ so that h_{φ,κ} is summable with the smallest impact on the dependence in the dimension.
Note that in [CFG22], some exponential rates are exhibited under similar assumptions in the continuous case (with the help of concentration inequalities).However, this exponential rate depends on some constants whose control seems to be difficult to obtain (typically, when the starting distribution is absolutely continuous with respect to the invariant distribution, the constants involve the L 2 -moment of the related density).Probably, some ideas could be adapted to the Euler scheme (starting from a deterministic point) but with technicalities that seem to carry us too far for this paper.
We preface the proof of Proposition 4.5 by a lemma about the shape of the first variation process of the continuous time Euler scheme, T x t = ∇ x Xx t .
Lemma 4.4. For all n ∈ N, x ∈ R^d and γ ∈ [0, 1), the first variation process of the Euler scheme admits an explicit product form. Proof (of Lemma 4.4). First, observe that, for all n ∈ N, by the definition of the Euler scheme and the chain rule, the tangent process satisfies a linear recursion, and the proof follows by a simple induction.
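The recursion obtained by differentiating the Euler update, and the resulting time-ordered product form that the induction yields, presumably read as follows (a reconstruction consistent with the proof sketch, with the product taken in decreasing order of k from left to right):

```latex
\bar T^x_{(n+1)\gamma} \;=\; \bigl(I_d - \gamma\,D^2U(\bar X^x_{n\gamma})\bigr)\,\bar T^x_{n\gamma},
\qquad \bar T^x_{0}=I_d,
\qquad\text{so that}\qquad
\bar T^x_{n\gamma} \;=\; \prod_{k=0}^{n-1}\bigl(I_d - \gamma\,D^2U(\bar X^x_{k\gamma})\bigr).
```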
Consider two paths defined with the same Brownian motion and different starting points x, y ∈ R^d. The following proposition shows that there is a pathwise confluence, i.e. the two trajectories get closer as n goes to infinity. Proposition 4.6. Assume (H¹_r) and let x, y ∈ R^d, γ ∈ (0, γ⋆], κ > 0. Let φ : R_+ → R be a positive function. Then the stated confluence bound holds, with γ⋆ and Ψ given by Proposition 4.3. Remark 4.5. In the sequel, this property is typically applied with a polynomial function φ, which leads to polynomial rates to equilibrium. It is worth noting that the proof could be adapted to provide exponential rates (the idea would be to consider an exponentially decreasing convex function instead of x → x^{-κ} in the proof below). However, with our method, such rates would lead to an exponential dependence in the dimension. This is why we do not give such bounds here.
Proof (of Proposition 4.6). For x, y ∈ R^d and n ∈ N, let us start with a Taylor expansion of the map x ↦ X̄^x_{nγ}, where ‖·‖ is the operator norm associated with the Euclidean norm. By the Jensen inequality and Lemma 4.4, and since the operator norm of a symmetric matrix is equal to its spectral radius, we obtain a product bound. For a given real non-negative function φ : R → R and a positive number κ, using the Markov inequality and the convexity of x → x^{-κ} on (0, +∞) together with the Jensen inequality, we control the probability that the good contraction does not occur. Observe that Assumption (H¹_r), Proposition 4.3 and the convexity of U then imply the stated bound for any γ ∈ (0, γ⋆]. Thanks to this confluence property, we are now able to prove the convergence to equilibrium of the Euler scheme and to give the rate of this convergence. Proof (of Proposition 4.5). Since π_γ is invariant for (X̄_{nγ})_{n∈N}, we deduce from Fubini's theorem and the Jensen inequality a first bound. The Lipschitz property of f implies a second one, where [f]_1 is the Lipschitz constant of f. Proposition 4.6 and the Young inequality then provide the remaining controls. Plugging these controls into (28) yields the conclusion, using the bound (22) of Proposition 4.3(iii) and the assumption U(x) ≲_r Ψ.
Bias induced by the discretization under (H 1 r ):
We now need to provide estimates of W_1(π, π_γ). We provide two results: Lemma 4.5, where we directly derive from Proposition 4.4 a bound in O(√γ) which "only" requires the potential U to be C². However, such a bound has a serious impact on the dependency in ε of the complexity. Thus, we propose a second result when U is C³, where we recover a bound in O(γ).
A first bound in O( √ γ)
As mentioned before, a first estimate can be directly deduced from Proposition 4.4. Actually, since in this result the L²-error between the process and its discretization is controlled uniformly in time, this leads to a similar bound for W_1(π, π_γ) by letting t go to ∞. More precisely, we have the following lemma. Proof. Owing to the stationarity of π, we have for every n ≥ 0 a decomposition whose second term goes to 0 by Proposition 4.5 (more precisely, this property can be deduced from an integration of (29) with respect to π and from the fact that π(U^p) < +∞ for any p, by Proposition 4.3). Now, integrating with respect to π the bound of Proposition 4.4 and using that π(U^p) ≲_r Ψ^p by Proposition 4.3 leads to the result.
A second bound in O(γ)
Even if the above bound is quite explicit in terms of its dependency with respect to L, c, c̄ and d, the fact that it is in O(√γ) dramatically impacts the complexity, at least in terms of ε.
In fact, it is possible to get a 1-Wasserstein error of the order γ by combining the control of the rate of convergence to equilibrium of the continuous process with the finite-time weak error (between the process and its discretization). Such a strategy is used in several papers: in [PPar], this idea is developed in a multiplicative setting with a so-called "domino" approach for the control of the 1-Wasserstein and TV distances between the process and its discretization, uniformly in time. For the control of W_1(π, π_γ) itself, our approach follows [DE21], which provides a series of bounds in many models and sets of assumptions, mainly based on the following principle (see Lemma 1 of [DE21]). Taking advantage of the stationarity of π_γ, for any p ≥ 1 and any t > 0, W_1(π, π_γ) can be split into two contributions ε_1(t) and ε_2(t), one controlled by the rate of convergence to equilibrium of the continuous process and the other by the finite-time weak error. We thus propose to estimate ε_1(t) and ε_2(t) under Assumption (H¹_r) (with or without (H²_r)); this is the purpose of Lemmas 4.6 and 4.7 respectively. These two estimates lead to the following proposition. Proposition 4.7. Assume (H¹_r) and let δ ∈ (0, 1). Assume that U is C³ with ‖∆(∇U)‖²_{2,∞} ≲_r σ^{-4}L³Ψ (with ‖∆(∇U)‖_{2,∞} defined in Lemma 4.7). Then, there exists a constant c_{r,δ} (depending only on r and δ) such that, for all γ ∈ (0, γ⋆], the stated W_1-bound of order γ holds. Remark 4.6. ⊲ Note that this result is clearly in the spirit of [DE21, Theorem 6]. However, there are several differences. First, we need here to adapt our proof to a setting where we only have polynomial convergence to equilibrium (instead of exponential convergence). Second, under our assumptions on U, which are more restrictive than those of [DE21, Theorem 6], we can improve the constants (and in particular avoid some exponential dependence in the Lipschitz constant L).
⊲ Compared with Lemma 4.5 , this result improves the dependence in γ but it is worth noting that the bound is also better with respect to Ψ (and thus to the dimension).
⊲ As mentioned in Remark 4.5, the proof can be adapted to provide exponential rates but unfortunately our method would lead to exponential dependence in the dimension.For this section, the lack of exponential rate does not have a serious impact on the bounds.Nevertheless, if we needed to improve our bounds, an idea would be to apply [CFG22, Theorem 5.6].In this result, the authors provide exponential rates under assumptions which are similar to ours.However the related constant depends on the density of the semi-group and it would be necessary to be able to control it with respect to the parameters of the model.
Proof. Denoting by T^x_t = ∂_x X^x_t the first variation process related to (X^x_t)_{t≥0}, we differentiate the semi-group, where ‖·‖ stands for the operator norm associated with the Euclidean norm. Since T^x is the solution to a linear equation driven by −D²U(X^x_t), its norm can be bounded accordingly.
Following the arguments of Proposition 4.6, we get a bound valid for any positive function φ.
By (H¹_r), one deduces a further bound, where in the second line we used Proposition 4.3 and the convexity of U. Let now ν be a coupling of π and π_γ. Taking the infimum over the set of couplings ν of π and π_γ and using again Proposition 4.3 yields the announced estimate. Then, a constant c_r exists such that, for all γ ∈ (0, γ⋆] and all λ ∈ (0, 1], the corresponding bound holds. Remark 4.8. The assumption on ∆(∇U) is calibrated so that its contribution is controlled by L³Ψ. This simplifies the exposition, and we could keep its specific contribution at the price of technicalities. However, this assumption is not really restrictive: denoting by A(x) the d × d matrix defined by A_{i,j}(x) = D³_{i,j,j}U(x), one easily checks that ‖∆(∇U)‖²_{2,∞} can be bounded through a classical inequality related to the Frobenius norm. Since L ≥ sup_{x∈R^d} λ̄_{D²U(x)}, the assumption holds, for instance, under a uniform control of the third derivatives. To conclude, note that the ratio between Ψ and σ²dL is well controlled: for instance, under (H¹_r) and (H²_r), Ψ is proportional to d. Remark 4.9. The calibration of the parameter λ is of first importance in the proof of Proposition 4.7 in order to avoid exponential dependence in the dimension.
Proof. The proof is an adaptation of Lemma 5.2 and Proposition 5.3 of [EP21], but with the viewpoint that U is only convex. More precisely, we start with a one-step control of the error between the diffusion and its Euler scheme. Then, setting b = −∇U, we derive a first estimate, where in the last line we used the convexity of U, which implies that ⟨∇U(x) − ∇U(y), x − y⟩ ≥ 0.
We then write the error as the sum of two contributions (33) and (34). Let λ > 0. For the right-hand side of (33), we use the elementary inequality |uv| ≤ (λ/2)u² + (1/(2λ))v². For (34), we apply the Itô formula: on the one hand, setting ∆b = (∆b_i)_{i=1}^d, we bound the drift contribution; on the other hand, using the fact that M defined by M_t = ∫_0^t ⟨∇b(y + σB_s), dB_s⟩ is a martingale (we refer to [EP21] for the details), we bound the martingale contribution. Finally, from what precedes, a standard Gronwall argument leads to the one-step estimate. (ii) Iterating the above inequality, we obtain a bound for each n ≥ 1. Integrating the initial condition with respect to π_γ and using the stationarity of π_γ, we get a stationary version of this bound. Now, under (H¹_r), |b|² = |∇U|² ≤ 2LU (with the same idea as the one which leads to (18)), so that, by Proposition 4.3(iii), π_γ(|b|²) ≲_r LΨ. On the other hand, by the Itô formula and the fact that ∆U ≤ dL, and again with the help of Proposition 4.3(iii) and the fact that γ ≤ L^{-1}, we control the remaining term. Since, for a symmetric d × d matrix A, ‖A‖_F ≤ √d λ̄_A, one deduces that ‖∇b‖_{2,∞} = ‖D²U‖_{2,∞} ≤ √d L. It easily follows that σλL‖∇b‖_{2,∞}(√Ψ + σ√(dL)) ≤ L³Ψ (using that L ≥ 1 and Ψ ≥ d). The result follows.
5 Proof of Theorem 2.2
Following the bias-variance decomposition of the MSE, E[(Y − π(f))²] = (E[Y] − π(f))² + Var(Y), we successively study the bias and the variance contributions and end the section with the proof of Theorem 2.2.
Step 1: Bias of the procedure
In the sequel, Y(J, (γ_j)_j, τ, (T_j)_j, f) is usually written Y for the sake of simplicity. We start with the telescopic-type decomposition (36). Let us now study the bias generated by the first and second terms of the right-hand side of (36).
Lemma 5.1. Assume (H¹_r) and γ_0 ∈ (0, γ⋆]. Let x ∈ R^d be such that U(x) ≲_r Ψ. Then, for any r ∈ [0, 1) and δ ∈ (0, 1/2], there exists a constant c_{r,δ} (depending only on r and δ) such that the stated bound holds for all T ≥ 1 and all Lipschitz continuous functions f : R^d → R. Proof. We apply Proposition 4.5 with a suitable choice of φ and κ; in the last line, we use standard comparisons between series and integrals. The result follows.
We are now ready to state a proposition about the control of the bias of the procedure.
Proposition 5.1. Assume (H¹_r) and γ_0 ∈ (0, γ⋆]. Let x ∈ R^d be such that U(x) ≲_r Ψ, let δ ∈ (0, 1/2] and let f be a continuous Lipschitz function. (i) A first bound on the bias holds. (ii) If the assumptions of Proposition 4.7 are fulfilled, the bias bound is improved. Proof. (i) Taking the expectation in (36), we obtain a decomposition of the bias: for the first three terms, we apply Lemma 5.1 and, for the last one, Lemma 4.5. The result follows.
(ii) It is the same proof using Proposition 4.7 to control the last term (instead of Lemma 4.5).
Step 2 : Control of the variance
Now we have to control the variance of our estimator. Owing to the independence between the layers, Var(Y(J, (γ_j)_j, (T_j)_j, f)) is the sum of the variances of the individual levels, where, for some given γ > 0 and s > 0, the within-level quantities are defined as above. Before going further, let us recall that, in order for the multilevel method to be efficient, the correcting layers must have a small variance. In the long-time setting, this requires being able to control the L²-distance between couplings of Euler schemes with steps γ and γ/2. By Proposition 4.4, this is still possible under (H¹_r), and such a property allows us to obtain the following result. Lemma 5.2. Assume (H¹_r) and γ_0 ∈ (0, γ⋆]. Let x ∈ R^d be such that U(x) ≲_r Ψ. Let δ ∈ (0, 1/2] and κ > 2/(1−δ). Let f be a continuous Lipschitz function. Then, for all T > 0, the stated variance bound holds. Remark 5.1. In the uniformly convex case, the variance is controlled by γ log(1/γ)/T whereas, here, we are only able to obtain a bound of order γ^{1−1/κ}/T. This difference is due to the lack of exponential convergence to equilibrium under our assumptions. Note that, if we let κ go to ∞, we move ever closer to the uniformly convex bound. However, the constant depends on κ and explodes when κ → +∞. The interesting point is that the exponent of Ψ remains bounded when κ → +∞, which means that the dependence in the dimension is only slightly impacted by the choice of κ.
Proof. First, at the price of replacing f by f/[f]_1, we can assume in the sequel that [f]_1 ≤ 1. By Proposition 4.4 and the fact that U(x) ≲_uc Ψ, we deduce a first estimate for every δ ∈ (0, 1]. This yields a first bound for Cov(G^γ_s, G^γ_u) and hence, for any t_0 > 0, a first bound on the variance. We now want to take advantage of the convergence to equilibrium to get a second bound when s − u ≥ t_0: since G^γ_u is F_{u_γ}-measurable, we have a conditional representation for any s ≥ u_γ. Setting F(γ, t, x) = E[f(X̄^{γ,x}_t)] − π_γ(f), we deduce from the Markov property a second expression for the covariance. Let us study the two right-hand members successively. For (40), the Cauchy-Schwarz inequality and (38) yield a first control; by Proposition 4.5, or more precisely by (29) combined with Proposition 4.3(iii) applied with a suitable φ, we deduce that, for any κ > 2/(1−δ), any t_0 ≥ 2 and any admissible γ, the same type of bound holds. For (41), using that s_γ ≥ s_γ − u_γ, we remark that we can obtain the same bound. In view of the above bound and of the one obtained in (39), we now optimize the choice of t_0 by balancing the two contributions. Plugging this value of t_0 into (39) leads to the announced bound for any κ > 2/(1−δ). The result follows.
In the next proposition, we are now able to work on the variance of the multilevel procedure.
To conclude, we need to separate two situations. If γθcd ≤ 1, then the inequality e^x ≤ 1 + 4x for x ∈ [0, 1] leads to the desired control of the supremum, where in the last inequality we used that 4γc ≤ 1 together with the relation between the constants c and c̄. This concludes the proof.
(iii) Noting that Ψ̃ ≲_r Ψ, the first bound is obvious. For the second one, it is enough to note that, under (H¹_r), (X_t)_{t≥0} and (X̄_{nγ})_{n≥0} converge in distribution to π and π_γ respectively, so that, with a uniform integrability argument combined with the first bound of (iii), the convergence holds along the functions U^p for any p > 0.
B Proof of Proposition 4.3
The idea is to use the Jensen inequality to derive controls of the polynomial moments from the exponential moments. To this end, we begin with the following lemma.
Lemma B.1. Let V denote a non-negative random variable which satisfies E[e^{θV}] < e^a + ρe^b for positive θ, a, ρ and b. Then, for any p ≥ 1, E[V^p] ≤ θ^{-p}(p − 1 + a + b + log(2ρ))^p. (56)
"Mathematics"
] |
BINA SEJAHTERA EMPLOYEE COOPERATIVE FINANCIAL INFORMATION SYSTEM BASED ON SAK-ETAP FOR MANAGERIAL DECISIONS
The Bina Sejahtera employee cooperative, abbreviated as Kopkar Binatara STMIK AKAKOM, has a savings and loan business unit and a shop division. The system was designed by creating context diagrams, a level 0 DAD, relationships between tables and data dictionaries, input designs, and the main menus and system views. The Bina Sejahtera Employee Cooperative Financial Information System Based on SAK-ETAP for Managerial Decisions involves only 2 (two) external entities, namely members and the management of the cooperative. One of the outputs of this system is the SHU report. Cooperative administrators provide cooperative management data, position data, deposit type data, savings transaction data, loan type data, loan transaction data, installment data, withdrawal data, and SHU data, and obtain member reports, job reports, cooperative management reports, periodic savings reports, savings reports per member, loan reports per member, loan reports per period, withdrawal reports per member, withdrawal reports per period, bad credit reports per period, ceiling reports per member, overall ceiling reports, installment reports per period, installment reports per member, reports on fines per period, interest reports per period, SHU reports per member, overall SHU reports, Profit/Loss Reports, Change in Capital Reports, and Balance Sheet Reports from the Bina Sejahtera Employee Cooperative Financial Information System Based on SAK-ETAP. The reports obtained from the system are complex but useful for managerial decisions.
INTRODUCTION
The law on cooperative firms was set in Indonesia under UU no. 25 of the year 1992. Cooperative firms are legal entities established by individuals or cooperative legal entities, with the separation of the wealth of their members as capital to run a business, which fulfills common aspirations and needs in the economic, social, and cultural fields in accordance with the values and principles of cooperatives.
Cooperative principles are the basic foundation of cooperatives in carrying out their business as business entities and people's economic movements to build effective and long-lasting cooperatives. The latest cooperative principles developed by the International Cooperative Alliance (international non-government cooperative federation) are open and voluntary membership, democratic management, member participation in the economy, freedom and autonomy, and development of education, also training and information.
The Bina Sejahtera employee cooperative, abbreviated as Kopkar Binatara STMIK AKAKOM, was founded on July 6, 1992. Kopkar Binatara has a savings and loan business unit and also a shop division. The recording system for the Kopkar Binatara STMIK AKAKOM business units is still done manually, so it is not integrated across business units. Because of this, the researchers raised the theme of designing a financial information system for the Bina Sejahtera Employee Cooperative of STMIK AKAKOM based on SAK-ETAP for managerial decisions.
The formulation of the problem is: how to build a financial information system for the Bina Sejahtera Employee Cooperative of STMIK AKAKOM based on SAK-ETAP for managerial decisions.
Based on the formulation of the problem mentioned above, the purpose of this research is to produce a design of financial information system for employee cooperatives of Bina Sejahtera STMIK Akakom based on SAK-ETAP for managerial decisions.
LITERATURE REVIEW
Anggraeni Nova et al. (2012) designed a savings and loan information system at KUD Mandiri Bayongbong. The methodology used is the System Development Life Cycle (SDLC). The research results show that the use of the savings and loan information system can provide solutions for speed and accuracy in carrying out savings and loan data processing to obtain optimal results.
Nurul and Latifah (2015) designed an accounting system for cash receipts and disbursements and designed a computer-based accounting information system that could be applied to Small and Medium Enterprises (SMEs) to make it easier for them to prepare financial reports. The resulting design is expected to support the design of accounting information systems in Kampung Kue Surabaya SMEs. The research method uses a qualitative approach, and the data collection techniques used in that study are grouped into two: the main data are obtained from people involved in SME activities, while the supporting data are obtained from documents in the form of notes, pictures, and other materials that can support the research. External entities involved in the system are employees, owners, and admins. The system generated based on SAK-ETAP includes journal reports, general ledger reports, trial balance reports, worksheet reports, and financial reports.
Theoretical Basis Information System
An information system is a set of interconnected components that collect or obtain, process, store, and distribute information to support decision-making and control within an organization and to assist managers in making decisions. The physical components of information systems include hardware, software, databases, procedures, and brainware.
Sak-Etap
Financial Accounting Standards for Entities Without Public Accountability (Standar Akuntansi Keuangan untuk Entitas Tanpa Akuntabilitas Publik/SAK ETAP) are intended to be used by Entities Without Public Accountability (ETAP), which are entities that do not have significant public accountability and that issue general-purpose financial statements for external users. Examples of external users are owners who are not directly involved in managing the business, creditors, and credit rating agencies.
SAK-ETAP aims to create flexibility in its application and is expected to give ETAP easier access to funding from banks. SAK-ETAP is a standalone SAK that does not refer to the general SAK; it mostly uses the historical cost concept for the transactions conducted by an ETAP, with a simpler form of accounting treatment that has remained relatively unchanged over the years.
METHODS
Data Collection Methods:
a. Descriptive Method. In solving the problem, the facts were described through a relationship study and the output was analyzed based on the implementation that has been carried out.
b. Experiment Method. Savings-and-loan transaction data were created for the savings-and-loan division of the Bina Sejahtera Employee Cooperative, STMIK AKAKOM. The research was carried out through several stages: 1) specify the object; 2) write down all the attributes that will be used, construct the unnormalized form and arrange it into the First Normal and Second Normal forms, create the relationships between tables, the context diagram, the tiered diagram, the data flow diagrams, the system flowchart, and the program flowcharts; 3) design the system output.
RESULT AND DISCUSSION
Based on the results of the analysis using the descriptive method, the financial information system for the Bina Sejahtera Employee Cooperative STMIK AKAKOM based on SAK-ETAP for managerial decisions processes financial transactions involving two external entities, namely members and the management of the cooperative. The role of members in the system is to be the source of member data and to receive SHU information from the system, while the role of cooperative administrators in the system is to input cooperative management data, position data, deposit type data, deposit transaction data, loan type data, loan transaction data, installment data, withdrawal data, and SHU data. Reports obtained by the management entity from the system are member reports, position reports, cooperative management reports, savings reports per period, savings reports per member, loan reports per member, loan reports per period, withdrawal reports per member, withdrawal reports per period, bad credit reports per period, ceiling reports per member, overall ceiling reports, installment reports per period, installment reports per member, fines reports per period, interest reports per period, SHU reports per member, overall SHU reports, Profit/Loss Reports, Capital Changes Reports, and Balance Sheet Reports.
Based on the experimental method, the first result is the creation of a context diagram. The context diagram contains an overview of the system to be created, describing the interaction of the information system with the environment in which the system is placed. Based on the analysis that has been carried out for the STMIK AKAKOM Employee Cooperative financial information system based on SAK-ETAP, it can be determined that there are 2 (two) external entities, namely members and administrators, as shown in Figure 1. In this diagram, it can be seen which entities provide data to the system, what data they give to the system, to whom the system must provide information or reports, and what contents or types of reports the system must produce. Figure 1 shows that members provide member data to the system and will get SHU reports. Cooperative administrators provide cooperative management data, position data, deposit type data, deposit transaction data, loan type data, loan transaction data, installment data, withdrawal data, and SHU data, and obtain member reports, position reports, cooperative management reports, savings reports of each period, savings reports of each member, loan reports of each member, loan reports of each period, withdrawal reports of each member, withdrawal reports of each period, bad credit reports of each period, ceiling reports of each member, overall ceiling reports, installment reports of each period, installment reports of each member, fines reports of each period, interest reports of each period, SHU reports of each member, overall SHU reports, Profit/Loss Reports, Capital Changes Reports, and Balance Sheet Reports from the Employee Cooperative Financial Information System Bina Sejahtera STMIK Akakom Based on SAK-ETAP. The reports obtained from the system are very complex but useful for managerial decisions. The next result of the experimental method is the creation of DAD Level 1, which can be seen in Figure 2.
Figure 2. DAD L1 Financial Information System Employee Cooperative Bina Sejahtera STMIK Akakom Based on SAK-ETAP for Managerial Decisions
The third result is the creation of the relational tables, that is, the relationships between two or more tables in which one table has a close dependence on another and cannot be separated from it. The database schema of this research can be seen in Figure 3.
Black Box Testing
The system was tested by two parties, namely the management and one member of the cooperative. Testing was carried out on the details of the application by having the respondents provide several inputs to the program. The inputs were then processed according to the functional requirements to check whether the application produces the desired output and conforms to the basic functions of the program. The results of the testing are shown in Table 1.
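As an illustration of how one such input/expected-output check could be written down, the sketch below expresses a single functional test case in Python. The function name calculate_shu, its proportional-distribution rule, and the figures used are hypothetical and are not taken from the cooperative's actual application; the point is only that a black-box test compares inputs against agreed outputs without reference to the internal implementation.

```python
# Illustrative black-box test case (hypothetical function and figures).
# Only input -> output behaviour is checked, with no knowledge of internals.

def calculate_shu(member_savings: float, total_savings: float, shu_pool: float) -> float:
    """Hypothetical stand-in: distribute the SHU pool in proportion to savings."""
    return shu_pool * (member_savings / total_savings)

def test_member_shu_share():
    # Input provided by the respondent, expected output agreed in advance.
    result = calculate_shu(member_savings=25_000_000,
                           total_savings=100_000_000,
                           shu_pool=40_000_000)
    assert result == 10_000_000  # 25% of the pool

if __name__ == "__main__":
    test_member_shu_share()
    print("black-box test passed")
```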
CONCLUSIONS
Based on the research that has been done, it can be concluded that the SAK-ETAP-based Employee Cooperative Financial Information System of Bina Sejahtera STMIK Akakom for managerial decisions involves only 2 (two) external entities, namely the members and the management of the cooperative. Members provide member data to the system and receive SHU reports. Cooperative administrators provide cooperative management data, position data, deposit type data, deposit transaction data, loan type data, loan transaction data, installment data, withdrawal data, and SHU data, and obtain member reports, position reports, cooperative management reports, savings reports per period, savings reports per member, loan reports per member, loan reports per period, withdrawal reports per member, withdrawal reports per period, bad credit reports per period, ceiling reports per member, the overall ceiling report, installment reports per period, installment reports per member, fines reports per period, interest reports per period, SHU reports per member, the overall SHU report, the Profit/Loss Report, the Capital Changes Report, and the Balance Sheet Report from the system. The reports obtained from the system are complex but useful for managerial decisions.
RECOMMENDATIONS
This research can be developed by adding several transactions related to cooperative income from the store division of the STMIK Akakom Bina Sejahtera Employee Cooperative.
| 2,646 | 2022-12-14T00:00:00.000 | ["Business", "Computer Science"] |
Health Opportunity Costs: Assessing the Implications of Uncertainty Using Elicitation Methods with Experts
Well-established methods of economic evaluation are used in many countries to inform decisions about the funding of new medical interventions. To guide such decisions, it is important to consider what health gains would be expected from the same level of investment elsewhere in the health care system. Recent research in the United Kingdom has evaluated the evidence available and the methods required to estimate the health effects of changes in health care expenditure within the National Health Service. Because of the absence of sufficiently broad-ranging data, assumptions were required in the previously mentioned work to estimate health effects in terms of a broader measure of health (quality-adjusted life-years), which is more relevant for policy. These assumptions constitute important sources of uncertainty. This work presents an application of the structured elicitation of the judgments of key individuals about these uncertain quantities. This article describes the design and conduct of the exercise, including the quantities elicited, the individual (rather than consensus) approach used, how uncertainty in knowledge was elicited (mode and bounds of an 80% credible interval), and methods to generate group estimates. It also reports on a successful application involving 28 clinical experts and 25 individuals with policy responsibilities. Although, as expected, most experts found replying to the questions challenging, they were able to express their beliefs quantitatively. Consistent across the uncertainties elicited, experts’ judgments suggest that the quality-adjusted life-year (QALY) impacts of changes in expenditure from earlier work using assumptions are likely to have been underestimated and the “central” estimate of health opportunity cost from that work (£12,936 per QALY) to have been overestimated.
When a new intervention is considered for funding, there needs to be some consideration of how any health gains offered by the new intervention are to be assessed against any additional costs it imposes on health systems. A key piece of information to guide this assessment is an estimate of the health gains that could have been achieved elsewhere with the same levels of investment (the health opportunity costs); that is, to consider the health effects that could be generated by making the additional resources required for the new intervention available for other services and interventions that could be funded instead, or the health effects of those activities that would need to be given up if these resources are committed to the new intervention.
A number of studies in different countries have based an assessment of opportunity costs on the empirical relationship between changes in health care expenditure and health outcome. [5][6][7][8] Recent research in the United Kingdom used national data on expenditure and outcomes in different disease areas reported at a local level in the National Health Service (NHS). [9][10][11] By exploiting the variation in expenditure and mortality outcomes, the relationship between changes in expenditure and mortality was estimated (while accounting for endogeneity). By using the effect of expenditure on the mortality and life-year burden of disease as a surrogate for the effects on a more complete measure of burden (one that also includes the quality-of-life burden of disease), a cost per quality-adjusted life-year (QALY) that reflects the likely impact of changes in expenditure on both mortality and morbidity was also reported.
These estimates of the marginal productivity of health care expenditure indicate the health that is expected to be forgone as a consequence of additional costs displacing other health care activities. They reflect what is likely to happen in the health care system, given current levels of information, local decision making, and the influence of other aspects of social value, which are not captured in measures of health such as QALYs. They represent the relevant expected health opportunity costs when the decision context is restricted to approving or rejecting a new intervention. i In this context, it also indicates the maximum that the health care system can afford to pay for the additional benefits offered by a new intervention (e.g., the temporary monopoly price for pharmaceuticals protected by patent) without reducing the total number of QALYs generated.
The assumptions that were required to link the estimates of effects of changes in expenditure on the mortality burden of disease to the likely effect on QALYs constitute important sources of uncertainty. To inform these assumptions appropriately, the judgments of key individuals, such as those with substantive clinical or policy expertise, are important. Elicitation methods offer a systematic process for formalizing and quantifying, typically in probabilistic terms, individuals' judgments about uncertain quantities. 12,13 Elicitation is an important activity in many fields, including in support of decision making, where there may be significant uncertainties and their quantification can feed directly into decisions. Furthermore, elicitation is a vital element of a Bayesian approach to statistics, the principles of which are core to decision analyses. Here, the use of prior information to augment existing data has an established theoretical basis, particularly where the empirical evidence is limited. 12 This research presents an application of structured elicitation to inform estimates of expected health opportunity costs in the UK NHS, a key quantity to inform policy decisions. This constitutes a novel and important context for the use of structured elicitation, aiming to reflect uncertainty in the judgments required for policy appropriately and explicitly. We demonstrate the applicability of the elicitation exercise in practice. Its design draws from wider experience of elicitation in health technology assessment 14 and literature from other areas of science (for example, refs. 15 and 16).
This article is structured as follows. The next section summarizes earlier work by Claxton et al. 9 to estimate NHS marginal productivity and is the motivation for the current work. The following sections focus on the elicitation exercise, presenting its methods (design, conduct, and analyses) and the results of its application. The article finishes with a discussion including key policy implications.
Summary of the Work by Claxton et al. and Overview of the Key Uncertainties Identified
Claxton et al. 9 evaluated the relationship between expenditure and mortality using a cross-sectional design, seeking to identify differences in mortality across health care commissioning units (at the time of this research, there were 152 primary care trusts) that could be attributed to differences in NHS spend. Empirically, the research first quantified expenditure elasticities, that is, how changes in NHS expenditure in a given year were allocated between Programme Budgeting Categories (PBCs), which reflect broad disease areas characterized by International Classification of Disease (ICD) codes.

Footnote i: Decision makers may also compare the proposed investment to other specific disinvestments or alternative investments. However, they still need to consider how these compare with what the health care system would be expected to deliver (i.e., an estimate of marginal productivity is still relevant). If the decision maker had full information about all interventions that are or could be provided for all indications and subgroups of the population and was also tasked with the wholesale redesign of the health care system, well-established mathematical programming solutions would be possible and appropriate. The marginal productivity would be the outcome of this optimization (i.e., the shadow price of the expenditure constraint from solving the dual problem).
Second, the research estimated outcome elasticities, that is, how changes in expenditure by a PBC (in a particular year) altered PBC-specific mortality rates (using national data on mortality reported for ICDs or groups of ICDs, mapped onto PBCs). Analyses adjusted for important covariates (including need) and used instrumental variables to estimate causal effects overcoming the problem of endogeneity.
Results showed that the mortality effects of changes in spend could be identified for only 11 of the 23 PBCs (such as cancer and gastrointestinal disorders). For the remaining disease areas (such as mental health disorders), health care focuses primarily on improving health-related quality of life (HRQoL). Across the 11 PBCs for which mortality effects were detected, empirically based estimates of how changes in total NHS expenditure affect mortality were generated, returning the following point estimates (using 2008 expenditure and 2008-2010 mortality): £105,872 for the cost per death averted, £23,360 for the cost per life-year, and £28,045 for the cost per life-year where life-years were adjusted for HRQoL.
However, an estimate of health opportunity costs relevant for policy also needs to consider the following (Table 1):
A. whether changes in expenditure have effects beyond the year of expenditure (this can be termed duration of effects);
B. how the effects of changes in expenditure on mortality relate to effects on a broader measure of health that incorporates both duration and HRQoL impacts (QALYs; this can be termed surrogacy); and
C. how changes in expenditure affect health in disease areas for which the previous work could not measure a mortality effect (this can be termed extrapolation).
In the original research, 9 very limited data were available with which to assess each of these questions, and hence assumptions were made (listed in Table 1). These were used to obtain a central estimate of health opportunity costs (expressed as a cost per QALY) across all disease areas of £12,936 per QALY. An analysis of the uncertainty imposed by the empirical estimates (the expenditure elasticities estimated for each of the 23 PBCs and the outcome elasticities estimated for 11 of these) indicated that the probability of this central estimate being less than £20,000 per QALY was 0.89. 9
Methods
This research aimed at formally eliciting the beliefs of key individuals on the 3 judgments outlined above (and in Table 1), which are required for a policy-relevant estimate of health opportunity costs. Another uncertain quantity that was elicited concerned the expected life-years gained from averting a death. This is not required to evaluate health opportunity costs in terms of QALYs (although it is important to distinguish morbidity from mortality impacts on the QALY estimate), and hence, for conciseness, methods and results of the elicitation for this quantity are not described in this article but are available elsewhere. 16 Uncertainty in knowledge was explicitly elicited throughout. 12,17,18 The design of the exercise sought to minimize the use of cognitive heuristics that may lead to bias. [19][20][21] Two groups of individuals were considered: the first comprised clinical experts, acting as substantive experts in key disease areas, and the second included policy experts, defined as individuals drawn from organizations that develop or implement policy or that have a major interest in policy in this area. These individuals are not expected to have specific substantive expertise in key clinical areas. Policy experts were asked for their judgments on the quantities of interest once they had considered the information that had been elicited from clinical experts. As such, the elicited judgments from policy experts reconcile their own judgments together with the views of the substantive (clinical) experts.
This exercise did not seek to establish consensus, as such methods are known to have a number of limitations (e.g., because aggregation is done implicitly, dominant individuals may unbalance group dynamics, and consensus methods are known to return overly precise judgments). 22 Hence, experts were asked to give their opinions individually (and were discouraged from interacting), and a group estimate was generated analytically (detailed below). 12 All aspects of the exercise (design, conduct, and analyses) were protocolled in advance. 23

What Quantities Were Elicited?
The elicitation questionnaire focused on the effects on population health of changes in NHS expenditure in a particular year (all else unchanged). Experts were prompted to think of changes in expenditure that were significant but still represented a small proportion of NHS expenditure.
The first uncertain quantity concerned the duration of effects. A 2-part question was used (section A, Table 2) that first asked about the duration of mortality effects beyond the first year. Second, it asked about the magnitude of mortality effects in the second, third, and fourth years after the change in expenditure. Participants were asked to express the latter as a proportion of the effect in the first year, because the effect in the first year is an estimable quantity (and was the focus of the empirical work in Claxton et al. 9). Using a relative quantity allows conditional independence to be reasonably assumed and avoids the burdensome task of eliciting dependency. Conditional independence was also assumed in the elicitation of the other uncertain quantities, and the accompanying diagram in Table 2 illustrates the conditional relationships specified. Note that the wording intentionally asked for the effects that can be attributed to changes in expenditure in a particular year and hence was able to identify future (lagged) effects causal to that year's change in spend.

Table 2 (wording of the elicited quantities):

Section A. Question: For how many more years (beyond the year of increased expenditure) would you expect disease-specific mortality rates to be reduced? Question: From an increase in expenditure in a particular year, how do reductions in mortality rates in subsequent years compare (in proportionate terms) to the reduction observed in the first year? This was elicited separately for the second, third, and fourth years and refers to quantities A2yr, A3yr, and A4yr, respectively, in the diagram.

Section B. Question: If expenditure is increased in a particular year, how many times bigger (or smaller) are proportionate reductions in quality-adjusted life-year burden when compared with proportionate reductions in mortality burden? This was elicited for the year of increased expenditure (first year) and also for any later effects of expenditure in the second, third, and fourth years subsequent to increased expenditure; it refers to quantities B1yr, B2yr, B3yr, and B4yr, respectively, in the diagram.

Section C. Question: How much bigger (or smaller) are reductions in health burden (quality-adjusted life-years) when expenditure is increased, for example, in "mental health disorders" instead of disease areas with a measured effect of increased expenditure on mortality (average effect across all disease areas in this group)? This was elicited for the year of expenditure (first year) and also for any later effects of expenditure in subsequent years (second, third, and fourth years); it refers to quantities C1yr, C2yr, C3yr, and C4yr, respectively, in the diagram.
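As a simple reading of how the section A quantities combine (an illustration only; the expression below is not taken from the article), if E_1 denotes the estimable first-year effect on disease-specific mortality, the elicited proportions imply an effect attributable to that year's change in expenditure, accumulated over the first 4 years, of

E_total(first 4 years) = E_1 × (1 + A_2yr + A_3yr + A_4yr),

with any effects beyond the fourth year governed by the separately elicited duration of effects.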
The second uncertain quantity subject to elicitation related to the surrogacy relationship and aimed to establish the effects of increased expenditure on a year's QALY burden (section B, Table 2). QALY burden was defined as comprising the life-years lost due to premature mortality (due to disease) in the year of interest, adjusted for quality, plus any impacts on the level of HRQoL from disease in individuals alive in that year. This was elicited separately for the year of expenditure (first year) and subsequent years (second, third, and fourth years). To allow for conditional independence, it was formulated as relative to the effects on mortality burden in the same year.
The third uncertain quantity related to extrapolation (section C, Table 2). Experts were asked about reductions in QALY burden in disease areas that did not have measurable mortality effects (e.g., mental health). They were asked to express these reductions proportionally in relation to the average QALY burden reduction from an increase in NHS expenditure across all disease areas with measurable mortality effects. Again, this was elicited separately for the year of expenditure (first year) and subsequent years (second, third, and fourth years).
Although elicited judgments are likely to differ between disease areas, it was considered too burdensome for the experts to present their judgments for each of the 23 PBCs. Hence, 7 disease areas (circulatory, respiratory, gastrointestinal, neurological, mental health, endocrinology, musculoskeletal) were selected. These were chosen because changes in expenditure and changes in mortality in those areas are the most important drivers of the central estimate of health opportunity cost and most sensitive to the surrogacy and extrapolation assumptions. Estimates were elicited from experts separately for each of these 7 main PBCs and a single estimate for the remaining PBCs combined. These are heterogeneous and broad disease areas, so in responding to questions, experts were asked to consider the ICDs within each PBC for which an increase in expenditure is more likely to fall.
Which Experts?
We aimed to recruit purposively 20 clinicians (at least 2 from each clinical area ii) and 20 individuals affiliated with selected policy-relevant organisations. iii, 24 Responses from experts were anonymous, but the organizations they belong to were recorded (policy experts), as were the clinical areas of expertise (clinical and relevant policy experts), to facilitate analysis of between-expert heterogeneity. 14

How Were the Different Quantities Elicited?
It was important for the elicitation to reflect experts' uncertainty, so experts were asked for multiple summaries of each quantity. 12 One was the mode (the value the expert believes to be most likely, their best guess), as it is generally thought that experts can report this more easily than the mean or median. 12,25 The other summary estimates were the bounds of a credible interval (CrI; the Bayesian equivalent of confidence intervals). iv Evidence shows that while eliciting a CrI is intuitive, there is a clear tendency for these to be too narrow (a bias called "overconfidence"); that is, people believe their estimates are more accurate than is justified. 26 This limitation is acknowledged, but experts' time constraints were a major consideration. 27 Hence, strategies were adopted to minimize the potential for bias: 80% CrIs were elicited, as these typically show less overconfidence than 95% CrIs, 12 and single limit estimates were also elicited (the lower bound first, then the upper bound separately), as these are also thought to produce wider estimates than asking directly for the range. 28,29 Hence, the wording used in this work was as follows: (Mode) My best guess for the value of this quantity is . . . .
(Lower bound of 80% CrI): I am very certain (90% certain) that the true value for this quantity is higher than . . . .
(Upper bound of 80% CrI): I am very certain (90% certain) that the true value for this quantity is lower than . . . .
Conduct of the Exercise
A paper questionnaire was developed (Supplementary Appendix 1) and extensively piloted. To facilitate appropriate training, the exercise was, where possible, conducted in groups (workshops). A training session for experts was developed that described the objectives of the elicitation exercise; clarified concepts such as those of uncertainty, variability, and heterogeneity; familiarized experts with the quantities the research sought to elicit; described and explained the impact of bias and heuristics; and trained experts on the methods of elicitation used (Supplementary Appendix 2). [30][31][32] This was delivered by 2 of the authors (K.C. and M.O.S.).
Throughout the exercise, individuals were encouraged to revisit and revise their answers to previous questions, 33 but we did not record when this occurred. At the end of each section of the exercise, participants were asked whether they were confident the answers they had given reflected their views and uncertainties. Response options were ''yes,'' ''not sure,'' and ''no.'' Individuals were also provided with opportunities for free-text feedback.
The judgments from clinical experts were elicited prior to those of policy experts. The judgments from clinical experts were summarized (histograms of the modes and upper and lower CrI bounds) and presented to policy experts to help them formulate their judgments using the same elicitation tool (Supplementary Appendix 3).
Analyses and Pooling across Experts
Analyses were conducted in Excel 2010. 34 In describing the elicited beliefs, the first step was to fit a distribution to each quantity elicited from each individual expert. 30,35 The quantities of interest here ranged between 0 and +infinity and were fitted with the log-normal distribution as prespecified. 23 Given that 3 summaries were elicited from each expert, more than 1 type of 2-parameter distribution can reasonably reflect their judgments. It was protocolled 23 that, to reflect this additional uncertainty, 2 alternative (2-parameter) distributions would be fitted: one using the lower bound of the CrI and the mode, and another using the upper bound and the mode. v A unique distribution for each quantity elicited by each expert was then derived by linear pooling of the 2 distributions (i.e., pooling means and variances). vi Further details on this stage of analysis are presented in Supplementary Appendix 4.
After describing each expert's judgment for each quantity using distributions, these were pooled together to derive a single distribution for the group. Linear pooling was used 12 with equal weights across experts 4 to preserve the individual judgments in the collective (pooled) judgment. 14,26 Linear pooling means that, if the experts' distributions for a single quantity are identical, the pooled distribution is also identical to the individuals' distributions. Also, if there is the support from at least 1 expert that the quantity of interest takes a particular value, the pooled distribution will also show some support for that value. 12,36 The primary analysis reflects the pooled results from clinical experts, and the secondary analysis reflects the pooled results from policy experts.
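To make the fitting and pooling steps concrete, the sketch below fits a log-normal to an elicited mode and the upper bound of an 80% CrI and then pools across experts with equal weights. It is a simplified illustration, not the protocolled analysis (which fitted two 2-parameter distributions per expert, one from each CrI bound, before pooling), and the elicited summaries used here are invented for the example.

```python
# Illustrative sketch only: fit a log-normal to an elicited mode and the upper
# bound of an 80% credible interval, then linearly pool the fitted
# distributions across experts with equal weights.
import numpy as np
from scipy.stats import norm, lognorm

Z90 = norm.ppf(0.90)  # ~1.2816; an 80% CrI leaves 10% in each tail

def fit_lognormal(mode, upper):
    """(mu, sigma) of a log-normal with the given mode and 90th percentile.

    Uses mode = exp(mu - sigma^2) and q90 = exp(mu + Z90*sigma); eliminating mu
    gives a quadratic in sigma with a single positive root when upper > mode.
    """
    gap = np.log(upper) - np.log(mode)                   # = sigma^2 + Z90*sigma
    sigma = (-Z90 + np.sqrt(Z90 ** 2 + 4.0 * gap)) / 2.0
    return np.log(mode) + sigma ** 2, sigma

# Hypothetical elicited summaries (mode, upper 80% bound) for one quantity.
experts = [(2.5, 6.0), (3.0, 8.0), (1.5, 4.0)]
fits = [fit_lognormal(m, u) for m, u in experts]

# Equal-weight linear (mixture) pooling: the pooled density/CDF is the average
# of the individual densities/CDFs, so the pooled mean is the mean of the means.
pooled_mean = np.mean([np.exp(mu + 0.5 * s ** 2) for mu, s in fits])

def pooled_cdf(x):
    """Equal-weight mixture CDF across the fitted expert distributions."""
    return np.mean([lognorm.cdf(x, s, scale=np.exp(mu)) for mu, s in fits])

print(f"pooled mean: {pooled_mean:.2f}")
print(f"pooled P(quantity <= 1): {pooled_cdf(1.0):.3f}")
```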
Sensitivity Analyses
Two sensitivity analyses were protocolled. 23 One explored heterogeneity (i.e., between-expert uncertainty) by 1) considering only responses of clinical experts in the clinical specialty relating to the disease area in the question and 2) by grouping policy experts based on the type of organization to which they belonged (see footnote iii). The second protocolled sensitivity analysis disregarded those responses when individuals indicated they were not confident that the response reflected their views and uncertainties. A third and final sensitivity analysis was not protocolled and provided a qualitative assessment of the implications of using a Gamma distribution, instead of the log-normal, in the fitting.
Primary Analyses Using Substantive (Clinical) Experts' Responses
Twenty-eight clinical experts participated in 3 (group) workshops and 4 individual interviews. vii A summary of the pooled distributions across all clinical experts is presented in Table 3.
Results of question A1 (duration of effects) indicate that changes in NHS expenditure in a particular year are expected to affect mortality in subsequent years. The mean duration of effects is highest for circulatory and gastrointestinal (approximately 11 additional years) and lowest for neurological disease (approximately 6 additional years). The pooled distribution shows considerable uncertainty, as demonstrated by its wide 80% CrI. As an illustration, the top panel of Figure 1 shows the individual experts' distributions for the duration of effects in circulatory disease (in gray), overlaid with the pooled distribution across all experts (in black). Note that the uncertainty in the pooled distribution reflects not just each individual's uncertainty but also between-expert heterogeneity.
Experts' judgments suggest that, across all disease areas, mortality effects beyond year 1 are expected to be higher than effects in the first year (section A). In circulatory disease, for example, it is expected that the effect in the second year is 1.5 times that in the first year. This can be interpreted to reflect the preventative nature of much of the expenditure in this disease area, in which the health benefits of current expenditure are higher in the future. The magnitude of expected mortality effects decreases over time for all disease areas. For example, in circulatory disease, the relative effect in the third year is expected to be 1.2 and in the fourth year 0.9. The pooled distributions are wide, and the 80% CrIs include the value of 1.
Experts' judgments indicate that surrogacy relationships are expected to be greater than 1 in the year of expenditure for all disease areas (between 2.9 and 3.7, see Table 3). This implies that changes in spend are expected to reduce QALY burden proportionately more than mortality burden, although this is associated with considerable uncertainty. The individual experts' distributions on the surrogacy relationship in year 1 for circulatory disease have been graphically presented in the bottom panel of Figure 1. Only 5 of the 27 distributions (1 expert did not complete this question) have mean estimates below or equal to 1 (results not presented here). The pooled distribution across the 27 experts shows a mean of 2.9 and an 80% CrI suggesting the true value lies between 0.3 and 6.6 ( Table 3). Over time, expected values for surrogacy do not fall below 1.
Extrapolation relationships follow the same pattern as surrogacy, with expected values consistently above 1 (between 2.6 and 4.7). The 80% CrIs appear to narrow over time.
Secondary Analysis Using Policy Experts' Responses
Twenty-five policy experts participated in 2 workshops (affiliations in endnote viii). Table 4 presents a summary of pooled distributions.
Results were fairly similar to those obtained with the pool of clinical experts, but between-expert variation was lower for this group of experts (exemplified in Figure 2 for duration of effects, top panel, and surrogacy, bottom panel, in circulatory disease). With respect to mortality effects, policy experts generally indicated higher duration (in terms of expected values) than clinical experts and a similar magnitude over time.
In terms of surrogacy, expected values are also comparable with those of clinical experts. Expected values do not fall below 1 (although the CrIs include 1); for example, for respiratory disease, surrogacy had an expected value of 2.9. Expected extrapolation relationships also follow patterns similar to those of clinical experts but decrease slightly faster over time.
Face Validity and Qualitative Feedback
The information provided by individual experts is reproduced in item 4 of the Supplemental Material. Only a very small proportion of clinical experts (1/28 in section A, 3/28 in section B, and 0/24 in section C) indicated their responses did not reflect their views and uncertainties, with the remaining answering "yes" or "unsure" (respectively, 16 and 11 out of 28 in section A, 7 and 19 out of 28 in section B, and 14 and 10 out of 24 in section C). This was qualitatively similar for policy experts. Qualitative feedback was insightful regarding the reasons for these responses. Participants, both clinical and policy, consistently mentioned that the heterogeneity across the ICDs that composed the different disease areas made responding to questions particularly challenging. Some clinical experts also found it difficult to answer questions on disease areas that did not relate to their specialism. Some policy experts also indicated that they relied heavily on the clinical experts' answers. The qualitative feedback did not suggest that the answers lacked face validity but instead explains the wide distributions returned by participants.
Sensitivity Analysis
Results of sensitivity analyses are shown in full in Supplementary Appendix 5. Here, we present only a qualitative summary of results.
Results did not change meaningfully when removing individuals who indicated their responses did not reflect their views and uncertainties (item 2.1A in Supplementary Appendix 5). When also removing individuals who responded ''not sure'' to this question (i.e., considering only those who responded ''yes''), differences were again not meaningful, except for surrogacy, for which means were slightly higher across all disease areas (item 2.1B in Supplementary Appendix 5). In terms of heterogeneity in the primary analysis (item 2.2 in Supplementary Appendix 5), the pooled distribution of clinicians in their clinical area of expertise shows some differences in relation to the pooled results across all clinicians (see, for example, the mean duration of mortality effects for circulatory, gastrointestinal, and neurological diseases). The magnitude of such effects over time is (in general) higher for circulatory and neurological diseases. Expected surrogacy relationships are similar for the year of expenditure, except for neurological disease, for which experts indicate surrogacy to be higher. Expected extrapolation relationships are lower for mental health, in the first year and over subsequent years, but higher for the first year in musculoskeletal disease.
In terms of heterogeneity in secondary analyses (item 2.3 in Supplementary Appendix 5), of note is the pooled distribution for group G2 (the biggest group, comprising 15 of the 25 experts and including "governmental bodies" such as the Department of Health and Social Care or Public Health England), which presents generally lower expected values and more precise distributions than the overall group. This implies that the heterogeneity introduced by the remaining groups contributes to a widening of the CrI.
The post hoc sensitivity analyses evaluating an alternative distribution to represent experts' beliefs (item 2.4 in Supplementary Appendix 5) shows overall conclusions to be robust but that the magnitude of effects is sensitive to the choice: the log-normal distribution (prespecified in our analyses plan) has a heavier tail than the Gamma (implemented in sensitivity analyses) and hence generally returns higher expected values when fitted to the same mode and CrI bounds.
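The direction of this sensitivity can be illustrated with a small numerical check: fitting both distributions to the same (invented) mode and upper 80% CrI bound and comparing the implied means. The fitting choices below are assumptions made for the illustration, not the fitting procedure used in the analyses.

```python
# Illustration: log-normal vs Gamma fitted to the same mode and 90th percentile.
import numpy as np
from scipy.stats import norm, gamma
from scipy.optimize import brentq

Z90 = norm.ppf(0.90)
mode, upper = 2.5, 6.0  # invented elicited summaries

# Log-normal: mode = exp(mu - s^2), q90 = exp(mu + Z90*s); solve for s, then mu.
gap = np.log(upper) - np.log(mode)
s = (-Z90 + np.sqrt(Z90 ** 2 + 4.0 * gap)) / 2.0
lognorm_mean = np.exp(np.log(mode) + s ** 2 + 0.5 * s ** 2)

# Gamma (shape k > 1, scale theta): mode = (k - 1) * theta; choose k so that the
# 90th percentile matches the same upper bound.
def q90_error(k):
    return gamma.ppf(0.90, k, scale=mode / (k - 1.0)) - upper

k = brentq(q90_error, 1.0001, 1e4)
gamma_mean = k * mode / (k - 1.0)

print(f"log-normal mean: {lognorm_mean:.2f}, Gamma mean: {gamma_mean:.2f}")
# For these inputs the heavier-tailed log-normal gives the larger mean.
```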
Discussion
This research developed an exemplar elicitation exercise aimed at quantitatively gathering the (uncertain) beliefs of individuals on a set of quantities for which there is currently insufficient evidence but that are central to an estimate of health opportunity costs for the UK's NHS. Resourcing decisions in the NHS require consideration of health opportunity costs, and hence this work has direct relevance for current policy in the United Kingdom. Despite being motivated by earlier research, 9 this work will also have longer-term relevance as the judgments elicited can be used to support other empirical studies for the United Kingdom, including those using different econometric methodologies, as these can be expected to suffer from the same evidence gaps.
Elicited judgments should not replace high-quality evidence, and it is paramount that primary evidence is collected on each of the uncertain quantities covered here. Our work, however, was designed in such a way that, as new evidence reports on individual quantities, the judgments elicited on the other quantities can be retained for use in policy. This was achieved by defining quantities as conditionally independent. The work presented here is also important internationally, as it can be adapted for evaluations pertaining to other countries or settings, beyond the UK's NHS.
The group estimates obtained provide a summary of the beliefs of multiple experts on quantities for which there currently is no evidence. There are, therefore, important implications for a meaningful estimate of health opportunity costs for use in policy. First, regarding the duration of mortality effects, the original analyses 9 assumed impacts only in the year of expenditure.
The results from the current work, however, indicate that mortality effects are also expected to occur in subsequent years. This suggests that the original work underestimated the QALY impacts of changes in expenditure. Second, the original work assumed perfect surrogacy in the effects of changes in expenditure between mortality burden and total QALY burden. The results from this research indicate, however, that surrogacy is expected to be greater than 1 (this holds across disease areas for the first, second, and third years), indicating that the effects of changes in expenditure on total QALY burden are, in proportionate terms, expected to be higher than (rather than equal to) those on mortality burden. Again, this suggests that the original work underestimated the QALY impacts of changes in expenditure. Third, in terms of extrapolation, the original work assumed changes in spend to have equal effects on diseases with, and without, measured mortality effects. This work demonstrates that the extrapolation relationship is generally expected to be greater than 1. That is, the health effects in disease areas without measured mortality effects are expected to be higher than was assumed in the original work. Consistently across the 3 uncertainties, experts' judgments suggest the QALY impacts of changes in expenditure are likely to be underestimated when using the assumptions that underpin the "central" estimate of £12,936 per QALY reported in Claxton et al. 9

The exercise was carefully developed to align with the scope of the policy question, was piloted extensively, and was accompanied by an extensive training package to support experts and guide them through the tasks. As a consequence, it ran successfully. Experts were able to express their beliefs quantitatively, with only a few indicating their answers did not reflect their views (i.e., were not face valid). However, in approximately half of the answers, individuals indicated they were unsure that their answers reflected their views or uncertainties. Feedback left in open text did not, however, indicate these answers were not face valid but instead suggested that the breadth of the questions meant that the distributions retrieved were wide. Convening individuals in groups aided the delivery of the standardized training package and maximized expert engagement. However, it also made recruitment difficult: 132 clinical and 84 policy experts were contacted to recruit effective samples of 28 and 25, respectively. Issues with recruitment in elicitation have been recognized elsewhere. 27

As expected, the level of uncertainty in knowledge expressed by the individual experts was large, and group estimates were highly uncertain (as evident from the wide CrIs). In their feedback (Supplementary Appendix 6), experts consistently indicated that heterogeneity within the broad disease areas contributed to the uncertainty expressed in their responses. However, eliciting for "finer" definitions of disease, for example, 3-digit ICD codes, of which there are more than 1500, would have been unfeasibly burdensome. Therefore, future research could instead provide further information to experts to help them make judgments about which ICDs may matter the most within each disease area.
The design of an elicitation exercise requires a number of methodological choices to be made, many of which are example specific. This exercise used methods established in the literature and justifies the choices made. However, it is important to acknowledge that methods research in this area is limited and that little is known about how different choices affect results. For example, although there is some evidence that consensus methods present a number of challenges inherent to group interaction (see the Methods section), their accuracy relative to individual elicitation is largely unknown.

This article demonstrates that structured elicitation can feasibly be used to explicitly quantify the judgments required to delimit important policy problems, judgments that otherwise would still need to be made implicitly and without the support of relevant experts. In this work, we focused on achieving a relevant estimate of health opportunity costs, a central quantity for policy on health care resource allocation decisions. We have learned that the methods used here (i.e., the elicitation protocol) are applicable in this novel context. For example, the elicitation of the mode and bounds of an 80% CrI was widely understood by the experts, and experts working close to policy valued the summaries of the clinical experts' judgments provided to them. We also learned that there are challenges in eliciting policy-relevant, but broad-ranging, quantities. Such broad-ranging quantities are by definition uncertain, and structured expert elicitation makes this explicit.
| 7,950.6 | 2020-05-01T00:00:00.000 | ["Economics", "Medicine"] |
Hydromagnetic Waves in Cold Nuclear Matter
I consider a proton–neutron fluid mixture placed in an ultra-strong external static magnetic field and derive the spin-independent, small-amplitude disturbances in infinitely extended systems. As a theoretical framework I adopt a hydrodynamical model for the proton and neutron fluids moving in a Skyrme mean field derived from the time-dependent Hartree-Fock formulation of the many-body nuclear problem. From the mass and momentum balance equations and the Maxwell equations, I set up a system of equations governing the electromagnetic field and the continuum-mechanical fields of the mixture. Next, the hydromagnetic equations are linearized, and the occurrence of small-amplitude distortions of the velocity field is analyzed for various orientations of the constant external magnetic induction with respect to the wave propagation vector. The derivation of the above equations is carried out for the inviscid case.
Introduction
Static magnetic effects in nuclei are primarily determined by the fact that their constituents, i.e., protons and neutrons, possess their own magnetic moments. Because the nucleon mass is much larger than the electron mass, the magnetic moments of nucleons and nuclei are smaller, in the same proportion, than the orbital and spin magnetic moments of an atomic electron shell. In this respect, recall the tiny value of the nuclear magneton, µ_N = 3.1524512326(45) × 10⁻¹⁸ MeV/G. Consequently, in the absence of an external field or probe of magnetic nature other than the magnetic field of the electron shell, nuclear magnetism manifests itself in a subtle way, such as in the case of the nuclear hyperfine structure [1]. In more recent times and in connection with the investigation of the properties of dense matter, primarily motivated by the quest for a putative ferromagnetic state of superdense matter, it was pointed out that the magnetization of asymmetric nuclear matter [2][3][4] and, in particular, neutron matter [5][6][7][8] due to magnetic fields in excess of 10¹⁷ G is likely to affect the nuclear equation of state (EOS) of magnetic stars.
On the other hand, the manifestation of dynamic magnetic properties in nuclei is well documented for collective excitations characterized by significant probabilities of M1 transitions [9,10].
It is well known that, in astrophysical environments, magnetic white dwarf pulsars can develop fields with strengths in excess of 10¹² Gauss (G) [11], and magnetars in excess of 10¹⁴ G [12]. An extremely rapid mechanism of magnetic field amplification during the merging of a binary neutron star system was reported in [13]. According to these authors, the existing neutron star magnetic fields (∼10¹² G) become amplified within the first millisecond after the merger, i.e., long before the collapse to a black hole can proceed, up to values of 10¹⁵ G, though, as they pointed out, it is highly probable that much stronger fields are generated during this violent process. On the other hand, massive stellar magnetized objects that undergo gravitational collapse tend to convert the huge quantity of available energy into the generation of fantastic magnetic fields as high as B ∼ 10²⁸ G [14]. For more details regarding the occurrence of huge magnetic fields developed in the astrophysical context, the interested reader may consult [15].
One should also recall that, under laboratory conditions, magnetic fields over a large range of values are produced. For example, the ephemeral magnetic fields produced at CERN in proton-proton and nucleus-nucleus collisions at ultra-high energies are estimated to attain values as high as B ≈ 10²¹ G, corresponding to a collision time of t₀ ≈ 0.1 fm/c (see [16] and references therein). Such strengths already surpass the critical magnetic field that causes changes in the structure of the QCD vacuum and are therefore of no relevance for our investigation. On the other hand, the highest magnetic field currently measured under terrestrial laboratory conditions is significantly lower. Very recently, the newly developed megagauss generator system operating at the Institute for Solid State Physics (University of Tokyo) generated a magnetic field strength of 1.2 × 10⁷ G for around 100 microseconds, a value that dwarfs almost any artificial magnetic field ever recorded on Earth [17].
It was pointed out by Hannes Alfvén that, in a conducting fluid subjected to a constant magnetic field H₀, the electric currents produced by the mechanical displacements of charges will produce a mechanical stress that alters the dynamical behavior of the fluid [18]. More precisely, a new type of wave (the Alfvén wave) is generated and propagates along the direction of the imposed magnetic field with a speed v_A ∼ H₀. In Alfvén's view, the magnetic field lines are pictured as elastic strings in a dynamic process, and therefore the square of the intrinsic magnetic field plays a role analogous to the elastic shear modulus. Note that the velocity of shear waves in elastic media is v_S = √(µ/ρ), where µ is the shear modulus and ρ is the body's density. Thus, for a region of the sun where the magnetic induction is B₀ = 15 G and the density is ρ = 5 kg/m³, the velocity amounts to v_A ∼ 60 cm/s. As a first application in the astrophysical context, Alfvén proposed a scenario for the generation of strong magnetic fields on the spots of the sun by surmising the transmission of a magnetic field disturbance δH, produced in the sun's center, towards the surface via transverse hydromagnetic incompressional waves [19]. These waves propagate along the lines of the sun's general magnetic field H₀. The existence of low-frequency transverse waves across a finite-conductivity liquid placed in a constant magnetic field was verified in a laboratory experiment by Lundquist using a cylindrical geometry [20]. Another Swedish scientist extended the framework put forward by Alfvén to compressible liquids, such that longitudinal waves associated with compressions of the frozen-in magnetic field were predicted [21]. Some years later, Alfvén put forward the challenging idea that, in a manner similar to the sun, transverse hydromagnetic waves are generated by the perturbation of the nucleus's intrinsic magnetic field [22]. In his short note, he commented that, for a nucleus with electric conductivity assumed to be infinite and for what he called "reasonable values" of the external magnetic field strength and the nuclear mass number A, the eigenfrequency of the lowest hydromagnetic mode, macroscopically pictured as a torsional wave along the direction of the magnetic field, is of the order of a few keV. In recent times, Bastrukov et al. revisited the problem raised by Alfvén, developed a simple nuclear-fluid collective model, and concluded that energies in the range of the giant dipole resonance (GDR) are obtained for the hydromagnetic resonance, provided the magnetic field falls in the interval 3 × 10¹⁷ G ≤ B ≤ 9 × 10¹⁷ G [23].
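The 60 cm/s figure can be checked directly from the standard SI expression for the Alfvén speed, v_A = B₀/√(µ₀ρ); the short snippet below reproduces it (the formula and constants are textbook values, and the inputs are the ones quoted above).

```python
# Quick check of Alfvén's solar example: B0 = 15 G, rho = 5 kg/m^3.
import numpy as np

mu0 = 4e-7 * np.pi          # vacuum permeability [T*m/A]
B0 = 15e-4                  # 15 G expressed in tesla
rho = 5.0                   # mass density [kg/m^3]

v_A = B0 / np.sqrt(mu0 * rho)
print(f"v_A = {v_A * 100:.0f} cm/s")   # ~60 cm/s, as quoted in the text
```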
The hydromagnetic oscillations of a fluid sphere were considered mainly in connection with self-gravitating bodies (stars in which there is a prevailing magnetic field) (see [24] and references therein). In view of the bounding character of the gravitational interaction, an important issue in this context concerns the stability of such excitation modes. In finite nuclear systems, where gravitation is not important, the stability is dictated, in turn, by the balance between nuclear and Coulomb forces.
In recent years, there has been renewed interest in the topic of collective excitations in exotic nuclear matter, as encountered in the inner crust of neutron stars and in supernovas [25]. Previous exploratory theoretical investigations of the shell structure of nuclei in such an environment concluded that a magnetic field on the strength scale B ∼ 10¹⁶-10¹⁷ G can significantly shift the nuclear magic numbers of the iron region towards smaller mass numbers [26]. For magnetic fields of this order of magnitude, covariant density functional theory predicts a significant change in nuclear masses and radii [27]. The implementation of advanced microscopic techniques, such as Hartree-Fock-Bogoliubov+QRPA, encounters serious difficulties due to the nontrivial shapes acquired by neutron-rich nuclear clusters immersed in a superfluid ocean of neutrons. An important problem raised by the authors of the aforementioned studies concerns the validity of the Wigner cell approximation. A suggestion to treat nuclear matter wave phenomena in neutron stars by resorting to the continuum mechanics of neutron-rich liquid crystals was made in [28].
Below, I investigate the occurrence of hydromagnetic waves in infinite nuclear matter, portrayed as a normal fluid mixture composed of protons and neutrons in a very strong magnetic field.
Fluid-Dynamical Description of a Neutron-Proton Fluid Mixture Placed in an External Uniform Magnetic Field
The foundations of nuclear hydrodynamics applied to the study of the Giant Dipole Resonance (GDR) can be found, for example, in [29,30]. This framework was subsequently extended to the case when the particles of the mixture interact via Skyrme forces [31,32].
By introducing the constituent densities ρ_{p,n} and velocities χ_{p,n}, the kinetic energy of the fluid mixture can be written in terms of these fields. The nucleons are assumed to move in a mean field described by a Skyrme-parametrized energy density, H_Sky, expressed in compact form in terms of the total and the q = n, p components of the local densities ρ, ρ_q, the kinetic energy densities τ, τ_q, as well as the mass current.

The terms ∼ρ², ρ_q² result from the central short-range component of the Skyrme interaction, whereas the term on the last line of (2) originates from the density-dependent short-range part of the force. In this choice of the Skyrme energy density, spin-dependent (spin-orbit and tensor spin-gradient) terms are dropped, since magnetic spin waves are not addressed in the present paper. The gradient terms in the densities are also neglected since, in the ground state, the fluid mixture is assumed to be homogeneous. The kinetic energy density, ∼τ_q, is treated within the Thomas-Fermi approximation [34]. Note that the Galilean invariance of the Skyrme interaction is accounted for by terms of the type ρτ − j². The terms ∼ρτ, ρ_qτ_q have their roots in the nonlocal part of the short-range interaction.
Next, the mean-field one-body potential U_q is derived by taking the functional derivative of the energy density with respect to the q-th constituent density, and the internal energy can then be written in terms of these potentials.

In this paper, I consider an external static magnetic field, B₀, that starts to act at time t = 0 upon the neutron and proton fluids. The interaction of the charged component of the fluid mixture with the external electromagnetic field of strength (E, B) reads [35] W_em = e ∫ dr ρ_p χ_p · (E + χ̇_p × B), where χ_p is the proton fluid displacement field, which is trivially related to the proton fluid velocity field.

In order to derive the dynamical equations governing the continuum-mechanical system combining the proton and neutron fluids, I apply the Hamilton principle to a four-fold action integral [36]. The last term in this action integral is related to the mass balance in the mixture and is added to the Lagrangian by means of the undetermined multipliers λ_{p,n}. As shown in a previous paper [37], the particles of the fluid mixture are subjected to a virtual variation with respect to the dynamical variables ρ_q and χ_q. As a result, the Lagrange equations for the proton and neutron fluid velocities are obtained, provided the quadratic terms in the velocities are neglected. The hydrodynamical equations established above are supplemented with the Maxwell equations relating the electromagnetic fields to the charge and current distributions of the fluid mixture. In the nonperturbed state, the corresponding equilibrium relations are satisfied.
Hydromagnetic Waves in Cold Neutron-Proton Mixtures
At t = 0, the p−n "plasma" is perturbed, and consequently the density, mean-field potential, velocity, and electric and magnetic fields are varied, where ρ_q denotes the equilibrium densities and δρ_q ≪ ρ_q. Since I neglect the contributions generating nonlocal effects, e.g., ∇ρ_q⁰, ∇U_q⁰ = 0, as well as the second-order terms, and since small perturbations are assumed, a linear dependence of the one-body potentials on the proton and neutron density fluctuations is left, where the G-coefficients in the Skyrme parametrization are given in Section 3 of ref. [32].

The linearized hydrodynamic Equations (9) of the p−n "plasma", expressing the momentum balance, are supplemented with the equations ensuring the mass balance of the proton and neutron fluids. The Maxwell equations for the fluctuated fields are obtained by substituting the transformations (13) in (11). Since the external magnetic field is prone to induce charged currents flowing on closed loops (vortical currents), it is reasonable to appeal to the incompressibility approximation, i.e., I assume that the total density remains constant during the excitation of hydromagnetic modes, and therefore δρ_p = −δρ_n. Due to this constraint, 10 independent variables are left: δρ_p, v_p, δE, and δB (δρ_n and v_n are therefore not independent). Assuming that these variables possess plane-wave solutions, i.e., ∼ exp[i(k · r − ωt)], the time derivative and gradient operators are subjected to the substitutions ∂/∂t → −iω and ∇ → ik. Thus, the continuity Equation (17), the Euler equation for the proton fluid (15), and the Maxwell Equation (18) are recast in the form of a coupled system of algebraic equations. In these equations I introduced the square of the speed of sound in nuclear matter, the proton plasma frequency, and the cyclotron frequency, where v_A = B₀/√(mµ₀ρ_p⁰) is the Alfvén velocity. From the above system of algebraic equations, the dispersion relations for the perturbations are then obtained.

Let us consider a configuration with B₀ aligned to the z-axis. First, I choose the orientation with k aligned to B₀ (k ∥ B₀); thus k_⊥ = 0 and k_z = k, the motion in the x−y plane is separated from the motion along the z-axis, and therefore two dispersion relations are obtained. Let us focus on the first equation, (28), which encodes the effect of the magnetic field inducing the rotatory motion in the x−y plane. In this case, there are two branches resulting from a cubic equation. I note in passing that, due to the long-range interactions exhibited by the plasmon term, the relationship between ω and k is nonlinear [38]. For k → 0, there are two branches: for B₀ > 0, the plasmon oscillation bifurcates into a magnetic wave (ω → ω_c when B₀ → ∞) and a wave decaying with an increasing magnetic field (see Figure 1). In the low-k regime, this equation provides approximately a combination of plasma and acoustic modes (a plasma-acoustic wave).

In this paper, using the framework of nuclear hydrodynamics for two boundless fluids moving in a Skyrme nuclear mean field and excited by an external static ultraintense magnetic field, I described the generation of small-amplitude waves for various geometrical configurations. The conditions allowing the generation of magnetic waves in nuclear matter were derived, and it was shown that this mode arises in combination with plasma and acoustic modes.
It is important to point out that, in the case of an increasing B₀, the wave propagating along the direction of the imposed magnetic field with a speed approaching v_A can be ascribed to the Alfvén wave type.
I should also remind the reader that the isospin effect is incorporated in the speed of sound c_s. It is transparent from Equation (23) that, due to the dependence on the coefficients G_pp and G_pn, expressed according to [32] in terms of the strengths B₂, B₄, and B₈ pertaining to components of the Skyrme force (2) with isovector content, the isospin effect is visible in hydromagnetic modes containing an acoustic component.
Note that, in the past, the possibility of exciting hydromagnetic modes in spherical nuclei was discussed under very restrictive assumptions [22,23]: restriction to a single, incompressible nucleon fluid (note that, in the present approach, both fluid components are compressible), ignoring the displacement current in the last Maxwell Equation (18), and neglecting the nuclear interaction. The present investigation can be straightforwardly extended to finite systems, the only additional requirement being the selection of appropriate boundary conditions. It was inferred in the previous section that, in infinite nuclear matter, a strong magnetic field, i.e., B₀ > 10¹⁰ T (10¹⁴ G), gives rise to a significant modification of the dispersion relation for standard plasma and sound oscillations and to the dominance of magnetic (Alfvén) perturbations at large B₀ values. On the other hand, the previously mentioned exploratory investigations on magnetic-field-induced shifts in nuclear masses [26], the excitation of Alfvén modes in spherical nuclei [23], or the alteration of nuclear matter properties [6] point to higher values of the magnetic field at which nuclear properties are affected, i.e., B₀ ∼ 10¹⁶-10¹⁷ G. Such high fields are suspected to arise during the merging of a binary neutron star system. The generation of wave motion in nuclear matter by such intense magnetic fields contributes, once friction is included in the hydrodynamical approach [40,41], to the heating of these astrophysical objects and therefore affects, in a non-negligible manner, the merging process.
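To give a feel for the scales at the upper end of the field range just mentioned, the sketch below evaluates the Alfvén velocity together with the proton cyclotron and plasma frequencies for a field of 10¹⁷ G and a proton number density of 0.08 fm⁻³. The density value and the use of the textbook SI expressions ω_c = eB/m and ω_p = √(e²n_p/(ε₀m)) are assumptions made for this illustration, not quantities taken from the paper.

```python
# Order-of-magnitude check (illustrative inputs, SI units).
import numpy as np

e, m = 1.602176634e-19, 1.67262192e-27          # proton charge [C] and mass [kg]
mu0, eps0 = 4e-7 * np.pi, 8.8541878128e-12      # vacuum permeability / permittivity
hbar = 1.054571817e-34                          # reduced Planck constant [J*s]

B0 = 1e17 * 1e-4                                # 1e17 G expressed in tesla
n_p = 0.08 * 1e45                               # 0.08 fm^-3 expressed in m^-3

v_alfven = B0 / np.sqrt(mu0 * m * n_p)          # v_A = B0 / sqrt(mu0 * mass density)
omega_c = e * B0 / m                            # proton cyclotron frequency [rad/s]
omega_p = np.sqrt(e**2 * n_p / (eps0 * m))      # proton plasma frequency [rad/s]

MeV = 1.602176634e-13                           # joules per MeV
print(f"v_A ~ {v_alfven:.2e} m/s ({v_alfven / 3e8:.2f} c)")
print(f"hbar*omega_c ~ {hbar * omega_c / MeV:.2f} MeV")
print(f"hbar*omega_p ~ {hbar * omega_p / MeV:.2f} MeV")
```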
The present framework can be straightforwardly extended to include friction effects. It is well known that the inclusion of viscosity (controlled by the width Γ, a quantity that can be fixed by experiment for electric giant resonances) provides a time scale for the decay of the collective mode, i.e., τ_dec ∼ Γ^−1. Making the reasonable guess that the nature of viscosity is the same in finite and infinite nuclear matter and for approximately the same range of energies, and noting from previous work [37] that 0.42 MeV ≤ ħΓ ≤ 2.25 MeV, the decay time for hydromagnetic modes should be, at most, ∼10^−20 s.
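As a quick numerical cross-check of the quoted lifetimes, the minimal sketch below converts the width range cited from [37] into decay times via τ ≈ ħ/(ħΓ); only the two boundary values from the text are used, everything else is a standard constant.

```python
# Rough lifetime estimate tau ~ hbar / (hbar*Gamma) for the quoted width range.
HBAR_MEV_S = 6.582e-22  # hbar in MeV*s

for hbar_gamma in (0.42, 2.25):  # hbar*Gamma in MeV, range quoted from ref. [37]
    tau = HBAR_MEV_S / hbar_gamma  # decay time in seconds
    print(f"hbar*Gamma = {hbar_gamma:4.2f} MeV  ->  tau ~ {tau:.1e} s")

# Output: tau between ~2.9e-22 s and ~1.6e-21 s, i.e. below the ~1e-20 s bound quoted above.
```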
Another open issue concerns the inclusion of spin degrees of freedom and the role played, in realistic circumstances, by the surrounding electrons. Simple arguments point to a suppression of the nuclear hydromagnetic effect due to screening. However, to assess the screening effect, as well as the generation of additional spin-dependent hydromagnetic modes, in a coherent manner, a more elaborate version of the present continuum-mechanical framework is needed. A viscous fluid mixture with 6 components (protons + neutrons + electron spin-up and spin-down fluids) could be envisaged, at the price of dealing with complicated couplings in the equations of motion.
| 4,327.4 | 2023-05-29T00:00:00.000 | [ "Physics" ] |
Communication when it is needed most—the past, present and future of volcano geoheritage
Our understanding of volcanoes and volcanic systems has been communicated through legends maintained by indigenous communities, and through books and journal articles written for the scientific community and for the public. Today we have additional means to communicate knowledge and information, such as social media, films, videos and websites. To build on these mechanisms, we propose a comprehensive system of information collection and dissemination which will impact and benefit scientists, officials and politicians, students and the public at large. This system comprises (1) an information web for broad understanding of volcano systems and volcanology, and (2) a second web for individual volcanoes. This integrated geoheritage approach provides a template for information dissemination and exchange in the twenty-first century.
Introduction
Volcano geoheritage generally refers to a geographical site that is accepted to have significant volcanological value in terms of intrinsic and/or cultural significance (Németh et al. 2017). In this contribution, we use the term volcano geoheritage in a different fashion, one which encompasses knowledge bases, data sources and other information related to volcanoes and volcanology. We examine how this knowledge is transmitted and communicated to the broader scientific and public communities, including past practices, current trends and future directions.
Over the last two centuries, advances in volcanology have resulted from tireless observations and communications with colleagues via professional meetings, personal discussions and teaching. The actual heritage is left in the form of the subsequent communications, be they lectures, papers, books or blogs. These materials are digested and passed on via cited materials to today's research.
Life has changed. Earlier influential volcanologists spent more time on their research and the resulting product was usually a book, for example Jaggar (1947), rather than the frequent journal articles that we now publish every year. We sometimes view our contemporary "heritage" as a means to justify the next research proposal. We should seriously consider taking more time to think about our research and putting the results into a significant volume or website for lasting impact. Those of us who publish extensively might examine who actually reads and cites our work. Will your research results survive into the future?
Specialty books
These include books on igneous petrology (Carmichael et al. 1974; Hatch et al. 1973) and pyroclastic rocks (Fisher and Schmincke 1984; Heiken and Wohletz 1985; Westgate and Gold 1974). An accessible book on geothermal energy by Wohletz and Heiken (1992) is still available online for free. A particularly comprehensive book on the geophysical monitoring of volcanoes was published by Dzurisin (2007), and a guide to thermal remote sensing of volcanoes by Harris (2013). A comprehensive guide to volcanic hazards by Blong (1984) covers physical volcanology, social impacts, and effects on economic activity, with the main goal to reduce losses, both human and infrastructure. Volcanic Plumes by Sparks et al. (1997) covers theory, experimentation and observation of volcanic eruption plumes. The development of ideas regarding volcanic eruptions is well presented by Sigurdsson (1999).
Historical volcanic eruptions that have influenced our scientific understanding
Progress in understanding volcanoes has often followed specific volcanic eruptions, many with world-wide effects and/or great loss of life.
Vesuvius, Italy, CE 79
The eruption was described by Pliny the Younger in letters to colleagues that have since been compiled and published (The Younger Pliny, translated by Radice 1963). These writings include the origin of the term "Plinian eruption."
Vulcano, Italy, 1880
Mercalli and Silvestri (1891) published a very detailed, complete report on the Vulcano eruption of 1880.
Krakatau, Indonesia, 1883
Comprehensive descriptions by Verbeek were published in 1885 and sponsored by the Dutch Government. A more familiar book to most of us, published in English in 1888, is the British Royal Society Report on the eruption of Krakatoa (English spelling). On the 100th anniversary of the Krakatau eruption, Simkin and Fiske (1983) included both reports, plus every journal article that had been published until 1983. The Krakatau eruption has provided basic understandings of caldera formation, volcanic tsunamis, volcano sounds, pumice fall and drift and global atmospheric effects. It has also been the basis for some fictitious movie fantasies, including "Krakatoa, East of Java."
Laki Fissure, Iceland, 1783-1784
Global circulation of ash and gases created a blue haze over Europe, which led to an unseasonably cold summer and subsequent famine. The connection between these atmospheric effects and the Laki eruption was made by Benjamin Franklin when he was the US ambassador to France. In Iceland, the eruption was described by Steingrimsson (translated in Steingrimsson 1998). Modern publications that describe the eruption have been published by Sigurdsson (1982) and Rampino and Self (1984).
Mont Pelée, Martinique, 1902
This eruption was described initially by the French volcanologist Alfred Lacroix (1904). Lacroix was one of the first to describe pyroclastic density currents (nuées ardentes) and their catastrophic effects on the town of St. Pierre. Mont Pelée was also where rapid dome growth was observed in 1902 and again in 1929-1932 (Perret 1937).
Parícutin, Mexico, 1943-1952
This was an event where the birth of the volcano from a fissure in a cornfield led to a long-lived eruption that produced a large scoria cone and lava flows that engulfed the villages of Parícutin and San Juan Parangaricutiro. Publications by Wilcox (1954), Fries Jr. (1953) and Fries Jr. and Gutiérrez (1950) documented the eruption. Later studies focused on the effects of the eruption upon human, animal and plant communities. These publications and later studies were assembled into one volume by Luhr and Simkin (1993) in honor of the eruption's 50th anniversary.
Kilauea and Mauna Loa, Hawaii, USA
Frequent eruptions of these Hawaiian volcanoes have provided us with a steady flow of influential publications since the nineteenth century, beginning with Dutton (1884), followed by Dana (1890) and Brigham (1909), and then by Jaggar and the staff of the Hawaiian Volcano Observatory (HVO). Today, HVO's communications activities are mostly online but also include a remarkable record of publications, for example, descriptions of recent activity along Kilauea's east rift. An understanding of these volcanoes can be found in the massive volumes Volcanism in Hawaii (Decker et al. 1987). We owe what we understand about most aspects of active basaltic volcanism to publications about Hawaii. Beginning with Jaggar, most geophysical observations of eruption precursors and subsequent eruptions were developed at the HVO (Apple 1987).
Mount St. Helens, USA, 1980
The 1980 eruption was documented in a weighty volume edited by Lipman and Mullineaux (1981). Continued activity (mostly dome growth) from 2004 to 2006 was described in an equally weighty volume edited by Sherrod et al. (2008). The Mount St. Helens eruption changed world views on sector collapse, blast effects, volcanic mudflows (lahars) and hazard mitigation.
Mount Pinatubo, Philippines, 1991
The explosive eruption of Mount Pinatubo, the second most voluminous eruption of the twentieth century, threatened a million people in the surrounding Philippine countryside and was forecast during the rapid response of volcanologists from the Philippines and the USA (Newhall and Punongbayan 1996). A quick study of previous eruptions provided an idea of the extent of the deposits from earlier eruptions. The residents were a bit skeptical about the forecasts, but a showing of a volcanic hazards video (Krafft et al. 1995) initiated a mass evacuation. Pinatubo's eruption left thick pumice and ash deposits, which were quickly eroded during heavy rainfall from a typhoon, generating fast-flowing lahars. The countryside was devastated and commerce affected (Rodolfo 1995). This was an eruption that resulted in interdisciplinary studies of the effects of volcanic eruptions on the population, an approach that has since been used at many eruptions around the world. Much of what was learned about the Pinatubo eruption was published in an 1100-page book edited by Newhall and Punongbayan (1996).
Galunggung, Indonesia, 1982
On June 24th, 1982, a British Airways flight from Kuala Lumpur, Malaysia, to Perth, Australia, flew into a volcanic ash plume from Galunggung volcano, Java. The aircraft's windows were sandblasted and all four engines shut down. Thanks to excellent piloting, the engines were restarted to make an emergency landing in Djakarta (Casadevall 1994). This event was a wakeup call to the aviation industry and to volcano observatories about the dangers of volcanic eruption plumes.
In 1986, the International Civil Aviation Organization created a volcanic ash warnings (VAW) study group. The VAW study group met in Montreal, Canada, to establish standards and rules for flights near volcanoes. The VAW study group built the framework needed to bring together volcano observatories, meteorological observatories, flight controllers, airlines and pilot associations. One of the VAW's goals was to establish an interdisciplinary meeting of volcanologists, aircraft makers, pilots and flight control experts, which was held in 1991 (Casadevall 1994). Using the flight rules established by the VAW, plus more sophisticated ways of observing eruption plumes, later prevented potential accidents.
Effects of eruptions on life, society and infrastructure
An understanding of volcanic deposits has been important to archeologists for two reasons: (1) the preservation of remains and artifacts, and (2) the use of volcanic ash beds to date fossils, a common practice used by paleoanthropologists. For example, working in the Ethiopian rift, where fossil remains date back to 6 million years, knowledge of the tephrochronology was crucial to establishing the ages and paleoenvironment of these early hominins (Woldegabriel et al. 2000). Early works about volcanoes and archeology were published for Central America by Sheets (1983) and for sites across the world (Sheets and Grayson 1979).
Cultural legends about volcanoes were summarized in "Legends of the Earth" by Vitaliano (1973). Myths in many cultural groups in Papua New Guinea about "the time of darkness" were linked to a widespread eruption cloud from Long Island Volcano about 300 years ago (Blong 1982).
Myths abound about the effects of the Late Bronze Age eruption of Thira (Santorini) during the seventeenth century BCE and include stories of Jason and the Argonauts and the origin of "Atlantis." This is a story that involves volcanology, archeology, mythology and the possible end of Minoan culture (Luce 1969).
Volcano geoheritage has become an important means to understand a wide variety of historical and mythological events. Major changes in everyday human activities have followed many volcanic eruptions, a topic covered comprehensively by Blong (1984).
Adventure and volcano tourism
When one of us (GH) was a boy, he was intrigued and stimulated by Haroun Tazieff's "Craters of Fire" (1952). In hindsight, it is evident that Tazieff was an active volcanologist but attracted the public's interest mostly because he was a "daredevil scientist." Most of us now approach volcanoes with caution; this is excellent for self-preservation but somewhat boring for the public. A great contemporary account of volcanologists and the problems faced while studying eruptions is by Dick Thompson, a writer for Time magazine, who published "Volcano Cowboys" (Thompson 2000). It is a superb contribution to volcano geoheritage by a journalist.
Many of the eruptions of Vesuvius in the 1700's (CE) were observed by Italian scientists but became most famous across Europe because of publications by Sir William Hamilton, the British Envoy to Napoli (1768-1795). Napoli and Vesuvius became an important destination for well-educated (and wealthy) Europeans as part of the "Grand Tour" during the late 1700's and 1800's. A modern history of Vesuvius, including the "Grand Tour," was published by Scarth (2009). Visitors included J. W. Goethe, writer and philosopher, who wrote about the eruptions of Vesuvius in Italian Journey (Goethe 1816-1817). A modern heritage from the Grand Tour has been the thousands of contemporary tours to Vesuvius, Etna and Stromboli, although these are mostly organized by travel agents and travel bureaus. A tour guide to volcanoes in America's National Parks was published by Decker and Decker (2001).
Volcano histories and eruption reports
When the International Association of Volcanology was formed in 1922 (IAV, now IAVCEI) (Cas 2022), one of its goals was to catalog active volcanoes and their eruptions. The product was a series of monographs by region, which were published from 1951 to 1973; the volumes were valuable but incomplete, and many countries were left out. The limited number of these monographs has reduced their heritage value. Early reports of eruptions in Hawai'i were sent out as the Volcano Letter; all of the Volcano Letters were later organized into one volume by Fiske et al. (1987). The Smithsonian Institution sent out notices about recent eruptions from the Center for Short-Lived Phenomena. A more lasting heritage came in the form of the Volcanoes of the World by Simkin et al. (1981), which was updated in 1994 (Simkin and Siebert 1994) and again more recently (Siebert et al. 2011).
The many volcano hazard maps published by geological and volcanological surveys and universities are an important and useful heritage. Many are based on maps of previous deposits, and some are based on modeling of eruption phenomena. The hazard maps that are most effective are those that can be understood by the public in areas at risk. We explore these in more detail below.
The present and the future
The heritage we are obligated to leave is the foundation for the next generation of volcanologists. What parts of this heritage will survive? Will these include lectures and training, websites, Facebook pages, Zoom meetings, journal articles, books, or something else? What is the "preservation potential" of such materials? In some ways this is challenging since there are now hundreds of volcanologists who each have their own means of communication. There is a myriad of websites, blogs, both professional and commercial journals, and yet there is still a substantial publication base (both hard-cover and e-books).
We also have the challenge of communicating to the public factual, interesting and exciting materials. Most everyone, especially children, are stimulated by material about volcanic eruptions. How do we communicate these events without "dumbing them down"?
The various lines of communication described above have served us well in volcanology for better understanding volcanoes and communicating this understanding. Today we are in an enviable position of having access to many new and exciting approaches to communicating knowledge at many different levels, including online learning, which has been accelerated by the COVID-19 pandemic. People are actively exploring novel means and ways to transfer knowledge and understanding. In designing and developing these new approaches, we should be mindful of several key principles and objectives that should guide us in our efforts. First, access to information should be simple and easy for all. Carefully designed media (e.g., websites, books, datasets and films, to cite several examples) are important means of communication. They need to be designed so that they are widely used. Second, the principles of Equity, Diversity and Inclusion (EDI Principles) should form a fundamental pillar when designing new material. It is challenging yet essential that everybody has equal access to materials and information. This point is discussed in detail below. Third, in the design and implementation of new communication approaches, we must keep in mind that much of the world has limited resources. Given this reality, how can we best provide access to information for people and institutions with limited resources?
The 2020-2022 COVID-19 Pandemic provides an excellent template and opportunity to examine these issues. The pandemic has severely restricted our activities in every way imaginable and unimaginable. We have not been able to travel. We have not been able to physically meet and work with colleagues. Data collection has been difficult and, in some cases, impossible. Practically everything has been slowed down and made more difficult. Yet these limitations have created alternative mechanisms which work effectively, most notably virtual meetings at all scales. In this sense, the world has become a more equal and equalized environment, and we should capitalize on these developments as we push forward into the future.
With the Pandemic, therefore, we are at a fundamental crossroad in time which presents us with an opportunity to take new directions and try new approaches for communicating volcano geoheritage. Many of the ideas that we propose in the following pages are not new; a number of researchers have thought deeply about this subject. Our goal here is to provide a holistic and integrated view of communicating this geoheritage during the twenty-first century.
There are a number of interesting starting points. For example, Professor Bill Rose of Michigan Technological University has envisaged a comprehensive and central source of information for volcanoes, one where somebody can quickly and easily access and obtain key data and knowledge on a volcano (see for example https://pages.mtu.edu/~raman/VFuego/VFuego/Welcome.html for an excellent compendium on Fuego volcano in Guatemala). Another important central source of information is Vhub (https://vhub.org/). Here we propose to develop and expand this concept in a number of ways, leading to a comprehensive and linked information web for volcanoes and volcanology.
For this information web, we propose two approaches which are linked at different scales. The first web comprises a general compendium on volcanoes, volcanism and volcanology, including books, films and educational resources, to cite but three of many components which are explained in more detail below.
The second web is constructed for individual volcanoes, prioritizing those which pose the highest hazard and risk, e.g., the Decade Volcano program of the 1990's (IAVCEI Subcommission on Decade Volcanoes 1994). This second web includes hazard, risk and vulnerability maps, digital elevation models (DEM's) and educational videos, again to list only several of many elements which are described below.
By taking this approach, we would be in an excellent position to decide upon the truly important modes of communication, in the broadest sense, when the next large eruption occurs. These enhanced modes of communication will benefit our understanding of such eruptions, our preparation and our mitigation of these events. We might consider planning for two timescales of eruptions: a basaltic type which occurs on a decadal basis (e.g., Miyakejima 2000, Bárðarbunga 2014, Kilauea 2018), and another larger, less frequent and more silicic type of eruption (e.g., Katmai 1912, Pinatubo 1991). Both the unrest associated with such eruptions and the scientific advances to be made in the future provide opportunities for us now to design the most effective means to disseminate knowledge, data and information flowing from these dynamic systems. The details for each web are discussed in the following sections.
An integrated web of knowledge and communication for volcanoes and volcanology
Placing a date on the birth of modern volcanology is difficult, but we can safely say that volcanoes have been studied intensively for more than a century, with accelerations after significant eruptions. Careful quantification of eruptions and eruptive products perhaps began with the pioneering work of George Walker in the 1960's and 1970's. However, we must at the same time also acknowledge that many older and prehistoric societies worldwide have both lived with volcanoes and also conducted direct observation-based "research" on volcanoes (Vitaliano 1973).
Today we have an outstanding knowledge base, comprising observational, experimental and theoretical aspects, which provides a means to evaluate, study and understand volcanism in its many manifestations. These include processes taking place in the subsurface (e.g., magma chambers, intrusions, conduits), at the surface (e.g., pyroclastic flows, lava flows, lahars), and in the atmosphere and oceans (e.g., tephra dispersal, volcanic gases, ice cores). This knowledge base therefore serves as the central reference point into which a series of component parts feed different elements, which together contribute to a holistic vision of volcanoes and volcanology for the future.
The component parts are twofold in nature. The first group involves "outreach and learning" and includes a means by which individuals and groups can gain knowledge about volcanoes, volcanism and volcanology. Many of these components are interactive in nature. The second group involves "research and using" components which comprise a series of information and data sets which we consider essential to the study of volcanoes. We now examine these individual "outreach and learning" and "research and using" components, making reference to Fig. 1, which displays our concepts in schematic and graphical form.
Outreach and learning
Equity, diversity and inclusion (EDI) When we communicate and access information related to volcanoes, volcanism and volcanology, the concepts of equity, diversity and inclusion are important to consider. In terms of public access to communication and information, it is essential to identify and prioritize marginalized groups that are both underrepresented in community decision-making processes and disproportionally impacted by disasters. These are the people who suffer the most and lose the most, whether they are living in the global north or the global south, and include women and girls, the very young and the very old, the disabled and injured, the socioeconomically disadvantaged, political and economic refugees, as well as indigenous, racial and ethnic minorities. Access to communication and information is not a one-way street. While under-represented individuals and communities need to be prioritized in terms of access to information, they can offer significant knowledge and insight into eruptions and unrest. For scientists, the same EDI principles discussed above are relevant. Key issues include traditional vs. non-traditional roles of scientists, gender parity at all levels, citizen science and leadership roles, including but not limited to decision-making, editorships of books and journals and heading organizations. The topic of EDI is a difficult one at times, yet one which needs to be a core principle for communication and volcano geoheritage.
The encyclopedia of volcanoes, 3rd edition The current second edition of the Encyclopedia of Volcanoes, published in 2015, is the principal reference resource for practicing volcanologists, graduate students and others (Sigurdsson et al. 2015). This is because the Encyclopedia is both comprehensive and detailed in its coverage of volcanism, making it a key reference work for the volcanological community today. The Encyclopedia includes 78 chapters on a full range of topics related to volcanology. Hence this 1456-page compendium is an ideal starting point for exploring a topic related to volcanoes and volcanism. What might a third edition look like? It would likely remain comprehensive. Ideally it would be fully open access, allowing everyone equal access to the information contained within. It would be easily accessible electronically. It could also be constructed to be easily updated, i.e., a volcano wikipedia with a moderator or moderators as appropriate. Finally, it would be fully integrated into the "outreach and learning" component that we are proposing here. Each of the 78 chapters could be linked to (a) a learning module and (b) a video module, which are described below. This type of integration could redefine the concept and meaning of an encyclopedia in the twenty-first century.
Massive open online courses (MOOC's), Zoom courses and e-lectures
Since the Encyclopedia of Volcanoes covers 78 topics, we propose a series of MOOC's, Zoom courses and e-lectures which cover all these topics. Although practically this proposal is challenging, in principle it is straightforward. Creating such content could be highly flexible. For example, one topic might be covered by a 10-20 minute lecture, another might require an hour's lecture, while others could be examined in greater detail by a package of lectures or courses. A particular topic could be treated in a number of ways, e.g., one course aimed at the public and schools, another at the university and college level, and yet another for professionals and practitioners. As with the Encyclopedia above, these courses should be available to all for free or a small fee, they should be accessible (e.g., closed captioning), and they could be multilingual for greatest impact globally. Such efforts have already begun; examples include a MOOC on magma movement (https://www.edx.org/course/monitoring-volcanoes-and-magma-movements) and another on physical volcanology (http://www.ipgp.fr/en/physical-volcanology-mooc).
Films and videos Following the logic outlined above, films and videos could be produced for each of the 78 topics, again with the same flexibility in mind, ranging from short 3-10 minute video clips to hour-long in-depth examinations of phenomena. As with courses and MOOCs, videos and films could be produced for different audiences. Open access, accessibility and multilingual principles would be over-arching goals for maximum impact. An excellent example is the recent series of short videos on Vimeo covering topics such as pyroclastic flows, lahars and gases, which are produced in a number of languages and freely available to all (https://vimeo.com/volfilm).
Virtual conferences, meetings, workshops and fieldtrips New and provocative ideas are proposed, discussed and examined at on-site conferences, meetings, workshops and fieldtrips. Recent data and findings "hot off the press" from laboratories, observatories and field campaigns are presented for the first time at such meetings. Hence these in-person meetings are crucially important venues which advance volcano science. But they have significant drawbacks: cost and carbon emissions. The total cost of attending an IAVCEI meeting, for example, is typically several thousand US dollars for an individual, sometimes more. Such expenses prevent many people from attending such meetings. They simply cannot afford them, nor can their organizations. Second, the carbon cost of travelling to such a meeting in terms of greenhouse gases is substantial. We propose that such information exchanges be modified in such a way as to reduce cost and carbon and improve access and accessibility. A simple way to do this is to alternate in-person meetings with virtual meetings. For example, during one year the European Geosciences Union (EGU) annual meeting is in-person while the Fall American Geophysical Union (AGU) annual meeting is virtual. The next year they reverse. IAVCEI meetings could be modified in a similar fashion. Another possibility is a meeting which is partly in-person and partly online, i.e., hybrid, such as the Fall 2021 AGU annual meeting and the 2022 Cities on Volcanoes meeting in Crete. The COVID-19 pandemic is showing us how to conduct such meetings, and we strongly recommend that we not return to a pre-pandemic "business as usual" mindset for such events. At the same time, we recognize the crucially important interpersonal and mental health aspects of in-person meetings which virtual meetings do not capture.
Social media streams
The community continues to discover novel ways to use social media (Williams and Krippner 2018; Lowenstern et al. 2022). Many opportunities exist in the "learning" framework that we have outlined above for social media, including volcanic activity, education, EDI and meetings. Social media could be incorporated or embedded into an encyclopedia, courses, videos and films, to illustrate several examples. Social media can supply real-time information on current activity at a volcano (Yute et al. 2021). For example, a course on lava dome activity could be linked to social media recording unrest at actively growing lava domes, such as observed at Soufrière St. Vincent during December 2020-April 2021. Social media reports can be corrected and updated easily, unlike a journal or book publication, although such reports are generally not peer-reviewed and can be unreliable.
Research and using
Community approaches In the past several decades, the science of volcanology has become a highly integrated and collaborative discipline. This reality is reflected in a number of initiatives including IAVCEI commissions and networks, best practices, collaborative resources and large consortia. The IAVCEI commissions and networks (https://www.iavceivolcano.org/commissions-networks/) provide a means for researchers to collaborate, exchange data and ideas and meet in the field and at conferences and workshops. There is commonly synergy between two or more commissions with overlapping interests. Examples of best practices include volcano observatory consortia for eruption forecasting, hazard communication and long-term hazard assessment (Pallister et al. 2019), risk assessment in volcanology including hazard, exposure and vulnerability (Bonadonna et al. 2018), numerical model comparison of volcanic eruptive columns (Costa et al. 2016) and aeolian remobilization of volcanic ash (Jarvis et al. 2020), among others. Collaborative resources are numerous today. One of the best is Vhub (https://vhub.org/), a clearinghouse which offers a wide range of materials including simulation and modeling tools, collections of data and educational resources. Many observatories maintain data (see for example http://wwwobs.univ-bpclermont.fr/SO/televolc/dynvolc/), as do the online resources of most journals. Large consortia address large-scale regional issues and aspects of volcanology, commonly involving researchers from a number of countries. Examples include EUROVOLC (https://eurovolc.eu/), FUTUREVOLC (https://futurevolc.hi.is/) and CONVERSE (https://volcanoresponse.org/).
Volcano catalogues A number of catalogues, both past and current, have been produced. These provide basic and essential information for many volcanoes on Earth. Starting in 1951, IAVCEI produced a series of catalogues by region entitled "Catalogue of the active volcanoes of the world, including solfatara fields." The catalogue remains useful today, with many facts on eruption histories, eruptive styles, petrology and so forth. The Smithsonian Institution maintains a web-based compendium of data on many volcanoes (https://volcano.si.edu/), with an associated print compilation of essential information (Siebert et al. 2011). Web-based catalogues are also being actively developed and maintained, e.g., the European Catalogue of Volcanoes (https://volcanos.eurovolc.eu/#). The Springer series Active Volcanoes of the World includes a number of books, each of which provides a focus on important individual volcanoes or groups of volcanoes (https://www.springer.com/series/10081/books). Another Springer series, Advances in Volcanology, comprises books with both a thematic and geographic focus (https://www.springer.com/series/11157/books). These various collections all focus upon subaerial volcanoes and volcanism, with a notable gap in submarine volcanoes. Filling this gap is a challenge and should be a future priority.
Digital elevation models (DEM's) DEM's are a basic and fundamental element for monitoring and understanding a volcano's behavior. Without a current DEM, modern measurements and monitoring cannot be made. DEM's are fundamentally important for geophysical measurements and monitoring (Chirico et al. 2009), and they are also invaluable for physical volcanology (Albino et al. 2015), as landforms and landscapes change during eruptive activity. Modern DEM's commonly have extremely high resolution, typically one meter or even better. When a volcano is active, DEM's can be revised and updated to reflect surface changes. In turn, such updated "real-time" DEM's can be used quantitatively to reveal subsurface processes such as magma movement. In cases of relatively high DEM uncertainty, options are available, such as the stochastic approach of Favalli et al. (2005). A challenge for future DEM's is increased spatial and temporal resolution at the scale of tens of centimeters or even centimeters (Azzaro et al. 2012). A current source of DEM data is the Shuttle Radar Topography Mission (SRTM) (https://www2.jpl.nasa.gov/srtm/).
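As a minimal sketch of how repeat DEM's can be used to quantify surface change, the example below differences two co-registered elevation grids with NumPy; the grids, cell size and the simulated dome growth are hypothetical placeholders (in practice the rasters would be loaded from GeoTIFFs with a raster library).

```python
import numpy as np

# Two hypothetical co-registered 1 m DEMs of the same area (metres above datum);
# in practice these would be read from GeoTIFFs of successive surveys.
dem_before = np.full((500, 500), 1200.0)
dem_after = dem_before.copy()
dem_after[200:260, 240:300] += 35.0  # pretend a lava dome grew in this patch

# Elevation difference highlights where material was added or removed.
dh = dem_after - dem_before
added_volume = dh[dh > 0].sum() * 1.0 * 1.0  # cell area = 1 m x 1 m

print(f"Max uplift: {dh.max():.1f} m, volume added: {added_volume:.0f} m^3")
```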
Hazard and risk maps
Maps that illustrate hazard, exposure, vulnerability and risk are fundamentally important for lives, livelihoods and infrastructure (Blong 1984; Lowenstern et al. 2022). By integrating these different types of maps, an understanding of risk from primary and secondary hazards emerges for people and the environment, including both natural and developed landscapes. There is a need to assemble, in a repository and centralized fashion, maps of infrastructure, commercial and industrial property, agriculture and wildlife, along with human population distributions. A central repository of such maps produced for active and potentially active volcanoes would therefore be a remarkable resource for many stakeholders including urban planners, developers, civil defense officials, politicians and the public. Without such information, proper planning and preparation for volcanic unrest and eruptions cannot be fully accomplished. It is a significant challenge to assemble and organize a comprehensive repository of such maps, including revised and updated versions. A repository for hazard maps can be found at https://volcanichazardmaps.org/. A useful analogy is the need to assemble, in a centralized fashion, a repository of material safety data sheets (MSDS) for a chemistry lab with 500 chemicals, due to the threat that the chemicals and waste products may pose to humans and the broader environment.
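As a schematic illustration of how such map layers can be combined, the sketch below multiplies gridded hazard, exposure and vulnerability layers into a relative risk surface; the arrays and the simple multiplicative weighting are hypothetical choices made for illustration, not a prescribed methodology.

```python
import numpy as np

# Hypothetical co-registered grid layers (values in [0, 1]); in practice these
# would be rasterised hazard, exposure and vulnerability maps for one volcano.
rng = np.random.default_rng(0)
hazard = rng.random((100, 100))         # e.g. probability of lahar inundation
exposure = rng.random((100, 100))       # e.g. normalised population density
vulnerability = rng.random((100, 100))  # e.g. building fragility index

# A simple multiplicative combination: risk is high only where all three are high.
risk = hazard * exposure * vulnerability

# Rank cells so planners can focus on the highest-risk areas first.
top = np.unravel_index(np.argsort(risk, axis=None)[-5:], risk.shape)
print("Five highest-risk cells (row, col):", list(zip(*top)))
```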
Links to individual volcano repositories
The best studied, most monitored and most active volcanoes typically have extensive data resources and data collections which are available electronically. Such repositories are discussed in greater detail below; here we underline the need for links to such volcanoes. The volcanoes could be chosen on the basis of their unrest, e.g., those with lava dome growth, those with significant lahar activity, etc. A challenge is to identify the key volcanoes whose data lend themselves to this type of comparative approach. Another challenge is accessing data, which requires an open, collaborative approach and philosophy.
The World Organization of Volcano Observatories (WOVO) is a group of institutions that monitor active volcanoes. These institutions and personnel are on the front lines of assessing volcanic unrest. Currently the organizations are listed on a website (https://wovo.iavceivolcano.org/). Although somewhat dated currently, this website has important links to other information sources such as WOVOdat (https://www.wovodat.org/). The potential for increased integration among volcano observatories is extremely high. A detailed directory of each organization could be assembled, containing key links to personnel, data and other resources. Well-linked organizations could share and exchange data easily. Easy and rapid communication channels such as Zoom and Microsoft Teams could be in place and made available as needed. Using such communication resources, expert solicitations could be efficiently and rapidly accomplished using a wide range of people when volcanic unrest occurs (Lowenstern et al. 2022). A well-linked observatory network could share hardware, software and instrumentation, while an exchange program of personnel could be established to foster collaborations and best practices (see for example https://www.ird.fr/workshop-virtuel-sur-lareponse-aux-eruptions-volcaniques-effusives). Dedicated funding for such activities would be an important achievement. The challenges and opportunities are numerous.
An integrated knowledge-information system for individual volcano systems
We now focus on individual volcano systems and propose a similar two-component structure to assembling information. The two components have clear links amongst a number of topics within these components. Although the "outreach and learning" and "research and using" components are different and distinct, they can be usefully integrated for a holistic view of a particular volcano system. The concepts are shown schematically in Fig. 2.
Outreach and learning
Public communication Providing knowledge and insight regarding a volcano's activity, its history and past unrest, and its resources is a significant challenge, requiring ongoing communication and discussion. Links to key stakeholders are an important component, as well as contact with schools and the use of social media. There is high potential for interesting feedbacks, e.g., communicating monitoring data, explaining hazard maps and the like. Helping people understand how a volcano observatory functions is also important, e.g., visiting the observatory and meeting observatory personnel, fieldtrips to monitoring sites and geoheritage sites, films and videos, etc. (see below).
Volcano activities Field guides and fieldtrips are an effective means of understanding a volcano. Such guides could be written, virtual and/or multimedia in nature. Trips could include visits to observatories, museums, monitoring sites, geological features and other geoheritage sites such as hydrothermal areas, protected zones and flora and fauna. In addition, an annual public meeting on the volcano could be created, possibly with different themes from one year to the next. Such meetings could include an assessment and presentations on the "state of the volcano." In the context of these activities, tourism could be developed and promoted, in the process building good links among scientists and researchers, tourist operators and the local chamber of commerce.
Volcano resources Such resources are commonly abundant, and many volcanoes, in particular their upper slopes, are protected areas or reserves. These are significant resources for public communication, for visits by community and school groups and for tourism and ecotourism.
Multimedia A series of films, videos and other multimedia products such as Youtube channels could be developed highlighting the different aspects of a volcano and its activity. Such a multimedia package might include the following elements: (1) focussing on the volcano's geologic history, past activity and current unrest; (2) demonstrating how scientists study and monitor the volcano, including the difficulties involved and the associated uncertainty in making forecasts; (3) documenting the volcano's resources; (4) showing how people live with the volcano, both in times of quiescence and during unrest and eruptive periods and the aftermath; (5) examining questions of EDI, illustrating how under-represented groups are involved in "non-traditional" activities such as fieldwork, monitoring and leadership. Such a package of multimedia products, especially if well structured and integrated together, could be a remarkable contribution to outreach and public understanding, and could also serve as a model for other volcanoes.
Research and using
Monitoring A number of active volcanoes are monitored using seismic, deformation, gas and webcam networks, which together provide a good view into the workings of a volcano. A very small number of volcanoes are highly instrumented; more commonly, only a few instruments monitor activity, even a single seismometer in some cases. Some active volcanoes have no instrumentation at all. Given limited resources, this is reasonable and understandable. In some observatories today, incoming data are livestreamed in real time, available to anyone. Such data streams are important for the monitoring effort and for research, also serving as outreach tools on the observatory website. This form of outreach can be supplemented with a daily briefing or discussion by a trusted delegate of the observatory.
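As an illustration of how openly streamed monitoring data can be accessed programmatically, the sketch below requests an hour of seismic waveform data through the standard FDSN web services using ObsPy; the data centre and station codes are placeholders chosen purely for illustration and do not refer to any particular observatory discussed here.

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

# Placeholder data centre and channel codes; a real request would use the codes
# published by the relevant observatory or FDSN data centre.
client = Client("IRIS")
t_end = UTCDateTime()    # now
t_start = t_end - 3600   # one hour ago

stream = client.get_waveforms(network="IU", station="ANMO", location="00",
                              channel="BHZ", starttime=t_start, endtime=t_end)
print(stream)                          # summary of the retrieved traces
stream.plot(outfile="last_hour.png")   # quick-look image, e.g. for an outreach page
```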
Hazard maps A repository of hazard maps for a volcano is similarly useful for monitoring, research and outreach. An archive could show various versions of past maps, with the current map or maps highlighted. Hazards could be depicted at different spatial scales, with zooms into particularly vulnerable or complex areas. Some maps are static, others are interactive. A website currently exists which incorporates many of these concepts (https://volcanichazardmaps.org/). Such a map repository could be expanded to include maps of exposure, vulnerability and risk.
Crisis modes An interesting aspect to consider is to establish a digital "corner" with information and data that pertain to times when a volcano is particularly active or in crisis mode. Such a corner, which could be an online forum open in digital space, could focus on issues such as expert elicitation, conveying and understanding uncertainty to scientists, officials and the public, key monitoring data, evolving hazard maps, public communication and forecasting efforts. Various scenarios which the volcano might follow in the near to medium term future could be explored, explained and tested. This corner could remain in "sleeping" mode during normal conditions and "re-activate" as the volcano reactivates. Many observatories use this practice on a regular basis where they provide eruption updates on their websites.
Volcano digital elevation models (DEM's)
An up-to-date DEM of the volcano can reside digitally for all to use. The DEM would be periodically updated and revised with new imagery including drone data (De Beni et al. 2019). It could be interactive, easily usable and downloadable. It would provide a common platform for anybody working on the volcano, including researchers, officials, planners and the public. Schools could find interesting uses of the DEM for their own purposes, and in doing so, teach students the fundamentals of modern digital cartography.
Digital datasets A series of datasets, including but not limited to monitoring data, meteorology, geological history, physical volcanology, petrology-geochemistry, geophysics and historical literature, are an invaluable tool for many different studies, including retrospective analysis, better understanding of how the volcano works and outreach and teaching purposes, both for scientists and the public. Where appropriate, these can be made open and accessible.
Concluding remarks
Volcano geoheritage has greatly evolved since a time when the only sources of information about volcanoes were comprehensive books written after years of observation and research. Access to this information was limited to those with research libraries or personal collections. Many of those at risk during volcanic eruptions had few of these resources, and observations were limited to scientists from the industrial nations. Now the world has opened up for the many resources available to those with online access: resources that include training, volcano data, advice and online observations of eruptions in real time. There are still books and they are used, but they are not limited to print runs and libraries. The new access to resources is potentially available to everyone, regardless of nationality or background. This engenders an optimistic view of the potential for volcano geoheritage.
We believe that exciting opportunities exist today to develop a new vision of volcano geoheritage for tomorrow. Clearly there are challenges about how to best implement the ideas outlined in this paper. We can imagine a number of different approaches which could help develop and jumpstart these concepts:
• An IAVCEI working group could map out a five-year implementation plan. The working group would have EDI principles embedded, both in its composition and in its mapping approach. The IAVCEI Commission on Volcano Geoheritage and Protected Volcanic Landscapes could be an appropriate vehicle.
• IAVCEI could provide seed funding for an initial mapping effort, in order to establish directions, priorities and mechanisms of implementation.
• As these concepts include and impact many sectors of society in different and diverse ways, donors could be approached to support the project financially.
• A number of national and regional funding agencies now have programs which specifically target outreach and education. A coordinated proposal-writing effort could be instituted among researchers from different countries.
• Some of the individual components discussed above could have their own funding mechanisms, e.g., a regional or global effort to develop and expand volcano DEM's.
• International agencies such as UNESCO could be approached for support. Not only would the project benefit volcanology, it could also be seen as a template and have appeal for other organizations and other disciplines within the IUGG umbrella and beyond.
While our ideas might seem daunting and difficult to achieve in terms of a fully holistic approach and a full integration of the concepts presented herein, they need not necessarily be so. They could be instituted step-wise, at different scales, and progressively. They could build from small-scale to large-scale. Some of the growth could be organic, in the sense of rapid growth and acceptance after the initial stages. In conclusion, it will be interesting and exciting to observe the development of volcano geoheritage over the next decade and beyond.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
| 9,972.2 | 2022-06-17T00:00:00.000 | [ "Environmental Science", "Geology", "Education" ] |
RANS- and TFC-Based Simulation of Turbulent Combustion in a Small-Scale Venting Chamber
A laboratory-scale chamber is convenient for combustion scenarios in the practical analysis of industrial explosions and devices such as internal combustion engines. The safety risks in hazardous areas can be assessed and managed during accidents. Increased hydrogen usage in renewable energy production requires increased attention to safety issues, since hydrogen produces higher explosion overpressures and flame speeds and can cause more damage than methane or propane. This paper reports numerical simulation of turbulent hydrogen combustion and flame propagation in the University of Sydney's small-scale combustion chamber. The chamber is used for the investigation of the interaction of a turbulent premixed propagating flame with several solid obstacles. Obstructions in the direction of flow cause a complex interaction of the flame front with the turbulence generated ahead of it. For numerical analysis, the OpenFOAM CFD software was chosen, and a custom-built turbulent combustion solver based on the progress variable model, flameFoam, was used. For validation purposes, the numerical results show that the pressure behaviour and flame propagation were well reproduced using the RANS and TFC models. The interaction between larger-scale flow features and flame dynamics was obtained in correspondence with the experimental or more detailed LES modelling results from the literature. The analysis revealed that, as the propagating flame reached and interacted with the obstacles, a recirculation wake was created behind the solid obstacles, leaving traces of unburned mixture. The expansion of the flame through narrow vents generates turbulent eddies, which cause wrinkling of the flame front.
Introduction
There is an increasing range of investigated future applications of hydrogen with the increasing focus on sustainable energy [1]. Hydrogen is sought to replace hydrocarbons as a sustainable energy carrier [2], and hydrogen-powered vehicles are being actively developed and tested [3,4].
Increasing the use of hydrogen necessitates managing the risks it poses. If hydrogen were to leak in a confined or semi-confined environment, it would form a combustible mixture with air. Depending on the conditions, this mixture could pose an explosive combustion or even detonation risk. Hydrogen produces higher explosion overpressures and flame speeds and can cause more damage than methane or propane.
In the worst-case scenario, a large premixed cloud of the combustible hydrogen-air mixture would form in the confined volume. The flame would propagate in this cloud from the ignition point. Interaction of flame and induced flow with the structural elements and other obstacles would result in turbulence generation. Interaction between the turbulence and flame could accelerate the latter, creating pressure shocks.
The premixed turbulent combustion problem is difficult due to complex interactions between fluid dynamics, mass/heat transport and chemistry. There are still unknowns in understanding the mechanisms of premixed turbulent combustion, and the prediction of the flame propagation velocity remains an unsolved issue, largely because of the flame-turbulence interaction. The flame-turbulence interaction is responsible for the burning rate, the rate of pressure increase and the achieved overpressure, the geometry of the accelerating flame front and the resulting structures in the flow field.
Researchers have studied various configurations in which the flame propagates through obstacles, inducing turbulence. Turbulence amplification results in fast propagation speeds and intense combustion [5]; therefore, accelerated flames drive pressure waves with large overpressure [6]. The burning rate grows due to the creation of vortical structures which stretch the burning surface area, thus increasing it [7].
More obstacles, resulting in a higher blockage ratio, give rise to more pronounced turbulence and a faster flame [8]. This can be explained by an increased number of vortical structures in the flow. The form of the obstruction is also important; sharp geometric edges induce vortex formation and vortex shedding, which result in strong mixing [7]. This paper reports numerical simulation of turbulent hydrogen combustion and flame propagation in the University of Sydney's vented small-scale combustion chamber. A laboratory-scale chamber is convenient for combustion simulation at higher resolution, allowing the study of the interaction between the flame and the main flow structures in greater detail.
The simulations were performed using OpenFOAM and flameFoam, a custom open source computational fluid dynamics (CFD) solver developed by the authors for the simulation of premixed turbulent combustion in hydrogen-air mixtures. There are simulations of combustion in vented small-scale chambers published in the literature with turbulence modelled according to the Large Eddy Simulation (LES) approach [8][9][10][11][12]. Furthermore, the mentioned research papers investigate sensitivity to the ignition source [8], comparison of mixtures [9,12], analysis of the equivalence ratio effect [11] and different configurations of baffles [9,10,12]. Most of them show the flame front structure. However, there is a lack of analysis of the interaction of the flame front with obstacles and the resulting larger-scale flow structures.
LES has a superior predictive capability compared to the unsteady Reynolds-averaged Navier-Stokes (URANS) approach; however, it is much more computationally demanding. Therefore, RANS usage is widespread in practical applications, where the computational cost of LES becomes prohibitive. Even when combustion takes place in large-volume compartments, for example, the containments of nuclear power plants, strong flame acceleration cannot be excluded and needs to be treated reliably [13]. However, due to the simplified turbulence treatment in the URANS case, simulation accuracy can be limited, and the approach needs to be extensively validated.
Validation motivates comparative numerical research based on the RANS method connected to turbulent flame propagation experiments. In relation to obstacle-driven turbulent flame acceleration, the suitability of the RANS method has been demonstrated in a number of cases. For example, in several works by different authors [14][15][16][17], numerical simulations of hydrogen flame propagation in a large-scale facility, the ENACCEF acceleration tube, were performed employing URANS and turbulent flame-speed closure approaches, with varying but generally satisfactory accuracy. In [18], URANS-based simulations were used to investigate the deflagration to detonation transition (DDT) process in a channel with arc obstacles. Tolias et al. [19] investigated and compared LES and URANS models for medium-scale hydrogen deflagration modelling and identified many benefits URANS may have over LES; for example, URANS models are easier to apply, and they are more effective.
Up to now, there has been a belief that URANS is used for large- or medium-scale experiments and mostly for application/practical or optimisation needs. Nevertheless, there is research suggesting that it can be not only effective but also accurate in modelling combustion phenomena and predicting flame structure in small-scale facilities. For example, URANS-based simulations were validated and employed to perform a detailed study of interactions between the flame and flow in a duct with obstacles [20]. Another recent work, [21], validated and used the URANS approach to study the mechanism behind DDT in hydrogen-air mixtures in a channel with obstacles. In [22], the need to include a partial flame-quenching model in URANS-based, application-oriented modelling of accelerating H2-CO-air flames in obstructed channels was evaluated based on DNS and experimental data. A recent review of CFD application in process safety [23] also lists a number of URANS method applications for combustion cases, including interaction with obstacles.
Satisfactory validation cases of flame-obstacle interactions presented in the mentioned and other works encourage the further application of the URANS approach to practical and analytical studies. However, given the turbulence treatment simplifications present in RANS, the universality of validation results can be questioned. Furthermore, turbulence-flame interaction, responsible for the flame acceleration in the URANS case, is often modelled by parametrising the turbulent burning velocity on computed turbulence parameters, increasing the accuracy demand on turbulence simulation. Therefore, to maintain a level of confidence in the methodology, validation in each specific case should be sought.
Simulation of a small-scale chamber using the URANS approach allows checking whether the obtained results are comparable not only to the experiment but also to the LES approach. At the same time, since larger flow structures can still be resolved in the RANS case, and the interaction of small-scale turbulence and the flame is parametrised through a combustion model, successful analysis of such a case still allows studying the interaction of the flame front with obstacles, the resulting larger-scale flow structures and the flame acceleration due to turbulence.
The Laboratory-Scale Chamber
The experimental test case from the University of Sydney is used here for the analysis of turbulent hydrogen combustion. The schematic diagram of the laboratory-scale combustion chamber is shown in Figure 1. The chamber measures 50 × 50 × 250 mm with a total volume of 0.625 litres. The chamber is equipped with three rows of baffles, each consisting of five 3 mm thick and 4 mm wide strips separated by 5 mm gaps, which gives an area blockage of 0.4. The rows of baffles are placed at 19 mm, 49 mm and 79 mm from the ignition source at the base of the chamber. The small solid obstacle has a square cross-section of 12 × 12 mm and is placed at 96 mm from the base of the chamber [9].
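The quoted volume and blockage ratio follow directly from these dimensions; the short Python check below reproduces them using only the numbers stated above.

```python
# Quick consistency check of the chamber volume and baffle blockage ratio
# using the dimensions quoted above.
width_mm, depth_mm, height_mm = 50.0, 50.0, 250.0
volume_litres = width_mm * depth_mm * height_mm / 1.0e6   # 1 litre = 1e6 mm^3
print(f"chamber volume: {volume_litres:.3f} l")            # -> 0.625 l

strips_per_row, strip_width_mm = 5, 4.0
blockage_ratio = strips_per_row * strip_width_mm / width_mm
print(f"area blockage per baffle row: {blockage_ratio:.2f}")  # -> 0.40
```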
Hydrogen and air enter the chamber through a non-return valve at atmospheric pressure, and the mixture is left to settle before the ignition. A moment before ignition, the flap at the top of the chamber is opened and remains open during the whole process to allow venting. In the experiment, the mixture is ignited by focusing the infrared output of an Nd:YAG laser 2 mm above the base. One of the Keller-type PR21-SR piezo-electric pressure transducers is placed in the base of the chamber and the other one is located in the wall, 64 mm from the top.
Flame propagates from the ignition point upwards and is accelerated by the turbulence induced when the flame encounters and interacts with the obstacles present in the chamber. According to the numerical flow velocities, the Re number reaches values up to and around 10^6 in the chamber, while Re_t ranges from several hundred to thousands during the main acceleration.
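As a rough order-of-magnitude illustration of the quoted Reynolds number, the chamber width can be combined with a characteristic flow velocity; the velocity and viscosity values below are assumptions made only for this estimate, not values reported in the text.

```python
# Order-of-magnitude estimate of the bulk Reynolds number in the chamber.
# Assumed values: U ~ 300 m/s (strongly accelerated flow), nu ~ 1.5e-5 m^2/s (air-like).
U = 300.0          # characteristic flow velocity, m/s (assumption)
L = 0.05           # chamber width, m (from the geometry above)
nu = 1.5e-5        # kinematic viscosity, m^2/s (assumption)
Re = U * L / nu
print(f"Re ~ {Re:.1e}")   # ~ 1e6, consistent with the magnitude quoted above
```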
flameFoam
Numerical calculations of premixed turbulent flame propagating past repeated obstructions were performed using a custom-built solver, flameFoam, built using the OpenFOAM toolkit. The solver is partially based on the buoyantPimpleFoam, rhoPimpleFoam and chtMultiRegionFoam solvers. OpenFOAM does not include a solver based on the progress variable and turbulent flame-speed closure approaches. flameFoam is publicly hosted on https://github.com/flameFoam/flameFoam (accessed on 4 September 2021). The governing equations are the compressible Navier-Stokes equations. The solved conservation equations for mass, momentum and energy are as follows:

∂ρ/∂t + ∇·(ρU) = 0,     (1)

∂(ρU)/∂t + ∇·(ρUU) = −∇p + ∇·τ_eff + ρg,     (2)

∂(ρh)/∂t + ∂(ρK)/∂t + ∇·(ρUh) + ∇·(ρUK) = ∂p/∂t + ∇·(α_eff ∇h) + ρU·g + S_h + S_c,     (3)

where ρ-density, t-time, U-velocity, τ_eff-shear stress, p-pressure, g-gravitational acceleration, h-enthalpy, K-kinetic energy, α_eff-effective thermal diffusivity, S_h-enthalpy source, S_c-combustion source. Combustion in the solver is modelled using a transport equation for the progress variable (Equation (4)) and the turbulent flame-speed closure (TFC) approach [24]. TFC is a simplified (compared to chemistry simulation) method with the source term expressed through the turbulent flame speed S_t (Equation (5)); it is suitable and extensively used for practice-oriented simulations and research where this method has been demonstrated to be appropriate. The turbulent flame speed S_t is usually estimated using empirical or analytical correlations with turbulence parameters or using a more complex approach.
∂(ρc)/∂t + ∇·(ρUc) = ∇·((µ_eff/Sc_T)∇c) + S_c,     (4)

S_c = ρ_u S_t |∇c|,     (5)

where c-progress variable, µ_eff-effective dynamic viscosity, Sc_T-turbulent Schmidt number, ρ_u-density of the unburnt mixture.
The progress variable is defined as:

c = (Y_H2,0 − Y_H2) / (Y_H2,0 − Y_H2,∞),

where Y_H2,0-initial hydrogen mass fraction, Y_H2-hydrogen mass fraction, Y_H2,∞-assumed final hydrogen mass fraction.
The progress variable can take values in the interval 0 ≤ c ≤ 1; the value 0 denotes the unburnt mixture, while the value 1 denotes the burned mixture.
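A direct implementation of this definition is straightforward; the sketch below assumes complete combustion (Y_H2,∞ = 0, as stated later for the present setup) and clips the result to the admissible interval.

```python
def progress_variable(y_h2, y_h2_0, y_h2_inf=0.0):
    """Progress variable c from the local and initial hydrogen mass fractions.

    c = (Y_H2,0 - Y_H2) / (Y_H2,0 - Y_H2,inf); c = 0 is unburnt, c = 1 is burned.
    """
    c = (y_h2_0 - y_h2) / (y_h2_0 - y_h2_inf)
    return min(max(c, 0.0), 1.0)   # keep c within 0 <= c <= 1

# Example: half of the initial hydrogen consumed locally
print(progress_variable(y_h2=0.01, y_h2_0=0.02))   # -> 0.5
```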
The turbulent flame speed was evaluated using the Bradley correlation [32]:

S_t = 0.88 u′ (Ka Le)^(−0.3),

where u′-RMS (fluctuating) velocity, Ka-Karlovitz stretch factor, Le-Lewis number. The fluctuating velocity is obtained from the turbulent kinetic energy:

u′ = sqrt(2k/3),

where k-turbulent kinetic energy. The Karlovitz stretch factor is evaluated as [32]

Ka = 0.157 (u′/S_L)^2 Re_T^(−0.5),   Re_T = u′ l_Bt/ν,

where S_L-laminar flame speed, Re_T-turbulent Reynolds number, ν-kinematic viscosity, l_Bt-Bradley turbulent length scale, ε-turbulent dissipation rate.
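The correlation can be evaluated directly from the local turbulence quantities. The sketch below is a minimal illustration: the coefficient values are the commonly quoted ones for the Bradley correlation, and the estimate of the turbulent length scale from k and ε is an assumption made here for the example.

```python
from math import sqrt

def bradley_turbulent_flame_speed(k, eps, s_l, le, nu):
    """Turbulent flame speed via the Bradley correlation (commonly quoted coefficients).

    k   - turbulent kinetic energy [m^2/s^2]
    eps - turbulent dissipation rate [m^2/s^3]
    s_l - laminar flame speed [m/s]
    le  - Lewis number [-]
    nu  - kinematic viscosity [m^2/s]
    """
    u_prime = sqrt(2.0 * k / 3.0)            # RMS (fluctuating) velocity
    l_t = u_prime**3 / eps                   # turbulent length scale estimate (assumption)
    re_t = u_prime * l_t / nu                # turbulent Reynolds number
    ka = 0.157 * (u_prime / s_l) ** 2 * re_t**-0.5   # Karlovitz stretch factor
    return 0.88 * u_prime * (ka * le) ** -0.3

# Illustrative input values only (not taken from the simulation):
print(bradley_turbulent_flame_speed(k=10.0, eps=1.0e4, s_l=2.0, le=0.4, nu=1.5e-5))
```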
Initial and Boundary Conditions
The present study focuses on the hydrogen-air mixture; it constitutes 22.65% of H2 and 77.35% of air. In the experiments, the hydrogen-air mixture is injected and allowed to rest. Therefore, initial turbulence is considered negligible and the initial turbulence parameters were set to extremely low values (e.g., 0.001 m²/s² for turbulent kinetic energy). Since the laminar combustion regime is not modelled, ignition was initiated by imposing an ignition radius of 0.0055 m at the bottom of the chamber. Initial conditions are selected according to the experiment; they are shown in Table 1. Model constants are selected according to the literature, while thermophysical properties depending on the initial composition of the mixture were calculated using an open-source suite of tools, Cantera [33]. Complete combustion was assumed, and the final hydrogen mass fraction was set to 0. The computational domain of the chamber has dimensions of 25 × 50 × 250 mm, with a symmetrical boundary condition in the x-z plane. While turbulence is inherently non-symmetric, experimental images do not display significant deviation from a symmetric flame [8]; therefore, this assumption should not significantly distort simulation results. The chamber domain consists of 50 × 100 × 500 cells in the x, y and z directions, respectively. The grid is structured and uniform, giving a grid size of ∆ = 0.5 mm. It is extended to 350 mm in the z direction and to 30 mm in the x and y directions at the top of the facility to facilitate venting simulation. For the present study, the BBBS configuration (three rows of baffles starting near the ignition point and a small square obstacle after the baffles) was used. The computational domain is presented in Figure 2. Turbulence was modelled using the k-ω SST model [34]. k-ω SST is composed of two zonally blended models, k-ε and k-ω. These two models are dynamically blended during the simulation since the k-ω model is more suitable for wall-bounded flows and k-ε for free-stream flows. This allows the k-ω SST model to appropriately describe turbulence in both zones, whereas the separately used k-ε model would perform worse than k-ω in the logarithmic region in equilibrium adverse pressure gradient flows. On the other hand, the standard k-ω model is sensitive to free-stream conditions and is not suitable for turbulence simulation in the region further from the surfaces.
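The mixture-dependent thermophysical inputs mentioned above can be obtained with a few lines of Cantera. The sketch below sets up the 22.65 vol% H2 mixture and evaluates the unburnt density and a constant-pressure adiabatic burnt state; the 21/79 O2/N2 split of air, the temperature value and the use of Cantera's bundled h2o2.yaml mechanism are assumptions for the example.

```python
# Minimal Cantera sketch for the unburnt mixture state and an adiabatic burnt state.
# Assumptions: air = 21% O2 / 79% N2 by volume, T = 293 K, p = 1 atm,
# Cantera's bundled hydrogen mechanism "h2o2.yaml".
import cantera as ct

x_h2 = 0.2265
x_air = 1.0 - x_h2
gas = ct.Solution("h2o2.yaml")
gas.TPX = 293.15, ct.one_atm, {"H2": x_h2, "O2": 0.21 * x_air, "N2": 0.79 * x_air}

print("unburnt density [kg/m^3]:", gas.density)
print("initial H2 mass fraction Y_H2,0:", gas.Y[gas.species_index("H2")])

gas.equilibrate("HP")   # constant-pressure adiabatic combustion
print("adiabatic flame temperature [K]:", gas.T)
```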
Adiabatic and no-slip boundary conditions were employed on the chamber walls and obstacles. The boundaries of the expanded upper part above the chamber were treated as outlets. Standard OpenFOAM boundary conditions for turbulence parameters at surfaces were used: kqRWallFunction (a zero-gradient wrapper) for the turbulent kinetic energy, and omegaWallFunction for the specific turbulent dissipation rate, the latter being a wall function that automatically calculates ω using viscous and inertial sublayer expressions depending on y+. nutkWallFunction, a k-based wall function that likewise switches between viscous and inertial sublayer expressions depending on y+, was used for the eddy viscosity at surfaces. Pressure and temperature were set to room values, and the standard OpenFOAM outlet condition was used for velocity and turbulence parameters at the outlet boundary.
The time step was automatically adjusted during the simulation run to keep the Courant number under 0.75. The simulation was performed using the Euler time discretisation scheme, and the model equations were discretised using the Gauss linear scheme for gradients, a second-order linear-upwind scheme for velocity, first-/second-order limited linear schemes for turbulence parameters and the second-order van Leer scheme for scalars.
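For orientation, the time-step magnitude implied by the Courant limit on the 0.5 mm grid can be estimated as dt ≈ Co·∆x/|U|; the flow velocity below is an assumed representative value, not a simulation output.

```python
# Rough time-step magnitude implied by the Courant-number limit.
co_max = 0.75        # maximum Courant number used in the simulation
dx = 0.5e-3          # grid size, m
u = 300.0            # representative flow velocity, m/s (assumption)
dt = co_max * dx / u
print(f"dt ~ {dt:.1e} s")   # ~ 1e-6 s, i.e. microsecond-scale time steps
```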
Two numerical grids were studied for the mesh independence analysis. The grid sizes were ∆ = 0.001 m and 0.0005 m, resulting in 866,200 and 6,845,200 cells, respectively. As mentioned before, the ignition radius was 0.0055 m; therefore, pressure evolution cannot be compared directly due to discrepancies of the discrete ignition area shapes between the meshes. Nevertheless, the velocity field distribution and flame front structure are compared in Figure 3. There are no major differences, except that the finer mesh gives more details of the flame structure and flow field distribution. As this numerical investigation is focused on the combustion-turbulence interaction, the finer mesh is more appropriate for further research.
With the finer mesh, during the main part of the simulated transient, y+ values were kept mostly between 1 and 5 for the bottom wall and between 30 and 70 for the side and front walls and obstacles.
Comparison with Experimental Results
URANS results presented in this paper are for unsteady turbulent premixed deflagrating flames propagating past obstructions in a chamber. The result of overpressure evolution is given in Figure 4, while Figure 5 presents a comparison of numerical and experimental flame arrival times at different heights.
The pressure was measured in the centre of the base of the chamber. Experimental results were extracted from [8]. The simulation was found to be in good agreement with the experiment. Modelling predicts the first pressure rise at around 3.57 ms, and a pressure peak at 4.3 ms was captured. Afterwards, the overpressure begins to drop.
The pressure rise is not interrupted in the numerical results, contrary to the experiment at 4 ms. This stagnation of the pressure increase is observed around the moment when flame fingers start to merge behind the small obstacle (at 3.9 ms and 4 ms in Figure 6). Interaction of flames and fresh, colder gas in the turbulent environment behind the obstacle might include a high rate of local quenching, which would lower the heat production and the rate of pressure increase. flameFoam does not support quenching simulation yet and is therefore not able to predict a decrease of the combustion rate in this situation. The moment of the overpressure peak corresponds to the time just before the flame exits the chamber. This is the moment after which the rate of combustion in the chamber decreases only due to the smaller flame surface area (flames near the walls catch up with the flame at the centre) and due to exhaustion of the combustible mixture: almost the whole initial mixture has been burnt, except a few small unburnt pockets. The correspondence of the maximum overpressure timing with flames reaching the compartment exit has also been shown in the literature [35].
The agreement between the flame propagation simulation and the experimental results (Figure 5) is very good up to 3.75 ms, which is when the interaction with the small obstacle started. The simulation overpredicts the flame acceleration caused by this interaction, possibly due to the missing local quenching modelling.
Study of Flame Propagation
This section's objective is to illustrate the typical flame behaviour in a vented channel with repeated obstructions. Figure 6 shows a sequence of images of the propagating flame development at different times. After the first baffle, the flame tends to propagate in finger-like shapes. The laminar-like fingers are consistent with the experimental works of Alharbi et al. [36] and Masri et al. [37], where LIF-OH images of hydrogen flames are given. This structure is not wrinkled much because the turbulence level is still low. However, the expanding unburned mixture generates vortices behind every baffle; consequently, the vortices interact with the flame front and distort it.
After passing the second baffle at 3.1 ms, the flame is accelerated and distorted even more due to a strong interaction between vortices and the flame front. Finger-like flame front shapes merge in the middle part of the chamber, resulting in lateral propagation towards the walls.
The evolution of the turbulent flame is shown in Figure 7 in terms of the progress variable. After ignition, the leading edge of the flame front starts to expand hemispherically and elongates in the z direction. Upon reaching the first baffle plate, the hemispherical laminar flame shape is distorted due to protrusion through the narrow vents and starts to roll up; thus, turbulent combustion begins as the flame is compressed and expanded in order to pass through obstacles. As the flame is distorted, the surface area of the flame increases; therefore, more combustible mixture is consumed, and a higher flame propagation velocity develops.
Turbulent structures are generated in the wake of each baffle, as shown in Figure 8. The intensity of vorticity increases with each obstruction as the flame front propagates through; therefore, stronger vortices produce larger and faster recirculation regions behind the subsequent obstructions, increasing the flame surface and the combustion rate. Vortices formed ahead of the flame front wrinkle the flame, thus enhancing the transport of mass and heat, and also disrupt the flame. At 3.5 ms, the leading edge of the flame front reaches the last baffle, as presented in Figures 6 and 7. The flame front forms finger-like shapes again. Nevertheless, this time, the fingers do not merge in the middle of the chamber; they are stretched and wrinkled due to the induced turbulence. The flame/vortex interaction is clearly seen in the last frame of Figure 8, as the flame initially tries to propagate around the vortex but then is suddenly drawn into the vortex core. Afterwards, the flame front reaches the square obstacle at 3.7 ms, and from that time, the overpressure increases rapidly (see Figure 4). After the flame fingers encounter the square obstacle, they are directed around the obstacle and wrap around it at a very high speed. Although the solid square obstacle does not induce turbulence as much as the baffles, it increases the blockage ratio and distorts the development of the flame front. When the flame propagates through the last obstacle, the wrinkled flame front becomes reconnected in the recirculation region, creating a pocket of unburned mixture behind the obstacle, a feature of obstacle-flame interaction expected from previous experimental work [38], and then the flame spreads towards the chamber exit.
The described flame shape evolution was also reproduced in LES studies [8,9,12,39,40], which indicates that the URANS simulation prediction is adequate and that it resolves the relevant turbulent flow structures. However, none of the LES studies described the flame-vortex interaction in detail.
Since the obstacles and vortical structures induced behind them wrinkle the flame and disrupt front continuity, flame pockets consuming remaining unburned gases are formed (visible in Figure 6 as well), a feature of flame and obstacle-induced turbulence interaction confirmed by more detailed LES modelling in previous works [41][42][43]. At the same time, the unburned mixture is trapped near the walls at various stages of combustion. Even at the last shown moment in Figure 6 (t = 4.5 ms), there are several flame/unburned mixture pockets alongside the walls. This could be due to the flame/vortex interaction, which directs the flame front to the centre of the chamber, as well as decreasing flow rates and turbulence towards the walls.
Conclusions
The research presented in this paper studied premixed hydrogen-air mixture flame propagation in a small-scale combustion chamber. A custom-built turbulent combustion solver, flameFoam, based on the progress variable model, was applied. According to the numerical results, the solver can adequately reproduce pressure behaviour. The simulation correctly predicted the maximum overpressure of 0.8 bar and its timing. The solver was not able to predict a brief period of pressure stagnation, possibly due to missing support for quenching simulation in flameFoam.
The performed simulation complements the available body of work related to validation of the URANS method for simulation of the interaction between obstacle-induced turbulence and flame. The suitability and limitations of the flameFoam solver have also been demonstrated.
Flame propagation investigation showed that vortices are formed behind every obstruction. The vorticity intensity increases with further obstacles as the flame front propagates through them due to increased flow and flame velocities. Therefore, a positive feedback loop is formed: with increasing velocities, the flame front is perturbed and stretched by strengthening vortices, thus inducing turbulence as well as increasing the burning rate and flame propagation velocity. The flame/vortex interaction results not only in a wrinkled flame front but also in the flame being pulled into vortices, consequently intensifying the mixing of the unburned mixture (vortex core) and the burned mixture.
Further validation of RANS/TFC simulations of a small chamber with different obstacle configurations is required, including support for local quenching modelling. It would be interesting to perform both RANS and LES simulations with equivalent combustion models to see the actual extent of RANS limitations in given conditions. | 6,689.2 | 2021-09-10T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
BCL11B Regulates Epithelial Proliferation and Asymmetric Development of the Mouse Mandibular Incisor
Mouse incisors grow continuously throughout life with enamel deposition uniquely on the outer, or labial, side of the tooth. Asymmetric enamel deposition is due to the presence of enamel-secreting ameloblasts exclusively within the labial epithelium of the incisor. We have previously shown that mice lacking the transcription factor BCL11B/CTIP2 (BCL11B hereafter) exhibit severely disrupted ameloblast formation in the developing incisor. We now report that BCL11B is a key factor controlling epithelial proliferation and overall developmental asymmetry of the mouse incisor: BCL11B is necessary for proliferation of the labial epithelium and development of the epithelial stem cell niche, which gives rise to ameloblasts; conversely, BCL11B suppresses epithelial proliferation, and development of stem cells and ameloblasts on the inner, or lingual, side of the incisor. This bidirectional action of BCL11B in the incisor epithelia appears responsible for the asymmetry of ameloblast localization in developing incisor. Underlying these spatio-specific functions of BCL11B in incisor development is the regulation of a large gene network comprised of genes encoding several members of the FGF and TGFβ superfamilies, Sprouty proteins, and Sonic hedgehog. Our data integrate BCL11B into these pathways during incisor development and reveal the molecular mechanisms that underlie phenotypes of both Bcl11b−/− and Sprouty mutant mice.
Introduction
Tooth initiation in the mouse is characterized by a thickening of the oral epithelium at embryonic day (E) 11.5. The proliferating epithelium invaginates into the underlying neural crest-derived mesenchyme and forms a bud at E12.5-E13.5 (bud stage). The epithelium expands and folds around the condensed mesenchyme to form a cap-like structure at E14.5 (cap stage). The cap stage is characterized by formation of the enamel knot, a critical signaling center, and lateral protrusions of the epithelium, known as cervical loops (CLs). CLs extend during bell stage (E16.5-E18.5), at which point cytodifferentiation begins [1,2,3,4].
Continuous growth of the rodent incisor requires the presence of epithelial and mesenchymal stem cells that provide a continuous supply of enamel-producing ameloblasts and dentin-producing odontoblasts, respectively. Epithelial stem cells (EpSCs) are slow-cycling cells located in the CLs [5,6,7]. The labial CL consists of a core stellate reticulum (SR) and stratum intermedium cells surrounded by basal epithelial cells, known as the inner and outer enamel epithelium (IEE and OEE, respectively) [8]. EpSCs reside in the labial CL and give rise to transit amplifying cells that migrate anteriorly along the IEE while sequentially differentiating to mitotic pre-ameloblasts, post-mitotic secretory ameloblasts, and mature ameloblasts [9]. The lingual CL contains a smaller EpSC niche, which does not give rise to ameloblasts, resulting in a complete lack of enamel deposition on the lingual aspect of the rodent incisor. Thus, enamel, the hardest substance in the body, is secreted uniquely on the labial aspect of the incisor. This leads to preferential abrasion of the lingual incisor surface during feeding, counteracting the continuous growth of the mouse incisor to produce an incisor of fixed length [10].
Tooth development is regulated by sequential and reciprocal signaling between the epithelium and mesenchyme and is accompanied by patterning and differentiation of specialized cell types at distinct anatomical locations. A complex network of fibroblast growth factors (FGFs) and transforming growth factors (TGFβ) regulates proliferation and differentiation of EpSCs during development. The antagonists of these pathways, Sprouty (Spry) proteins and Follistatin (FST), respectively, also regulate EpSC niche development, and the growth and asymmetry of the mouse incisor [6,8].
CTIP2/BCL11B (BCL11B hereafter) is a transcription factor that plays essential roles in the development of the immune [11,12], central nervous [13,14], and cutaneous [15] systems and is required for perinatal survival [11]. Bcl11b−/− incisors and molars are poorly developed and exhibit a hypoplastic SR. Ameloblasts do not differentiate properly on the labial side, and ectopic ameloblast-like cells form on the lingual side of the Bcl11b-null incisor [16].
Our analyses of Bcl11b−/− mice revealed that BCL11B plays important roles throughout incisor development. Mice lacking Bcl11b exhibit epithelial proliferation defects early in development, which ultimately impact incisor size and shape. BCL11B also controls formation of both labial and lingual epithelial stem cell niches and differentiation of ameloblasts. However, BCL11B does so in a bidirectional manner: promoting development and differentiation of the epithelium on the labial side while suppressing that on the lingual side, which strongly enforces asymmetric ameloblast development in the mouse mandibular incisor.
Histological Analysis, RNA in Situ Hybridization, and Immunohistochemistry
Embryonic heads were fixed in 4% paraformaldehyde, cryopreserved in 30% sucrose, and frozen in O.C.T. Hematoxylin and eosin (H&E) staining and RNA in situ hybridization (ISH) with digoxigenin-labeled probes were performed according to standard protocols on 16 µm-thick sagittal sections. Immunohistochemistry using anti-BCL11B (Abcam, 1:300) was performed as described [23].
Cell Proliferation Assay
Pregnant mice (E11.5-E16.5) were injected intraperitoneally with 100 µl of 5 mg/ml BrdU solution per 100 g of body weight and sacrificed after 2 h. Cryopreserved heads were serially sectioned (10 µm), and an anti-BrdU antibody (Accurate Chemical, 1:100) was used to detect BrdU incorporation. The BrdU index was calculated as the mean relative amount of BrdU-positive cells as a fraction of total, DAPI-positive cells. An unpaired, two-tailed Student's t-test was used to determine statistical significance. At least six sections from a minimum of three animals per genotype and age were analyzed.
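The described quantification reduces to a per-section ratio followed by an unpaired, two-tailed t-test; a minimal sketch with hypothetical counts (SciPy assumed available) is shown below.

```python
# Minimal sketch of the BrdU index calculation and the unpaired, two-tailed t-test.
# The per-section counts below are hypothetical placeholders, not data from the study.
import numpy as np
from scipy import stats

def brdu_index(brdu_positive, dapi_total):
    """BrdU-positive cells as a fraction of all DAPI-positive cells, per section."""
    return np.asarray(brdu_positive, dtype=float) / np.asarray(dapi_total, dtype=float)

wt = brdu_index([52, 48, 55, 50, 47, 53], [100, 95, 105, 98, 92, 101])
ko = brdu_index([28, 25, 30, 27, 24, 29], [101, 97, 103, 99, 95, 100])

t_stat, p_value = stats.ttest_ind(wt, ko)   # unpaired, two-tailed by default
print(f"WT {wt.mean():.1%} +/- {wt.std(ddof=1):.1%}, "
      f"KO {ko.mean():.1%} +/- {ko.std(ddof=1):.1%}, p = {p_value:.2e}")
```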
BCL11B is Expressed at all Stages of Incisor Development
BCL11B is expressed in the ectoderm of the first branchial arch at E9.5 and E10.5 and in the molar at all stages of development [16]. To determine the function of BCL11B in the developing incisor, we analyzed BCL11B expression on sagittal sections of the mandibular incisor from E11.5 to birth. At initiation (E11.5) and early bud (E12.5) stages, BCL11B was expressed in the thickened epithelium; lower levels of BCL11B were detected in the underlying mesenchyme (Figs. 1A and B). High levels of BCL11B persisted in the dental epithelium at cap stage (E14.5), whereas mesenchymal cells surrounding CLs and the follicle continued to express lower levels ( Fig. 1C). At early (E16.5) and late (E18.5) bell stages, BCL11B was detected in the lingual epithelium and in the labial OEE, and at lower levels in the papillary mesenchyme surrounding both CLs, dental follicle, SR, and ameloblasts at all stages of differentiation (Figs. 1D, E and S1). BCL11B was expressed in the tissue surrounding the tip of the incisor and the vestibular lamina, an invagination of the oral epithelium that gives rise to the oral vestibule (Figs. 1C and D).
Several signaling molecules and transcription factors orchestrate invagination of the epithelium between initiation and early bud stages. For example, BMP4, a critical signaling molecule that regulates tooth initiation and morphogenesis [24], is expressed in the dental epithelium and underlying mesenchyme during the initiation of tooth development at E11.5 (Fig. 2K). Bmp4 expression largely shifts to the dental mesenchyme by early bud stage in wild-type mice (Fig. 2M) [3,25]. Alterations in Bmp4 expression were not detected in Bcl11b−/− incisors at E11.5 (Fig. 2L). However, the Bcl11b−/− epithelium failed to downregulate expression of Bmp4 at E12.5 (Fig. 2N). The expression patterns of other critical signaling molecules and transcription factors, including Activin, Shh, Pax9, and Msx1, were not altered in Bcl11b−/− incisors at the bud stage (Fig. S2).
Altered Development of Bcl11b−/− Incisors at Cap Stage
The wild-type mandibular incisor is characterized by a cap-like shape of the dental epithelium at E14.5, with an enamel knot in the center and protruding CLs (Fig. 3A). The enamel knot is a transitory signaling center that is characterized by minimal proliferation and clearly defined apoptosis [26,27]. In contrast, the CLs are highly proliferative with a low apoptotic index [28]. The Bcl11b−/− incisor exhibited a delay in epithelial invagination and protrusion of both CLs at E14.5 (Fig. 3B). BrdU-labeling studies revealed that the incisor epithelium of Bcl11b−/− mice was hypoproliferative compared to that of wild-type mice (27.6±4.1% and 51.8±2.7% BrdU-positive cells, respectively), whereas mesenchymal proliferation appeared unchanged from controls (Fig. 3D and E).
Proliferation of the dental epithelium at cap stage is controlled in part by FGF10, which is derived from mesenchymal cells of the dental papilla [29]. The Bcl11b−/− papillary mesenchyme was essentially devoid of Fgf10 transcripts at early E14, and ectopic Fgf10 expression was noted between the dental epithelium and vestibular lamina, the latter of which exhibited impaired invagination (Fig. 3F, G). The delay of initiation of mesenchymal Fgf10 expression may contribute to decreased dental epithelial proliferation, delayed invagination of the dental epithelium, and subsequently decreased size of the mutant incisor.
Apoptotic cells were predominantly localized in the enamel knot of wild-type incisors at E14.5 (Fig. S4A). However, very few apoptotic cells were detected in the Bcl11b−/− enamel knot. Finally, Bcl11b−/− incisors were approximately one-half the length of wild-type incisors at birth and correspondingly narrower across the entire tooth (Fig. S5).
These results demonstrate that BCL11B plays an important role in development of the labial CL and differentiation of ameloblasts, while simultaneously suppressing these processes on the lingual side of the incisor.
Delay in Ameloblast Development and Ectopic Formation of Lingual Ameloblast-like Cells in Bcl11b−/− Incisors
To determine if labial ameloblasts and lingual ameloblast-like cells underwent differentiation in Bcl11b−/− incisors, we examined expression of sonic hedgehog (Shh) and amelogenin (Amelx), markers of pre-ameloblasts [6,30] and mature ameloblasts [31], respectively. Shh expression was observed in a gradient along the length of the labial IEE of wild-type incisors at E16.5 and E18.5, with the most intense staining in the posterior region (Fig. 5A, C). Shh expression was greatly reduced in the labial epithelium of Bcl11b−/− mice, and ectopic Shh transcripts were detected in the lingual epithelium at E16.5 and E18.5 (Figs. 5B and D). The expression pattern of Gli1, a mediator of SHH signaling [32], reflected changes in Shh expression in the mutant incisor (Fig. S6).
Amelx expression was greatly reduced in the labial epithelium of Bcl11b−/− mice at E16.5 (compare Figs. 5E and F) but recovered to a level similar to that of wild-type mice by E18.5 (Fig. 5G). Ectopic Amelx expression was observed in the anterior lingual IEE. These results demonstrate that BCL11B plays a key role in the establishment and/or enforcement of developmental incisor asymmetry and cellular differentiation within the ameloblast lineage.
Alteration of FGF Signaling at Bell Stage in Bcl11b−/− Incisors
Asymmetric development of the CLs is controlled by several signaling pathways. FGFs and their intracellular antagonists, the Sprouty proteins, are crucial for proper development of the labial and lingual CLs [6]. FGF3 and FGF10 are key mesenchymal instructive signals that cooperatively stimulate proliferation of the incisor epithelium at bell stage [5,33,34]. The Fgf3 expression domain, which is located in the mesenchyme just anterior to the labial CL in wild-type mice, was absent in Bcl11b−/− mutants at E16.5 and E18.5. However, Fgf3 was ectopically expressed in the mesenchyme adjacent to the lingual CL in Bcl11b−/− mice (Figs. 6B and D). The expression pattern of Fgf10 was altered in Bcl11b−/− incisors in a manner that was qualitatively similar to that of Fgf3 (Figs. 6F and H).
Epithelial FGF9 forms a positive-feedback signaling loop with mesenchymal FGF3 and FGF10 on the labial side of the wild-type incisor [6]. Fgf9 RNA was detected anterior to the labial CL of the wild-type incisor (Figs. 6I and K). This Fgf9-positive domain was reduced in Bcl11b−/− incisors, and ectopic expression of Fgf9 was detected in the lingual epithelium at E16.5 and E18.5 (Figs. 6J and L).
Sprouty proteins are responsible, in part, for inhibition of ameloblast differentiation in the lingual epithelium [6]. Spry4 RNA was detected in the mesenchyme adjacent to the labial CL, and at lower levels in the posterior lingual and labial epithelium, in wild-type mice at E16.5 and E18.5 (Figs. 6M and O). Spry4 expression was up-regulated in the lingual basal epithelium and underlying mesenchyme of Bcl11b−/− incisors at E16.5 and E18.5, and down-regulated on the labial side of the developing Bcl11b−/− incisor at both developmental stages (Figs. 6N and P). Spry2 expression was up-regulated in the lingual CL and slightly down-regulated in the labial epithelium of Bcl11b−/− mice at E16.5 and E18.5 (Figs. 6R and T).
Mesenchymal FGF10 stimulates expression of Lunatic Fringe (Lfrn), which encodes a secretory molecule that modulates the Notch pathway [5]. To determine if the Notch pathway was altered in Bcl11b−/− incisors at bell stage, we examined expression patterns of Lfrn and Notch1. Lfrn RNA was detected predominantly along the length of the IEE and in the posterior OEE of wild-type incisors at E16.5 and E18.5 (Figs. S7A and C). Lfrn expression was down-regulated at the posterior end of the labial CL of Bcl11b−/− incisors at both E16.5 and E18.5 (Figs. S7B and D). Ectopic Lfrn expression was detected in the posterior part of the mutant lingual epithelium at E18.5 (Fig. S7D). Loss of BCL11B did not affect the level of expression or localization of Notch1 transcripts. However, Notch1 expression reflected the morphological expansion and contraction of the lingual and labial SR, respectively, in Bcl11b−/− incisors (Figs. S7E-H).
These findings highlight dysregulation of the FGF signaling pathways as being central to the incisor phenotype of Bcl11b−/− mice. Asymmetric expression of Fgf3 and Fgf10 contributes to asymmetric development of the labial and lingual EpSC niches [28]. Thus, the complete reversal of asymmetric Fgf3 and Fgf10 expression, together with that of Fgf9, likely underlies the enhanced and repressed development of the lingual and labial CLs, respectively, in Bcl11b−/− mice.
Disruption of TGFβ Signaling at Bell Stage in Bcl11b−/− Mice
The TGFβ family members, BMP4 and activin βA, and the antagonist FST play key roles in the generation and maintenance of asymmetric ameloblast localization during incisor development. For example, FST inhibits ameloblast differentiation on the lingual side of the incisor, whereas BMP4 promotes it on the labial side. In contrast, activin enhances development of the labial CL, whereas BMP4 limits CL growth [24,28,35].
Bmp4 expression was detected predominantly in the labial mesenchyme, anterior to the labial CL, in wild-type mice at E16.5. Lower levels of Bmp4 transcripts were present in the mesenchyme underlying the lingual epithelium and in an anterior region of the ameloblast layer (Fig. 7A). The boundaries of mesenchymal Bmp4 expression were disrupted in Bcl11b−/− incisors at E16.5, with ectopic expression noted in the mesenchyme posterior to the lingual CL. Expression of Bmp4 in the ameloblast layer appeared reduced in mutants at this stage (Fig. 7B). At E18.5, Bmp4 transcripts were detected predominantly in the labial epithelium, in a wide region of labial mesenchyme, and at lower levels on the lingual side of the wild-type incisor (Fig. 7C). Bmp4 expression increased uniformly in all of these domains in Bcl11b−/− mutants at E18.5 (Fig. 7D).
Activin expression was restricted to the labial mesenchyme directly underlying the posterior epithelium, within the tip of the labial CL, and in the posterior part of the dental follicle in wild-type mice at E16.5 and E18.5 (Figs. 7E and G). Activin expression was lost within the labial mesenchyme and epithelium in Bcl11b−/− mice at both developmental stages. However, ectopic mesenchymal expression of activin was observed around the lingual CL, and follicular expression of activin appeared to be delocalized in the Bcl11b−/− incisors at E16.5 and E18.5 (asterisks in Figs. 7F and H).
[Figure 4 legend fragment: stages of progression of ameloblast differentiation in L, N, P-S, U-X. The Bcl11b−/− labial CL was smaller than that of wild-type mice by 60% at E16.5 and by 69% at E18.5 (p ≤ 0.001, n = 11), whereas the mutant lingual CL was enlarged by 35% at E16.5 and by 64% at E18.5 relative to wild-type tissue (p ≤ 0.001, n = 11). (I-J) BrdU immunostaining (green) in sections of wild-type and Bcl11b−/− mice at E16.5. All sections were counterstained with DAPI (blue). The epithelium is outlined in white. (K) BrdU index of wild-type and Bcl11b−/− basal epithelium of labial and lingual CLs; ns, not significant; *** denotes statistical significance at p ≤ 0.001, n = 3. Scale bars: (A-S, U-X) 200 µm; other panels, 100 µm. a, ameloblasts; cl, cervical loop; lab, labial; lin, lingual. doi:10.1371/journal.pone.0037670.g004]

Fst transcripts were observed in the OEE on the labial and lingual sides at E16.5 (Fig. 7I; see also [35]). Fst expression in the OEE persisted at E18.5, and Fst transcripts were also detected in highly defined domains at the anterior epithelial tip of the incisor on both labial and lingual sides (Fig. 7K; data not shown). In contrast, Fst transcripts were diffusely distributed throughout the labial and lingual epithelium of Bcl11b−/− incisors, particularly at the anterior (incisal) tip of the epithelium, and in the papillary mesenchyme at E16.5 (Fig. 7J). Fst expression within the posterior region of the wild-type incisor at E18.5 was indistinguishable from that of Bcl11b−/− mice (data not shown). However, we noted a dramatic expansion of the Fst expression domain within the anterior labial epithelium at E18.5. Additionally, Bcl11b−/− incisors failed to extinguish Fst expression along the length of the labial OEE at E18.5 (Fig. 7L).
Cell Autonomous Effects of BCL11B in Lingual Epithelium
BCL11B is expressed in both the ectoderm-derived epithelium and the neural crest-derived mesenchyme (Fig. 1). We created lines conditionally null for Bcl11b expression in both germinal layers to determine the expression domain responsible for BCL11B-mediated suppression of ameloblast differentiation in the lingual epithelium. Mice harboring an epithelial-specific deletion of Bcl11b (Bcl11b ep−/−), which were created by crossing floxed Bcl11b L2/L2 mice with the K14-cre deleter strain [21], clearly lacked BCL11B in the entire dental epithelium (Figs. 8A and B). However, specific BCL11B expression persisted in the dental mesenchyme and other non-epithelium-derived tissues. Bcl11b ep−/− mice expressed the pre-ameloblast marker Shh with an ectopic gradient along the length of the lingual epithelium at E16.5 (Figs. 8C and D), and this persisted at a lower level at E18.5 (Fig. S8B). However, Bcl11b ep−/− mice did not express Amelx in the lingual epithelium at either E16.5 (Fig. 8F) or E18.5 (Fig. S8E). Considered together, these results suggest that Bcl11b ep−/− mice initiate but do not complete ameloblast differentiation within the lingual dental epithelium.
Next, we examined the expression of several genes encoding signaling molecules to determine the effect of epithelium-specific inactivation of Bcl11b on generation of labial-lingual asymmetry at E16.5. A low level of ectopic expression of Fgf3, Fgf9, and activin was detected on the lingual side of the Bcl11b ep−/− incisor, and this was qualitatively similar to Bcl11b−/− incisors (compare Figs. 8G-L, Fig. 6B and F, and Fig. 7F). The size and shape of the Bcl11b ep−/− incisors were similar to control incisors (see Fig. 8), and the slight variations in lingual gene expression in Bcl11b ep−/− incisors did not result in altered amelogenesis as determined by X-ray micro-CT radiography performed on P21 mandibles (Fig. S9).
These data indicate that epithelial, but not mesenchymal Bcl11b expression is required for suppression of ectopic pre-ameloblast formation in the lingual epithelium. However, loss of Bcl11b in the epithelium is not sufficient for the lingual pre-ameloblasts to persist or to undergo further differentiation into mature, Amelx-positive ameloblasts.
FGF Signaling Negatively Regulates BCL11B Expression in the Lingual IEE and SR
The ectopic development of lingual pre-ameloblasts expressing Shh in Bcl11b−/− mice is similar to that reported in Spry4−/−; Spry2+/− mice. Loss of Sprouty gene expression results in abnormal FGF gene expression and establishment of a FGF positive-feedback signaling loop on the lingual side of the incisor. In addition, Spry4−/−; Spry2+/− mice were characterized by upregulated expression of Etv4 and Etv5 (previously known as Pea3 and Erm), which are considered to be transcriptional targets of FGF signaling, and indicative of activation of the FGF signaling pathway(s) in mutant incisors [6,36,37]. We assessed expression of BCL11B in incisors from Spry4−/−; Spry2+/− embryos in order to determine if the FGF signaling pathway(s) regulates BCL11B expression.
BCL11B was highly expressed in the entirety of the wild-type lingual epithelium at E16.5, including the CL, anterior OEE, and IEE (Fig. 9A; see also Fig. 1D). In contrast, BCL11B protein levels were dramatically decreased in the Spry4−/−; Spry2+/− incisor, particularly within the lingual IEE and SR, while BCL11B levels in the lingual OEE and labial epithelium were largely unaffected (Fig. 9B).
Expression of Tbx1, which is also important for incisor developmental asymmetry, was increased in the lingual epithelium of Spry4−/−; Spry2+/− incisors [38]. These findings prompted us to examine Tbx1 expression in Bcl11b−/− mice. Tbx1 was predominantly expressed in the posterior basal epithelium on the labial side of wild-type incisors at both E16.5 and E18.5, and diffusely at a much lower level in the lingual epithelium (Figs. S11A and C). We observed striking up-regulation of Tbx1 expression in the lingual IEE of Bcl11b−/− mice at E16.5 and E18.5 (Figs. S11B and D), suggesting that BCL11B directly or indirectly represses Tbx1 expression in the lingual epithelium, and that up-regulation of Tbx1 expression in Spry4−/−; Spry2+/− mice [38] may occur through down-regulation of BCL11B protein levels. These findings place BCL11B downstream of FGF signaling and upstream of Tbx1 expression in the lingual epithelium of the developing incisor. Tbx1 expression was severely decreased in the labial epithelium of Bcl11b−/− mice at E16.5 (Fig. S11B). Labial expression of Tbx1 in the Bcl11b−/− incisor recovered by E18.5 (Fig. S11D), suggesting that another factor(s) may compensate for loss of BCL11B expression in the control of Tbx1 expression in the labial epithelium. The above findings suggest that the FGF signaling pathways regulate BCL11B expression in the lingual epithelium, and we hypothesized that inactivation of FGF signaling may lead to up-regulation of BCL11B expression within the labial IEE. In order to test this hypothesis, we assessed BCL11B expression in Fgf3−/−; Fgf10+/− incisors; however, BCL11B immunostaining was indistinguishable from wild-type incisors (Fig. S12). It is conceivable that another FGF family member(s) may compensate for loss of Fgf3 expression and partial loss of Fgf10 expression by enforcing the repression of BCL11B expression within the labial IEE [39]. Indeed, Fgf3−/−; Fgf10+/− and wild-type incisors are nearly identical in size (Fig. S12), suggesting that loss or partial loss of these two signaling molecules did not compromise proliferation during incisor development. Finally, it is possible that regulation of BCL11B expression within the labial epithelium may not involve the FGF signaling pathways, as was clearly evident on the lingual side (Fig. 9B).
Discussion
The studies reported here demonstrate that the transcription factor BCL11B participates in several essential aspects of mouse incisor development. First, BCL11B controls epithelial proliferation, which ultimately impacts the size and shape of the incisor. Second, BCL11B plays a key role in the establishment and maintenance of labial-lingual asymmetry by regulating the expression of several key signaling molecules and transcription factors. Third, BCL11B is essential for the proper formation, differentiation, and localization of ameloblasts.
To our knowledge, this is the first report of a transcription factor that integrates developmental control of both labial and lingual EpSC niches and ameloblasts. Such regulation appears to be bidirectional: BCL11B stimulates the development of the labial CL by enhancing the expression of key signaling molecules on the labial side while limiting development of the lingual CL by repressing the expression of the same signaling molecules on the lingual aspect. Subsequently, BCL11B promotes differentiation of the labial, EpSC-derived IEE cells into mature ameloblasts and blocks ectopic formation and differentiation of the lingual IEE into cells of the ameloblast lineage.
BCL11B Regulates Proliferation of the Dental Epithelium
BCL11B is initially required for proper transition from initiation to early bud stage of tooth development. Specifically, BCL11B is necessary for the proper timing of epithelial proliferation, invagination, and down-regulation of epithelial Bmp4 expression. During tooth initiation, BMP4 is secreted by the epithelium and induces the mesenchymal expression of genes (Msx1, Msx2, and Bmp4) that further direct incisor formation [4]. While the significance of down-regulation of Bmp4 expression in the dental epithelium at bud stage is unknown, overexpression of Bmp4 in the distal respiratory epithelium results in decreased epithelial proliferation [40]. Thus, a delay in down-regulation of epithelial Bmp4 expression may contribute to reduced proliferation of the dental epithelium between initiation and bud stages.
The size and shape of the incisor is tightly regulated by the opposing forces of cellular proliferation and apoptosis, both of which are controlled by signaling pathways. FGF10 induces a mitogenic response in dental epithelial cells [29], and a delay in induction of Fgf10 in Bcl11b−/− incisors may further contribute to the proliferation defect in the Bcl11b−/− dental epithelium. Therefore, a combination of at least two molecular dysregulations, a delay in down-regulation of epithelial Bmp4 expression at E12.5 and in the induction of mesenchymal Fgf10 expression at E14, may contribute to decreased proliferation of the Bcl11b−/− dental epithelium. We further propose that altered epithelial proliferation in the absence of BCL11B may account for the slowed invagination of this tissue, resulting in an incisor that is approximately one-half of the size of a wild-type incisor at birth.
Apoptosis of epithelial cells comprising the enamel knot at cap stage also regulates the overall shape of the incisor [27]. Thus, delayed initiation of apoptosis in Bcl11b−/− incisors may also contribute to altered morphology in the mutant.
BCL11B Controls the Expression of FGF and TGFβ Family Members
Transition from the cap to bell stage of incisor development is accompanied by establishment of an asymmetric shape. The size and asymmetry of the mouse incisor is dictated by transcription factor networks, which control the expression of genes encoding components of various signaling pathways that play deterministic roles in cellular specification and organogenesis. The FGF and TGFβ signaling pathways, and their respective antagonists, the Sprouty proteins and FST, are particularly important in incisor development.
FGF3 and FGF10, both of which are expressed predominantly in the labial mesenchyme, maintain proliferation of EpSCs and, thus, directly contribute to the asymmetric shape of the incisor [5,28,33]. Furthermore, both FGF3 and FGF10, together with epithelial FGF9, form a positive-feedback loop on the labial aspect of the tooth. This FGF feedback loop is inhibited by Sprouty proteins on the lingual side, resulting in the limited development of the lingual EpSC niche [6]. In turn, the expression of the Sprouty genes can be induced by FGF signaling [41].
In the absence of BCL11B, a remarkable inversion of the expression patterns of FGF genes relative to the labial-lingual axis occurred, such that these genes were expressed predominantly on the lingual side, with little or no expression observed on the labial side. Therefore, BCL11B may function as a spatial switch governing expression of FGF signaling pathway members (Fig. S13). Expression of Sprouty genes was altered in a similar manner in Bcl11b−/− incisors, possibly in a compensatory or feedback manner. Consistent with this, expression of Tbx1, which is positively regulated by FGF signaling in the developing incisor [38], was similarly altered. Our findings strongly suggest that complete loss of expression of FGF family members on the labial side, coupled with ectopic expression of these signaling proteins on the lingual side of the Bcl11b−/− incisor, underlies the abnormal morphology of the Bcl11b−/− tooth.
The FGF and TGFβ signaling pathways are closely interwoven during incisor development. For example, BMP4 represses Fgf3 expression in the mesenchyme; however, activin abrogates this repression on the labial side of the incisor, which allows Fgf3 expression within this domain. Low activin expression in the lingual mesenchyme allows BMP4 to inhibit Fgf3 expression on the lingual side in an unopposed fashion [28]. In addition, BMP4 promotes ameloblast differentiation within the labial epithelium, possibly by inducing expression of p21 and ameloblastin. The ameloblast-inducing activity of BMP4 is inhibited on the lingual side by FST, but the relative lack of Fst expression in the labial epithelium facilitates terminal differentiation of ameloblasts [35]. Alterations of Bmp4 expression in the Bcl11b−/− incisor generally paralleled those observed with FGF family members. Thus, BCL11B appears to be required for Bmp4 expression on the labial side of the developing incisor and for suppression of Bmp4 expression in the lingual mesenchyme.
Expression of activin also exhibited complete reversal in the mutants at the bell stage. Activin expression was completely lost on the labial side of the mutant incisor, perhaps allowing BMP4 to inhibit Fgf3 expression within this domain. In contrast, activin transcripts were detected in the lingual mesenchyme of Bcl11b−/− incisors, suggesting that this ectopic expression domain allows activin to block the repressive action of BMP4 on Fgf3 expression in this tissue. Because FGF3 functions in a positive-feedback loop with FGF10 and FGF9 [6], the expression patterns of the latter were also altered. FGF3, and possibly other FGF family members, could then induce the development of the lingual EpSC niche in Bcl11b−/− mice. Thus, we propose that activin contributes to the establishment of new borders of expression of FGF family members in the Bcl11b−/− incisor.
BCL11B Controls Asymmetric Development of the Ameloblasts
The asymmetric pattern of expression of FGF and TGFβ signaling molecules is thought to lead to the asymmetric development of ameloblasts in wild-type incisors. Therefore, dysregulated expression of these signaling pathways in Bcl11b−/− incisors likely contributes to the ectopic development of lingual, ameloblast-like cells and delayed development of the labial ameloblasts. The expanded lingual EpSC niche in Bcl11b−/− incisors gave rise to ectopic Shh-expressing pre-ameloblasts, which further differentiated into mature, Amelx-positive ameloblasts. The mutant labial EpSC niche also gave rise to some pre-ameloblasts, which were characterized by down-regulated Shh expression. These pre-ameloblasts failed to differentiate into mature ameloblasts at the early bell stage. Although the pool of Shh-positive pre-ameloblasts was greatly reduced in Bcl11b−/− incisors at late bell stage, it was remarkable that differentiation to Amelx-positive ameloblasts occurred in the labial epithelium by E18.5. This observation suggests that another transcription factor(s) may compensate for loss of Bcl11b expression in the ameloblast lineage, allowing ameloblast development to occur, albeit in a delayed manner.
Ectopic Shh-positive pre-ameloblasts were abundant on the lingual aspect of the Bcl11b ep−/− incisor at early bell stage. Low levels of ectopic lingual expression of Fgf3, Fgf9, and activin might contribute to such differentiation of the IEE. However, the ectopic, lingual Shh-positive domain was dramatically reduced in size and was present only in the posterior epithelium by late bell stage, and mature ameloblast-like cells were not observed on the lingual aspect of the Bcl11b ep−/− incisor, suggesting that other factors and/or mesenchymal BCL11B may be sufficient to suppress terminal differentiation in the ameloblast lineage within the lingual epithelium. Deletion of Bcl11b in either the epithelium or mesenchyme did not affect labial expression of ameloblast markers and key signaling molecules, the morphology of the labial CL, ameloblast development or amelogenesis, suggesting that both epithelial and mesenchymal BCL11B may contribute to developmental processes on the labial side of the incisor.

BCL11B regulates expression of signaling molecules and transcription factors that are essential for establishment and maintenance of asymmetric incisor development. The majority of these changes in expression of key genes in Bcl11b−/− mice were qualitatively similar and characterized by down-regulation on the labial side and up-regulation on the lingual aspect of the developing incisor. The single exception to this observation was Fst, the expression domain of which appeared to be maintained by BCL11B. Collectively, these data suggest that BCL11B regulates the expression of downstream signaling molecules and transcription factors bidirectionally, activating expression on the labial side while repressing expression of the same genes on the lingual side (Fig. S13).
Integration of BCL11B into FGF and SHH Signaling Pathways
Combined deletion of Spry4 and one allele of Spry2 [6] also results in ectopic development of Shh-positive pre-ameloblasts along the lingual epithelium of the developing incisor, suggesting a possible convergence in the FGF signaling pathways and BCL11B-dependent transcriptional regulation. This interpretation was supported by the strongly decreased expression of BCL11B within the lingual IEE of Spry4-/-; Spry2+/- incisors, indicating that unrestrained activity of the FGF signaling pathways results in pronounced down-regulation of BCL11B expression in the lingual IEE and subsequent de-repression of Shh expression. Based on these findings, we propose a model (Fig. 10) to explain the role of BCL11B in the FGF signaling pathways within the lingual epithelium at E16.5 (Fig. 10A) and in the Spry4-/-; Spry2+/- incisor (Fig. 10B). This model posits that: 1. SPRY4 and SPRY2 inhibit establishment of an FGF positive-feedback signaling loop [6] and the repressive effect of FGFs on BCL11B expression in the wild-type lingual IEE.
2. As a result, BCL11B is highly expressed in the entire lingual epithelium in wild-type mice and directly or indirectly inhibits Fgf9 and Shh expression in the lingual epithelium, as was demonstrated in both Bcl11b-/- and Bcl11b ep-/- incisors. FGF9 acts in the mesenchyme to induce expression of Fgf3 and Fgf10 [6,28]. Therefore, the repression of Fgf9 expression by BCL11B in the lingual epithelium prevents activation of Fgf3 and Fgf10 expression in the dental mesenchyme (Fig. 10A). 3. In contrast, Sprouty gene inactivation leads to up-regulation of FGF gene expression on the lingual side [6] and subsequent repression of BCL11B expression in the lingual IEE. This down-regulation of BCL11B expression leads to up-regulation of FGF family members, as well as that of Shh, which induces development of ectopic pre-ameloblasts in the lingual epithelium [7,42] (Fig. 10B).
These data, which suggest that FGFs and BCL11B form a reciprocal, inhibitory circuit that is upstream of SHH, integrate FGFs, BCL11B, and SHH in a single pathway and provide insight into the molecular mechanisms underlying both the Sprouty and Bcl11b-/- phenotypes. Figure S1 Expression of Bcl11b at early bell stage. (A) RNA ISH using Bcl11b probe in sections of wild-type mice at E16.5. (B-D) Sections of wild-type mice stained with DAPI and immunostained for BCL11B. Scale bar, 500 µm. (TIF) Figure S2 Expression patterns of selected genes in Bcl11b-/- incisor at early bud stage. RNA ISH using the indicated probes in sections of wild-type and Bcl11b-/- mice at E12.5. The epithelium is outlined by red dots. Scale bar, 100 µm. | 7,949.2 | 2012-05-22T00:00:00.000 | [
"Biology"
] |
Phenomenology in minimal theory of massive gravity
We investigate the minimal theory of massive gravity (MTMG) recently introduced. After reviewing the original construction based on its Hamiltonian in the vielbein formalism, we reformulate it in terms of its Lagrangian in both the vielbein and the metric formalisms. It then becomes obvious that, unlike previous attempts in the literature of Lorentz-violating massive gravity, not only the potential but also the kinetic structure of the action is modified from the de Rham-Gabadadze-Tolley (dRGT) massive gravity theory. We confirm that the number of physical degrees of freedom in MTMG is two at fully nonlinear level. This proves the absence of various possible pathologies such as superluminality, acausality and strong coupling. Afterwards, we discuss the phenomenology of MTMG in the presence of a dust fluid. We find that on a flat homogeneous and isotropic background we have two branches. One of them (self-accelerating branch) naturally leads to acceleration without the genuine cosmological constant or dark energy. For this branch both the scalar and the vector modes behave exactly as in general relativity (GR). The phenomenology of this branch differs from GR in the tensor modes sector, as the tensor modes acquire a non-zero mass. Hence, MTMG serves as a stable nonlinear completion of the self-accelerating cosmological solution found originally in dRGT theory. The other branch (normal branch) has a dynamics which depends on the time-dependent fiducial metric. For the normal branch, the scalar mode sector, even though as in GR only one scalar mode is present (due to the dust fluid), differs from the one in GR, and, in general, structure formation will follow a different phenomenology. The tensor modes will be massive, whereas the vector modes, for both branches, will have the same phenomenology as in GR.
I. INTRODUCTION
The idea that a spin-2 field such as the graviton might have a mass was first put forward in 1939 by Fierz and Pauli [1]. However, the idea had to be put aside for some time due to the presence of a ghost, the so-called Boulware-Deser (BD) ghost, found in 1972 [2]. On top of that, the theory of a massless graviton was so successful that it seemed unnecessary to explore this exotic possibility.
However, thanks to the pioneering work by de Rham, Gabadadze and Tolley (dRGT) in 2010 [3,4], it became clear that not all the theories of massive gravity would suffer from the presence of the BD ghost. Indeed, the dRGT theory has only five degrees of freedom, two tensor, two vector and one scalar modes. While the original theory does not allow for a flat or closed Friedmann-Lemaître-Robertson-Walker (FLRW) solution [5], there exists an open FLRW solution with self-acceleration [6]. If the fiducial metric is modified from Minkowski to either de Sitter or more general FLRW one then all types of FLRW solutions become possible [7]. However, it was soon realized that at the level of linear perturbation on the FLRW background, only the gravitational waves are propagating, whereas the other modes are merely Lagrange multipliers [7]. In fact, it was shown that for the same theory all homogeneous and isotropic backgrounds are unstable, either due to the presence of a ghost at nonlinear level which cannot be set to be massive enough [8] or due to the so called Higuchi ghost at the linear level [9,10], depending on the branch of solutions.
Therefore the dRGT massive gravity leads to non-trivial phenomenologies, as one has to abandon the hypothesis of a homogeneous and isotropic space to describe our universe at sufficiently large scales [5,11,12]. Another possibility to avoid the ghost instability consists of extending the simplest model of the dRGT massive gravity by adding extra degrees of freedom such as a scalar field [13][14][15], or studying its bigravity counterpart [16][17][18].
Recently the present authors have proposed a new theory of Lorentz-violating massive gravity, which was constructed so that: 1) the number of physical degrees of freedom is two at fully nonlinear level; 2) the FLRW background equations of motion are identical to those of the dRGT theory [19]. These two conditions are sufficient to allow for stable FLRW backgrounds: there is no BD ghost, no Higuchi ghost, no nonlinear ghost. Hence the new theory serves as a stable nonlinear completion of the self-accelerating cosmological solution of [6]. The two physical degrees of freedom in this theory are simply two tensor modes, whose quadratic Lagrangian on FLRW backgrounds is the same as that of the dRGT theory. In particular, the kinetic term of the two modes is essentially given by the Einstein-Hilbert term and thus its coefficient is always of order unity. In addition, the propagation speed of the tensor modes is not modified. Therefore, this theory automatically avoids pathologies known in the literature, such as superluminality, acausality and the above-mentioned ghost instabilities. While in the literature there have been classes of massive gravity theories with modifications in the potential part of the action, the MTMG modifies the kinetic part as well (see section III). Thus, as far as the present authors know, this theory does not fall into any one of the classes of theories considered in the past. We call this theory the minimal theory of massive gravity (MTMG).
In Lorentz-invariant massive gravity theories (without the BD ghost), one scalar, two vector and two tensor modes form a multiplet of 5 degrees of freedom. Therefore the first of the two requirements imposed on the MTMG implies that Lorentz invariance should be broken. In Lorentz violating theories, on the other hand, scalar, vector and tensor parts can be independent from each other. This is the reason why it is possible to realize a theory of massive gravity with only two physical degrees of freedom. Needless to say, the Lorentz violation is in the gravity sector and disappears in the massless limit. Hence the Lorentz violation induced on the matter sector via graviton loops should be suppressed by a minuscule factor m 2 /M 2 P , where m is the graviton mass. There have been classes of Lorentz-violating massive gravity theories in the literature [20][21][22][23][24][25]. As mentioned above, however, previous attempts modify only the potential part of the action and leave the kinetic part unchanged 1 . More importantly, none of them fulfills the two requirements that we impose on the MTMG. The MTMG differs from the earlier attempts because it fulfills the two requirements stated above. The four-dimensional Lagrangian for the MTMG is fully nonlinear, only has two degrees of freedom and, as we shall see later on, it contains non-trivial constraints which modify not only the potential term for the graviton but also the kinetic structure of the Lagrangian.
In general, one should expect the phenomenology of the MTMG to be simpler than that of dRGT: since the scalar mode is absent (as well as the vector modes), no extra scalar force is present and there is no need to implement the Vainshtein mechanism at the solar-system scale. On the other hand, it is of interest to explore the phenomenology of this theory and to identify its differences from GR. In this paper we address this issue.
In the present paper we first review the MTMG introduced in [19] in the vielbein formalism, and count the number of physical degrees of freedom. Afterwards, we find the Lagrangian of MTMG by using the three-dimensional vielbeins. Third, we also write this same Lagrangian in the metric formalism. This shows that the MTMG, which was introduced in [19] by means of its Hamiltonian, so as to make sure that only two degrees of freedom were propagating on any background, can be equally described in the Lagrangian formalism.
On using the Lagrangian of the theory written in the metric formalism, we discuss the phenomenology of MTMG on a flat FLRW background in the presence of a dust matter fluid. We confirm the existence of two branches: the normal branch and the self-accelerating one. As already mentioned, the background equations of motion are, by construction, identical to the ones in dRGT theory.
Furthermore, we study the behavior of the linear perturbations, and find: i) the self-accelerating branch has a phenomenology which is identical to GR both for scalar and vector perturbations, however, the tensor modes, being massive, have a different propagation dynamics; ii) the normal branch, on the other hand, has a different phenomenology with respect to GR both in the scalar and tensor sectors. This makes this branch ready to be tested against contributions to structure formation. In particular we find that, depending on the dynamics of the fiducial metric, it is possible to have non-trivial values at late times for the linear-perturbation observables, e.g. G eff , η.
II. CONSTRUCTION
In this section we review the construction of the minimal theory of massive gravity (MTMG) proposed in [19]. The construction consists of the following three steps: (i) to define a precursor theory by substituting the ADM vielbein to the dRGT action (subsection II A); (ii) to switch to Hamiltonian (subsection II B); and (iii) to add two additional constraints to define the minimal theory (subsection II C). We then confirm that the number of physical degrees of freedom in the minimal theory is indeed two at fully nonlinear level (subsection II D).
A. Precursor theory
The basic variables of the theory are the lapse function N , the shift vector N i and the spatial vielbein e I j . The theory also contains the fiducial lapse function M , the fiducial shift vector M i and the fiducial spatial vielbein E I j . While the first set of variables (N , N i , e I j ) is dynamical, the second set (M , M i , E I j ) is fixed as part of the definition of the theory. The Levi-Civita symbol is normalized as ε^{0123} = 1 = −ε_{0123}. By choosing the ADM form of the vielbeins, we have fixed the local Lorentz boost, have picked up a preferred local Lorentz frame and thus have already modified the original dRGT theory. The precursor action can be rewritten as where we have defined X I J and Y I J as One can easily see that the graviton mass term in the precursor action is manifestly linear in the lapses and does not depend on the shift variables. This is in sharp contrast to the original dRGT theory.
B. Hamiltonian analysis of precursor theory
Primary constraints
Since the graviton mass term is manifestly linear in the lapses and shifts, we consider N and N i as Lagrange multipliers. We then have 9 components of e I j as basic variables. We define canonical momenta conjugate to them in the standard way as where The fact that K ij is symmetric leads to the following 3 primary constraints where and indices between the square brackets are anti-symmetrized as A [ab] = A ab − A ba . The remaining 9 − 3 = 6 relations between the canonical momenta and the time derivative of the basic variables can be inverted as Thus there are no more primary constraints associated with (15). The Hamiltonian of the precursor theory, together with the primary constraints, is where D j is the spatial covariant derivative compatible with γ ij , √ γ = det γ ij , and α MN (antisymmetric) are 3 Lagrange multipliers. Here and in the following we work in units for which M 2 P = 2. The Hamiltonian is manifestly linear in the lapse N and the shift N i and does not contain their time derivatives. Thus, as already stated, we consider N and N i as Lagrange multipliers. Correspondingly, we have the following primary constraints in addition to (17):
Secondary constraints and total Hamiltonian
In order to implement the conservation in time of the primary constraints, we need the corresponding Poisson brackets with the Hamiltonian to vanish. The partial time derivative in Eq. (23) appears because of the choice of the unitary gauge, so that R 0 explicitly depends on time through the fiducial vielbein. Then Eq. (22) leads to three new secondary constraints, namely where we have defined This secondary constraint fixes Y MN to be symmetric. We can then use Eq. (23) to find the expression of one of the components of N i (say N i=3 ) in terms of the other variables. For the same reason we can solve one of the three Eqs. (24) (say for i = 3) for the lapse variable N . Therefore the remaining two Eqs. (24) give rise to two secondary constraints (say Ṙ 1 ≈ 0 and Ṙ 2 ≈ 0, after solving Ṙ 3 ≈ 0 with respect to one of the Lagrange multipliers). On naming these two constraints C̃ τ (τ = 1, 2), we then have the total Hamiltonian. Any further time derivative of the constraints does not lead to any new (tertiary) constraints; therefore Eq. (30) represents the total Hamiltonian.
Number of physical degrees of freedom in precursor theory
It is straightforward to show that the determinant of the 12 × 12 matrix made of the Poisson brackets among the 12 constraints is non-vanishing. This implies that the 12 constraints are independent second class constraints and that the consistency of them with the time evolution uniquely determines all Lagrange multipliers without generating additional constraints. Since each of these 12 second class constraints removes one single degree of freedom in the phase space, we finally have (1/2)(9 × 2 − 12) = 3 physical degrees of freedom on a generic background at nonlinear level. This is consistent with the analysis of [23].
It can be proven that these degrees of freedom on FLRW cosmological backgrounds in the so called normal branch reduce to the two tensor modes and an extra scalar degree of freedom. In the self-accelerating branch, on the other hand, the scalar mode has a vanishing kinetic term at the quadratic order and acquires its kinetic term only at higher order, meaning that the scalar degree of freedom is strongly coupled in the self-accelerating branch.
So far, breaking Lorentz symmetry with the precursor Hamiltonian has removed the vector modes present in the dRGT theory, but we should expect the remaining scalar degree of freedom to be strongly coupled on some backgrounds such as the FLRW background in the self-accelerating branch. Since our aim is to heal the dRGT theory, we further try to remove this unwanted degree of freedom, while keeping the same background equations of motion as the dRGT theory.
C. Minimal theory
We have seen that, besides Y [MN ] ≈ 0, the precursor theory possesses the two secondary constraints C̃ τ (τ = 1, 2), which are two linear combinations of the three quantities C i (i = 1, 2, 3) defined as follows, where ∂H 0 /∂t is the partial derivative of H 0 , viewed as a function of (t, e I j ), with respect to t. The explicit t dependence of H 0 is through the fiducial vielbein.
The minimal theory of massive gravity is defined by imposing the four constraints in (32). Since C̃ τ (τ = 1, 2) are linear combinations of C i , only two constraints among the four in (32) are independent new constraints. Therefore, the minimal theory is defined by the Hamiltonian where Here we have defined The main difference between the two Hamiltonians in Eqs. (33) and (30) consists of the presence of the four constraints C 0 , C i rather than the two constraints C̃ τ . Furthermore the constraints C 0 , C i are the time derivatives of the primary constraints with respect to H 1 (and not H, although H ≈ H 1 ).
D. Number of physical degrees of freedom in minimal theory
Having added the extra two constraints, we now have 14 constraints in the 9 × 2 = 18 dimensional phase space. Thus the number of dimensions of the physical phase space is less than or equal to 18 − 14 = 4, where the equality holds if all 14 constraints are second class and if there is no more constraint. Therefore, we conclude that (number of d.o.f.) ≤ (1/2) · 4 = 2 at the fully nonlinear level. On the other hand, in section VIII we shall explicitly show that cosmological perturbations around FLRW backgrounds contain two tensor modes at the linear level, meaning that (number of d.o.f.) ≥ 2 at the nonlinear level. Combining the two inequalities we conclude that (number of d.o.f.) = 2.
One can reach the same conclusion also in a more formal way. Since the actual calculation is somewhat cumbersome, we shall simply give a brief outline. What we need to show is that the determinant of the matrix of Poisson brackets among the 14 constraints is non-vanishing. In other words, we need to show that, for a vector v σ , contracting this matrix with v σ and setting the result to zero admits only the trivial solution v σ = 0. Once this proposition is proved, we can conclude that all the 14 constraints are independent second class constraints and that the consistency of them with the time evolution does not lead to additional constraints. Since we have 14 second-class constraints in the 9 × 2 = 18 dimensional phase space, the number of physical degrees of freedom in this theory is (1/2)(9 × 2 − 14) = 2 at fully nonlinear level.
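For quick reference, the counting used here and in subsection II D can be summarized in one line (a sketch, valid under the assumption made above that all listed constraints are second class and independent):

\[
\text{precursor theory: } \tfrac{1}{2}\left(9 \times 2 - 12\right) = 3 , \qquad
\text{minimal theory: } \tfrac{1}{2}\left(9 \times 2 - 14\right) = 2 ,
\]

where 9 counts the components of the spatial vielbein e I j , the factor 2 converts configuration variables into phase-space dimensions, each second-class constraint removes one phase-space dimension, and the overall 1/2 converts phase-space dimensions back into physical degrees of freedom.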
III. LAGRANGIAN
The Hamiltonian equation of motion for e I j can be inverted to express π ij and Π I j in terms of the extrinsic curvature as and where Equivalently, What is important here is that the relation (38) in MTMG differs from the corresponding relation (16) in the precursor theory. This difference stems from the fact that the additional constraints depend on the canonical momenta. Hence the action of the theory is where we have dropped α MN P MN and β MN Y [MN ] from the Hamiltonian as they will automatically come out (since Θ ij is defined as a symmetric tensor, and as we shall explicitly see below) and it is understood that π ij and Π I j are expressed in terms of the extrinsic curvature using the above formulas. Explicitly, where S pre is the action for the precursor theory. It is understood that C 0 is now defined as while C i , P MN and Y [MN ] are defined as before. Finally, C̃ 0 is defined as As a consistency check, let us calculate the Hamiltonian of the system defined by the action and compare it with the Hamiltonian defined in the previous section. The system has the following primary constraints where π N , π i , π λ and π λ i are canonical momenta conjugate to N , N i , λ and λ i , respectively, and P [MN ] is defined in the previous section. The canonical momenta conjugate to e I j are then given precisely by (39). The Hamiltonian is then H̃, where H (with α MN P [MN ] and β MN Y [MN ] included) was defined in the previous section and Y [MN ] has been added to the Hamiltonian as a solution to the secondary constraint associated with the primary constraint P [MN ] = 0. Since H depends linearly on N , N i , λ and λ i , it is obvious that π N = 0, π i = 0, π λ = 0 and π λ i = 0 are first class. We can then safely downgrade N , N i , λ and λ i to Lagrange multipliers, and drop π N , π i , π λ and π λ i from the phase space variables. After that, the Hamiltonian H̃ in (47) becomes manifestly equivalent to H defined in the previous section.
IV. METRIC FORMULATION
Let us introduce the Lagrangian of the theory in the metric formulation. In order to define the theory in unitary gauge we need to introduce two explicitly time-dependent external fields The meaning of these two fields can be better understood in the language of the fiducial vielbein E M j as being where E L i is the inverse vielbein. These two quantities are given functions of time (and possibly of space). Consider the tensor K m n , such that K m l K l n = γ̃ ms γ sn , and we define its inverse, K m j , as In terms of the vielbein we can write In the metric formalism, provided that Y I J = E I i e J i is symmetric, we have Let us build the following tensor; then we further define the four constraints imposed on the action in order to reduce the degrees of freedom: where K ij is the extrinsic curvature, and K and ζ represent K n n and ζ n n , respectively. The following is the action of the minimal theory of massive gravity written in the metric formalism: where we have explicitly re-inserted standard units for the Planck mass, M P , and integrated by parts the constraint in λ i . As is well known, in the 1+3 formalism, it is possible to write the action of General Relativity as where Therefore, we have The contribution from S 4 gives rise to a cosmological constant term. Furthermore, it is clear, as expected, that also in the metric formalism the graviton mass term in the action, Σ_{i=1}^{4} S i , is linear in the lapses and does not depend on the shift variables. This is a consequence of the Lorentz violation in the gravity sector.
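The displayed form of the General Relativity action in the 1+3 formalism is not reproduced above; for orientation, the standard ADM decomposition it alludes to reads, up to boundary terms,

\[
S_{\rm GR} = \frac{M_P^2}{2}\int dt\, d^3x\, N\sqrt{\gamma}\,\left[\, {}^{(3)}R + K_{ij}K^{ij} - K^2 \right],
\qquad
K_{ij} = \frac{1}{2N}\left(\dot{\gamma}_{ij} - D_i N_j - D_j N_i\right),
\]

with γ ij the spatial metric, D i its compatible covariant derivative and (3)R its Ricci scalar. This is quoted as standard background material; the omitted equation in the original may differ in normalization conventions.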
The action for the minimal theory of massive gravity introduces four constraints associated with the four Lagrange multipliers λ and λ i , in addition to those associated with N and N i . It is possible, in principle, to integrate out these Lagrange multipliers, e.g. the field λ, leading to a non-standard contribution to the action owing to the dependence of the scalar C̃ 0 on the extrinsic curvature. Therefore the action of minimal massive gravity cannot be written as the sum of the Einstein-Hilbert term plus a general potential term.
As for the matter fields we will consider a pure dust component (see e.g. [27,28]), where J α is a vector density of weight 1, that is, under a coordinate transformation it transforms as J^{α'} = J (∂x^{α'}/∂x^β) J^β, with J = det(∂x^β/∂x^{α'}). Instead, ρ m , n and ϕ are scalar fields. The numerical constant µ 0 represents the mass of one dust particle. The 4-vector of the dust fluid, u α , is defined so that it is normalized, u^α u_α = −1. On taking variation of the action with respect to J α , one finds
V. FRIEDMANN BACKGROUND
From the Lagrangian approach, the Friedmann equation reads where The second Einstein equation reads We also have introduced the quantity We also have the equation of motion coming from variations of the Lagrangian with respect to λ, as in From this last equation, we can notice the existence of two branches. The matter satisfies the usual conservation equation We can build a convenient non-trivial linear combination of equations as in Then we find that E B can be written as a polynomial expression in λ, given by This equation should be used in order to find the background value for λ in the Lagrangian formalism. We can introduce an effective equation of state parameter for the massive-gravity component, as A. Self-accelerating branch In this case we consider the case which implies that X = constant. In this case we find Furthermore, we have for which we find for λ the solution which also implies In this branch, we have that at the level of the background we have a pure cosmological constant. In this case we can summarize the equations of motion asḢ
B. Normal branch
In this case we have the solution Then we find that We now show that the first factor on the right hand side is non-vanishing and that is enforced. To prove this by contradiction is easy. For this purpose, let us suppose that H = XH f , then we find This condition would introduce a would-be extra dynamical constraint, in addition to the Friedmann equation, which will not be in general satisfied. Therefore the only physical solutions to E B = 0 are those satisfying (V B), which, in turn, leads to Therefore, no matter which branch we are in, we will always find: However, if the self accelerating branch was leading to a pure cosmological constant, for the normal branch, we have the possibility of a non-trivial dynamics for the background. In fact, the Friedmann equation reads 3M 2 P H 2 = ρ Λ + ρ X + ρ m , where we have found it convenient to split the total gravitational energy density ρ g into a pure cosmological constant term (proportional to c 4 ) and in a (non-trivially) dynamical term ρ X as in: Indeed at the level of the background, there would be a dark component whose effective equation of state would be given by: which is, in general, a time-dependent quantity. We notice here that in the case the dynamics leads to In other words, after choosing a specific dynamics for the fiducial metric, it is possible to have also ρ X behave as a cosmological constant component.
VI. SCALAR PERTURBATIONS
Let us consider perturbing the metric in the following form and let us perturb the dust components as follows where N 0 is a constant resulting from integrating the background equation of motion for ϕ, which satisfies the relation ρ = µ 0 N 0 /a 3 , and corresponds to the total number of dust particles. We can also verify that combining Eq. (74) with Eq. (113) leads to δu i = −v m . We also need to perturb the Lagrange multipliers as follows In the following, it will be useful to introduce the following gauge invariant variables The two potentials Ψ, Φ reduce to the Bardeen potentials in the Newtonian gauge.
Since we have that ρ m = ρ m (n), on expanding it up to first order, we find that so that, on using Eq. (118), we can substitute δj 0 in the Lagrangian for δ m .
A. Self accelerating branch
After expanding the action at second order, one finds that the perturbation field δℓ gives the constraint ζ = 0. Furthermore, the field δλ gives the extra constraint s = 0. Therefore the Lagrangian reduces to Let us first integrate out the field δj, as Then the Lagrangian reduces to Next let us use the equation of motion for χ to integrate out α. Then we find Finally, we can integrate by parts v̇ m , so that v m becomes a Lagrange multiplier which can be easily integrated out. In fact, we find and the no-ghost condition reduces to ρ m > 0. The equation of motion for δ m reads which corresponds to the standard GR equation of motion. Therefore the phenomenology of this branch coincides with the one in General Relativity. In particular, this mode has c_s² = 0, as expected.
Phenomenology
Let us consider the equations of motion for the gauge invariant fields. Since ζ, s vanish, we find that On combining several equations of motion we find, without any approximation, which describes exactly the phenomenology of the dust fluid in General Relativity. Therefore we conclude that, regarding the scalar sector, we should not see any difference between the minimal theory of massive gravity and General Relativity. The difference only appears, as we shall see later on, in the tensor sector, since the gravitational waves acquire in general a non-zero mass.
B. Normal branch
Here we discuss the behavior of the perturbations and their phenomenology for the normal branch of the background solutions, namely the ones defined byẊ where we have introduced the quantity Therefore for r = 1, X is constant and its contribution reduces to a cosmological constant. After expanding the equation of motion at second order in the fields, the Lagrange multiplier δℓ gives the following constraint We then integrate out the fields δj and δλ (using their own equations of motion), and replace δj 0 in terms of Then one can solve the linear constraint of α for the field v m . After this step we can integrate out the field χ, so that the Lagrangian takes the form where where we have defined After integrating out the auxiliary field s, we find where so that the no-ghost condition for the field δ m is equivalent to setting
Phenomenology
Let us consider the equation of motion for the variable δ m . The time-evolution of the variable δ m describes, at linear order, the growth of structures in our universe. It can be written as where In the large k-limit, the coefficients of the differential equation reduce to where we have defined Here we have used the Friedmann equation 3M 2 P H 2 = ρ m + ρ g , in order to make appear only the dust density and the dark energy density induced in the MTMG theory, ρ g .
We notice here that in the large-k limit, the leading term in C 1 , which corresponds to the no-ghost condition, is positive. On assuming that for some redshift interval we have ρ m ≃ |m²| M_P², but still |ρ g | < ρ m , one can find a non-trivial evolution for the matter density profile, even in the case r = 1 (for which ρ g is a constant). In this same case, if the following inequalities are satisfied then it is possible to have 0 < Ḡ eff < G N , i.e. weak gravity regimes, together with a positive mass for the gravitational waves, as will be explained in Section VIII. It is possible to write down the expression for the fields Ψ and Φ in terms of δ m and δ̇ m . On considering the subhorizon approximation, namely that k/(aH) ≫ 1, and, at the same time, δ̇ m /N ≃ Hδ m , we find that where we have also imposed that 3M_P² H² = ρ m + ρ g . Therefore, in general, at those redshifts for which H² ≲ |Γ 1 m²| is verified, it is indeed possible to have a non-trivial phenomenology (compared to GR) in the normal branch, even if no extra scalar mode has been added into the theory. On the contrary, for those redshifts for which |Γ 1 m²| ≪ H² holds, the phenomenology will tend to agree with the one of GR.
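To illustrate how a modified effective gravitational coupling changes structure growth in the normal branch, the following is a minimal numerical sketch. It assumes the standard quasi-static, subhorizon form of the growth equation, δ̈ m + 2Hδ̇ m = 4πG eff ρ m δ m, a matter-dominated background, and a hypothetical constant ratio G eff /G N ; the paper's actual coefficients C 1 , C 2 and Γ 1 are not reproduced here, so this is an illustration of the mechanism rather than the MTMG result.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal growth-equation sketch (assumptions stated above).
# Background: flat, matter-dominated FLRW, a(t) ~ t^(2/3), H = 2/(3t).
# Growth:     d2(delta)/dt2 + 2*H*d(delta)/dt = 4*pi*Geff*rho_m*delta,
#             with 4*pi*G_N*rho_m = (3/2)*H^2 in matter domination.

def growth_rhs(t, y, geff_over_gn):
    delta, ddelta = y
    H = 2.0 / (3.0 * t)
    source = 1.5 * H**2 * geff_over_gn * delta   # 4*pi*Geff*rho_m*delta
    return [ddelta, -2.0 * H * ddelta + source]

def growth_factor(geff_over_gn, t0=1.0, t1=100.0):
    # Start from the GR growing mode, delta ~ t^(2/3).
    y0 = [1.0, 2.0 / (3.0 * t0)]
    sol = solve_ivp(growth_rhs, (t0, t1), y0, args=(geff_over_gn,),
                    rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

if __name__ == "__main__":
    for ratio in (1.0, 0.9, 1.1):   # hypothetical Geff/G_N values
        print(f"Geff/G_N = {ratio:4.2f} -> delta(t1)/delta(t0) = "
              f"{growth_factor(ratio):.2f}")
```

For G eff /G N = 1 the sketch recovers the GR growing mode δ ∝ t^{2/3}; values below 1 mimic the weak-gravity regime mentioned above and suppress growth, values above 1 enhance it.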
VII. VECTOR MODES
On perturbing the action for the vector modes, we consider the metric perturbations as follows Furthermore the shift vector will be split as and also the perfect fluid will possess vector modes u T i . Finally the vector λ i will have a vector mode contribution as with C T i , V T i , u T i and L i T all satisfying the usual transverse relation, e.g. ∂ i C T i = 0. Treating the perfect fluid along the lines of [29], after expanding the action at second order for the vector-mode variables, one finds that the constraint L i T sets In this case the action exactly reduces to the action in General Relativity describing the vector modes. Therefore the phenomenology for the vector modes is exactly the same as in General Relativity in both branches. In fact, we find
VIII. TENSOR MODES
The tensor modes for this theory have been already discussed before in the literature [19]. But it is easy to see that since the constraints coming from λ and λ i have only scalar and vector contributions, the tensor mode action, at quadratic order, will be exactly the same as in the dRGT model. In particular we find where This expression is valid for both the normal branch and the self-accelerating one. In order to ensure stability, one requires µ² > 0.
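The quadratic tensor action referred to above is not displayed in this extraction. For orientation only, a standard parametrization of massive tensor perturbations on a flat FLRW background, which the text states coincides with the dRGT one, is

\[
S_T^{(2)} = \frac{M_P^2}{8}\int dt\, d^3x\, N a^3 \left[ \frac{\dot h_{ij}\dot h^{ij}}{N^2} - \frac{\partial_k h_{ij}\,\partial^k h^{ij}}{a^2} - \mu^2\, h_{ij} h^{ij} \right],
\]

so that each polarization obeys a dispersion relation of the form ω² = k²/a² + µ², and µ² > 0 is required for stability. The normalization here is a conventional choice and need not match the omitted equation.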
In the case r = 1, in the normal branch, we find that µ 2 = −2Γ 1 m 2 > 0, so that, in this case, m 2 and Γ 1 need to have opposite signs. In the same case, r = 1, in the self-accelerating branch, since Γ 1 = 0, actually µ 2 vanishes. It should be mentioned that in both branches the phenomenology of the tensor modes is different from General Relativity because of the presence of the mass µ for the gravitational waves.
IX. CONCLUSIONS
After reformulating the minimal theory of massive gravity (MTMG) [19] in terms of its Lagrangian in both the vielbein and the metric formalisms, we have studied the evolution of the linear cosmological perturbations in both the self-accelerating and the normal branches with a dust fluid. Solutions in both branches are stable as long as µ² ≥ 0. The strongest phenomenological upper bounds on µ known to date are µ today < 7.6 × 10⁻²⁰ eV (µ today < 1.8 × 10⁻⁵ Hz) from binary pulsars [30,31] and µ today < 1.2 × 10⁻²² eV (µ today < 2.9 × 10⁻⁸ Hz) from the detection of gravitational waves by LIGO [32], where µ today is the value of µ in the late-time universe.
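The two quoted bounds can be cross-checked by converting the mass (given as an energy in eV) to the corresponding frequency, ν = µc²/h. The short script below is a standalone numerical check, not part of the original analysis; it reproduces the Hz values quoted above.

```python
# Convert a graviton mass bound from eV to the corresponding frequency in Hz,
# using nu = m*c^2 / h (with the mass quoted as an energy m*c^2 in eV).
EV_IN_JOULES = 1.602176634e-19   # J per eV
PLANCK_H = 6.62607015e-34        # J s

def ev_to_hz(mass_ev):
    return mass_ev * EV_IN_JOULES / PLANCK_H

for label, bound_ev in [("binary pulsars", 7.6e-20), ("LIGO", 1.2e-22)]:
    print(f"{label}: {bound_ev:.1e} eV -> {ev_to_hz(bound_ev):.1e} Hz")
# Prints approximately 1.8e-05 Hz and 2.9e-08 Hz, matching the quoted bounds.
```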
We have found that the phenomenology in the self-accelerating branch exactly coincides with the one in general relativity (GR), except that the expansion of the universe acquires acceleration due to the graviton mass term even without the genuine cosmological constant and that the tensor modes acquire a non-zero mass. Therefore, the MTMG serves as a stable nonlinear completion of the self-accelerating cosmological solution [6] found originally in the de Rham-Gabadadze-Tolley theory [3,4].
In the normal branch we have found that in addition to having massive tensor modes, the scalar sector gets affected in a non-trivial way, leading to a modified dynamics (compared to GR) for the only scalar dynamical field δ m . In particular both the friction term and G eff get modifications which depend on the parameters of the theory and on the time-dependent fiducial metric.
Depending on the actual value of µ, it is possible to distinguish two different eras of the normal branch: a) H ≫ µ (at early redshifts), in which case the phenomenology tends to coincide with the one in GR; b) H ≲ µ (at intermediate/low redshifts), in which case the dynamics of δ m gets, in general, significant modifications. In this case, though, the background will also feel significant contributions from the MTMG sector. However, these contributions depend on the dynamics of the fiducial metric. In fact, it is even possible to choose the fiducial metric so that ρ g (the MTMG effective energy density in the Friedmann equation) in the normal branch behaves as an effective cosmological constant.
We have studied the behavior of G eff and η in the large-k limit and found that in the normal branch there exists a non-empty region of parameter space for which G eff < G N , while the background is stable, namely the graviton mass squared is positive. Nonetheless, at low redshifts, when ρ m ≃ ρ g , the evolution of G eff will be strongly parameter dependent. We leave the study of the consistency of the theoretical predictions with the data to a future project.
While the main focus of the present paper was on phenomenological aspects of MTMG, here we point out some of the theoretical issues to be explored in future work. The identification of the strong coupling scale and the cutoff scale is among the most important ones. Because of the existence of non-trivial constraints that are essential for the exclusion of the scalar mode, the analyses of previous attempts at Lorentz-violating massive gravity in the literature do not necessarily apply to MTMG directly. In this respect, it is expected to be insightful to see how the helicity-0 and helicity-1 degrees of freedom are removed in the Stueckelberg language that was introduced in the context of massive gravity in [33].
As already stated in the introduction, Lorentz violation in the matter sector induced by graviton loops should be suppressed by a minuscule factor m 2 /M 2 P , where m is the graviton mass. It is worthwhile proving this by explicit computation. Calculation should be straightforward, but one might need to deal with some complication due to the existence of non-trivial constraints in the gravity sector.
As constructed in [19] and reviewed in section II of the present paper, MTMG was obtained by imposing two additional constraints on the precursor theory. The additional constraints are chosen carefully so that they do not over-constrain the system nor kill the FLRW background solution. We conjecture that our choice, i.e. C 0 and the linear combination of C i (i = 1, 2, 3) that is orthogonal to C̃ τ (τ = 1, 2), is unique if we further demand that the resulting theory should respect the spatial diffeomorphism invariance. One of the reasons behind this conjecture is that for the FLRW background in the precursor theory, C 0 is essentially the time derivative of the primary constraint R 0 . Another reason is that the three components of C i (i = 1, 2, 3) form a spatial vector and that C̃ τ (τ = 1, 2) are two linear combinations of them. It is worthwhile proving this conjecture in a more rigorous way.
Last but not least, it would be interesting to seek a UV completion or a partial UV completion of MTMG.
where X = −(∂σ)²/2, and σ is a scalar field. On defining we find that, on studying the perturbations of such a field, δu i = −∂ i v m , where N(t) δσ/σ̇ = v m (assuming σ̇ > 0); then, for a general fluid, we find that, on choosing the gauge-invariant combination v m − ζ/H as the canonical field, the action for the scalar perturbation tends to blow up in the limit c_s² → 0, where c_s² ≡ P_{,X}/(2X P_{,XX} + P_{,X}). One may wonder why this happens, as in this work, the action for the scalar modes remains always finite.
It is not a problem intrinsic to the action written in Eq. (A1); rather, it is a problem of the choice of v m as the field which is supposed to describe the degrees of freedom of the system. There are several ways to prove this statement. In fact, it is clear that for a dust fluid in General Relativity, in the flat gauge (ζ = 0 = γ), the equation for v m can be found by taking variations of the Lagrangian (123) with respect to δ m , and reads as follows. This same equation of motion can be found independently of the action one considers. For example, on using the action given in Eq. (A1), it corresponds to combining the equation of motion for the field χ with α = v̇ m /N. Most importantly, Eq. (A3) is a closed equation for the field v m . Therefore it completely determines the evolution of v m . In particular, the essential point to notice here is that this equation is only first order. Therefore, there is only one single initial condition which needs to be imposed in order to completely determine the dynamics of the field v m . In this case, if it were possible to choose v m as the canonical field for the dust fluid, this would imply that the scalar sector of the dust fluid would have only 1 degree of freedom (rather than two). This is impossible, as indeed the equations of motion coming from the Lagrangian in Eq. (124) for the field δ m do require two independent initial conditions (or, equivalently, there is one more independent initial condition to be imposed in the Lagrangian in Eq. (123) for the field δ m ). Therefore the canonical field for the dust fluid cannot be chosen to be proportional to v m , but it can be chosen to be proportional, e.g., to δ m .
Appendix B: Integrating auxiliary variable in & out
Let us consider a simple harmonic oscillator described by a Lagrangian of the form L = (A/2)q̇² − (B/2)q², with constant coefficients A and B. This can be rewritten as L = [A/(2C²)](Cq̇ + Dq)² − [(AD² + BC²)/(2C²)] q² + (total derivative).
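As a quick check of this rewriting (using the form of the starting Lagrangian assumed above), expanding the square gives

\[
\frac{A}{2C^2}\left(C\dot q + D q\right)^2
= \frac{A}{2}\dot q^{\,2} + \frac{AD}{C}\, q\dot q + \frac{AD^2}{2C^2}\, q^2 ,
\qquad
\frac{AD}{C}\, q\dot q = \frac{d}{dt}\!\left(\frac{AD}{2C}\, q^2\right),
\]

so the cross term is a total derivative, while the q² coefficients combine as AD²/(2C²) − (AD² + BC²)/(2C²) = −B/2, recovering the original Lagrangian up to a total derivative.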
This Lagrangian is equivalent to the following one.
It is easy to see that the previous Lagrangian is obtained from the present one by simply integrating out Q. In other words, L̃ is obtained from L by integrating in the auxiliary variable Q.
By integrating out q, we then obtain the following equivalent Lagrangian where ǫ is a constant; then we obtain Ā. We thus have the equivalence under the correspondence This is equivalent to the following canonical transformation where p = q̇ and P = Q̇/ǫ² are momenta conjugate to q and Q, respectively. The Hamiltonians corresponding to the Lagrangians are equal to each other. | 9,727.2 | 2015-12-13T00:00:00.000 | [
"Physics"
] |
Impaired cerebral autoregulation is associated with poststroke cognitive impairment
Abstract Objective To investigate whether dynamic cerebral autoregulation (CA) and neuroimaging characteristics are determinants of poststroke cognitive impairment (PSCI). Methods Eighty patients within 7 days of acute ischemic stroke and 35 age‐ and sex‐matched controls were enrolled. In the patients with stroke, brain magnetic resonance imaging and dynamic CA were obtained at baseline, and dynamic CA was followed up at 3 months and 1 year. Montreal Cognitive Assessment (MoCA) was performed at 3 months and 1 year. Patients with a MoCA score <23 at 1 year were defined as having PSCI, and those with a MoCA score that decreased by 2 points or more between the 3‐month and 1‐year assessments were defined as having progressive cognitive decline. Results In total, 65 patients completed the study and 16 developed PSCI. The patients with PSCI exhibited poorer results for all cognitive domains than did those without PSCI. The patients with PSCI also had poorer CA (lower phase shift between cerebral blood flow and blood pressure waveforms in the very low frequency band) compared with that of the patients without PSCI and controls at baseline and 1 year. CA was not different between the patients without PSCI and controls. In the multivariate analysis, low education level, lobar microbleeds, and impaired CA (very low frequency phase shift [≤46°] within 7 days of stroke), were independently associated with PSCI. In addition, impaired CA was associated with progressive cognitive decline. Interpretation Low education level, lobar microbleeds, and impaired CA are involved in the pathogenesis of PSCI.
Introduction
Cerebral autoregulation (CA) is the mechanism that minimizes changes in cerebral blood flow (CBF) during blood pressure fluctuations. Impaired CA results in unstable CBF and is detrimental to the outcome of neurological diseases, including subarachnoid hemorrhage, traumatic brain injury, and ischemic stroke. [1][2][3][4][5][6] Moreover, impaired CA is associated with neurodegenerative pathology, including cerebral amyloid deposition and white matter hyperintensities (WMHs). 7 Therefore, CA may be a biomarker of both cerebrovascular and neurodegenerative diseases.
Poststroke cognitive impairment (PSCI) can hinder activities of daily living, decrease quality of life, and increase the healthcare burden. 8 PSCI may occur immediately or months after a stroke. Early-onset PSCI is caused by severe cerebral tissue loss or a lesion on the cognition-related network, whereas the mechanisms of late-onset PSCI are largely unclear. The prevalence of mild PSCI can be up to 52% 6 months after a stroke, 9 but cognitive function may not be thoroughly assessed in all patients with stroke. Therefore, PSCI is a common but overlooked sequela of stroke. Because CA is impaired in patients with cerebrovascular or neurodegenerative diseases, impaired CA is likely to be a risk factor for late-onset PSCI. Some neuroimaging characteristics such as cerebral microbleeds or WMHs are known risk factors for cognitive impairment and are common in patients with stroke. 10 However, the relationships between CA, neuroimaging characteristics, and PSCI are unclear.
Dynamic CA is an approach to CA measurement in which CBF and peripheral blood pressure (BP) are noninvasively monitored in the resting state; 11 therefore, it is feasible for clinical practice. In the present study, we followed up the temporal change in cognitive function and dynamic CA for 1 year in patients with acute ischemic stroke to determine whether dynamic CA indices are associated with the occurrence of PSCI at 1 year. In addition, we investigated the association between patients' dynamic CA and neuroimaging characteristics, including the presence of cerebral microbleeds and WMHs.
Methods
The deidentified data employed in the current study are available to qualified investigators upon reasonable request.
Participants
This study was approved by the Institutional Review Board of Taipei Medical University. Patients who were admitted to Taipei Medical University Shuang Ho Hospital within 7 days of acute ischemic stroke were consecutively screened for eligibility to participate in the current study. The exclusion criteria were as follows: (1) having a known cognitive impairment or a neurodegenerative disease that impairs daily activities before the stroke, (2) having a large cerebral infarct (greater than one third of the middle cerebral artery territory) or a strategic infarct (paramedian thalamus, medial frontal cortex, or hippocampus) that would cause early-onset PSCI, (3) having a severe language or physical disability that impeded neuropsychological tests, (4) having atrial fibrillation (cognitive decline can be caused by a cardioembolism in the absence of clinical stroke 12 ), and (5) no reliable dynamic CA result at the beginning of study. In total, 80 patients were recruited, and written informed consent was obtained from all participants or their legal guardians. Each patient was evaluated within 7 days of a stroke at admission and was followed up after 3 months and 1 year at the outpatient clinic. Six patients declined to participate in the follow-up studies and were excluded. The data of 35 age-and sex-matched healthy volunteers recruited in our past study were employed as control data. 13 A flowchart of the patient enrollment and study protocol is provided (Fig. 1).
Clinical characteristics, neurological and cognitive tests, and neuroimaging
The patients' stroke severity was evaluated using the National Institutes of Health Stroke Scale (NIHSS) at admission and at 1 year. Daily activity functional status was evaluated using the modified Rankin Scale (mRS) at 3 months and 1 year. Cognitive functions were evaluated using the Montreal Cognitive Assessment (MoCA) screening tool at 3 months and 1 year. A trained research assistant blinded to the results of patients' neuroimaging and dynamic CA conducted the neurological and cognitive assessments. Patients with a MoCA score <23 at 1 year were defined as having PSCI, and those with a MoCA score that decreased by 2 points or more between the 3-month and 1-year assessments were defined as having progressive cognitive decline.
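These two outcome definitions translate directly into a simple classification rule; the sketch below is illustrative only (the field names and example values are hypothetical, not taken from the study database):

```python
from dataclasses import dataclass

@dataclass
class Patient:
    moca_3m: int  # MoCA total score at the 3-month assessment
    moca_1y: int  # MoCA total score at the 1-year assessment

def has_psci(p: Patient) -> bool:
    # PSCI: MoCA score below 23 at the 1-year assessment.
    return p.moca_1y < 23

def has_progressive_decline(p: Patient) -> bool:
    # Progressive cognitive decline: a drop of 2 or more points
    # between the 3-month and 1-year assessments.
    return (p.moca_3m - p.moca_1y) >= 2

example = Patient(moca_3m=24, moca_1y=21)
print(has_psci(example), has_progressive_decline(example))  # True True
```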
The following brain magnetic resonance images (MRI) were obtained once at admission (GE Signa HDx 1.5T, General Electric Healthcare, Waukesha, WI, USA): T1- and T2-weighted images, T2 fluid-attenuated inversion recovery (FLAIR) images, diffusion-weighted images (DWIs), susceptibility-weighted angiography (SWAN), and time-of-flight magnetic resonance angiogram (TOF MRA). Carotid Doppler ultrasonography and electrocardiogram were performed once at admission. The ischemic lesion volume was calculated using the DWI, and the severity of vascular stenosis was estimated using TOF MRA and carotid Doppler ultrasonography. 14,15 The distribution of cerebral microbleeds was determined using SWAN, and the severity of WMHs was evaluated using FLAIR images and the Fazekas scale. 16 The images were interpreted by an experienced neurologist blinded to the patients' outcomes. Neurological and cognitive tests were conducted and neuroimaging was obtained in patients but not in controls.
Dynamic cerebral autoregulation measurement and analysis
Dynamic CA was measured under spontaneous fluctuation in BP and CBF velocity (CBFV) during 5 min in a supine resting state. In brief, the CBFV of the extracranial internal carotid artery was recorded using a Doppler ultrasonography monitor (DWL MultiDop-T, Compumedics DWL, Singen, Germany), and BP was recorded using a noninvasive BP monitor on the basis of finger plethysmography (Finometer Pro, Finapres Medical Systems, Enschede, The Netherlands), as in our previous studies. 1,13 The 5-min CBFV and BP waveforms were recorded simultaneously, and a dynamic CA algorithm, namely transfer function analysis (TFA; the MATLAB code is available at http://www.car-net.org/content/resources), was applied to calculate the phase shift, gain, and coherence between the BP and CBFV waveforms in the very low frequency (VLF, 0.02-0.07 Hz) and low frequency (LF, 0.07-0.20 Hz) bands. 11 CA minimizes the changes in CBFV during spontaneous hemodynamic fluctuation; therefore, the changes in CBFV are smaller in amplitude and are restored to baseline faster than those in BP, which could be quantified as the gain and phase shift between BP and CBFV waveforms by using TFA. In patients with impaired CA, the gain is larger and phase shift is smaller than those in patients with normal CA. 11,17 In the current study, dynamic CA was tested within 7 days of stroke at admission, and the test was repeated at 3 months and 1 year at the outpatient clinic. In controls, dynamic CA was tested once. In total, 9 of the 80 patients were excluded after the first dynamic CA test because they had unacceptably low VLF coherence (<0.34 for a 5-min recording) 11 between BP and CBFV on the ipsilesional side. We did not have MoCA and dynamic CA results at 3 months for 2 of the 49 patients without PSCI and 1 of the 16 patients with PSCI because our research assistant could not reach the patients by telephone after they were discharged. Contact was established with these three patients when they returned to the outpatient clinic; therefore, they were able to complete the 1-year follow-up. In addition, we did not obtain dynamic CA data at 1 year for 3 of the 16 patients with PSCI because they were unwilling to undergo a dynamic CA test after finishing the MoCA. In total, 65 patients with complete data (namely brain MRI, first dynamic CA result, and MoCA at 1 year) were included in the final analysis (Fig. 1).
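For readers unfamiliar with transfer function analysis, the sketch below illustrates how gain, phase shift, and coherence between BP and CBFV can be estimated in the VLF band with scipy. It is a simplified stand-in, not the CARNet MATLAB implementation cited above; the window length, preprocessing, and synthetic signals are illustrative choices only.

```python
import numpy as np
from scipy.signal import welch, csd, coherence

def tfa_band(bp, cbfv, fs, band=(0.02, 0.07)):
    """Gain, phase shift (degrees), and mean coherence of the BP->CBFV
    transfer function in a frequency band (default: VLF, 0.02-0.07 Hz)."""
    nperseg = int(100 * fs)                            # ~100 s windows (illustrative)
    f, p_bb = welch(bp, fs=fs, nperseg=nperseg)        # BP auto-spectrum
    _, p_bc = csd(bp, cbfv, fs=fs, nperseg=nperseg)    # BP-CBFV cross-spectrum
    _, coh = coherence(bp, cbfv, fs=fs, nperseg=nperseg)
    h = p_bc / p_bb                                    # transfer function estimate
    sel = (f >= band[0]) & (f <= band[1])
    gain = np.mean(np.abs(h[sel]))
    phase_deg = np.degrees(np.mean(np.angle(h[sel])))
    return gain, phase_deg, np.mean(coh[sel])

# Example with 5 minutes of synthetic data sampled at 10 Hz.
fs = 10.0
t = np.arange(0, 300, 1.0 / fs)
bp = 90 + 5 * np.sin(2 * np.pi * 0.05 * t) + np.random.randn(t.size)
cbfv = 50 + 2 * np.sin(2 * np.pi * 0.05 * t + 0.8) + np.random.randn(t.size)
print(tfa_band(bp, cbfv, fs))
```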
Statistical analysis
Data normality was determined using the Shapiro-Wilk test. Normally distributed data are expressed as means ± standard deviations, whereas nonnormally distributed data are expressed as medians with interquartile ranges. Clinical characteristics, neuroimaging characteristics, and dynamic CA indices were compared between the patients with PSCI, the patients without PSCI, and controls by using the Kruskal-Wallis test or chi-square test with post hoc analysis, as applicable. The average value of the bilateral sides of each dynamic CA index in the controls was compared with the ipsilesional side of each dynamic CA index in the patients. The patients' dynamic CA indices and MoCA scores were compared between different visits by using generalized estimating equations. The patients' dynamic CA indices were compared between bilateral sides by using the Wilcoxon signed-rank test. Univariate logistic regression was conducted to determine the odds ratio of developing PSCI related to the clinical characteristics, neuroimaging characteristics, and dynamic CA indices. Receiver operating characteristic analysis with Youden's J statistic was used to test the sensitivity and specificity and determine the optimal cut-off value of dynamic CA indices for identifying patients more likely to develop PSCI. Multivariate logistic regression using automatic forward variable selection was conducted to construct a model including significant variables associated with PSCI (variables were eligible for inclusion in the model if P < 0.10). A P value of <0.05 was considered statistically significant. The data were analyzed using MedCalc Statistical Software v19 (MedCalc Software bvba, Ostend, Belgium) and PASW Statistics v18 (SPSS Inc., Chicago, IL, USA).
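The cut-off determination described above (receiver operating characteristic analysis with Youden's J statistic) can be sketched as follows. The arrays are placeholders rather than study data, and the study itself used MedCalc/PASW rather than Python; the sketch only illustrates the procedure.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Placeholder data: 1 = developed PSCI, 0 = did not; the predictor is the
# ipsilesional VLF phase shift (degrees) within 7 days of stroke.
psci = np.array([1, 1, 0, 0, 1, 0, 0, 0, 1, 0])
vlf_phase = np.array([30, 42, 55, 61, 38, 70, 52, 66, 44, 58])

# A low phase shift indicates impaired CA, so score by the negative phase.
fpr, tpr, thresholds = roc_curve(psci, -vlf_phase)
youden_j = tpr - fpr
best = np.argmax(youden_j)
cutoff = -thresholds[best]          # back to degrees
print(f"AUC = {roc_auc_score(psci, -vlf_phase):.2f}, "
      f"optimal cut-off: phase shift <= {cutoff:.0f} deg")
```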
Results
The clinical characteristics of the patients with PSCI (n = 16), the patients without PSCI (n = 49), and controls (n = 35) were summarized in Table 1. Age and sex were not different between the patients (n = 86) and controls. Most patients had mild stroke severity (median NIHSS score = 4) and small vessel disease, which was consistent with our study inclusion and exclusion criteria. The patients with PSCI had more advanced age, lower education level, and a higher lobar microbleed burden than did those without PSCI. The prevalence of vascular risk factors (hypertension, diabetes mellitus, hyperlipidemia, and stenosis of internal carotid or middle cerebral artery), stroke severity (baseline NIHSS score and DWI lesion volume), stroke etiologies, WMH severity (Fazekas scale score), and the burden of deep or infratentorial microbleeds were not different between the patients with and without PSCI. In both of these groups, the NIHSS scores at 1 year were significantly lower than those obtained within 7 days, suggesting that the neurological deficits improved regardless of the onset of PSCI.
In the patients with PSCI, all MoCA subdomain scores including visuospatial and executive function, naming, attention, language, abstraction, delayed recall, orientation, and total scores were significantly lower than those in the patients without PSCI at both 3 months and 1 year ( Table 2). In the patients with PSCI, all MoCA subdomain scores were not significantly different between 3 months and 1 year, although deterioration in visuospatial and executive function, attention, and delayed recall was observed between 3 months and 1 year. The patients without PSCI had a significantly higher score for abstraction and total score at 1 year compared with those at 3 months; the remaining MoCA subdomain scores were not different between these two time points. In the patients with PSCI, the prevalence of progressive cognitive decline was significantly higher than that in the patients without PSCI (40% vs. 6%, P = 0.001). In the patients with progressive cognitive decline, delayed recall was the only cognitive domain in which they deteriorated between 3 months and 1 year (median score decreased from 3 to 0; Table 3).
A dynamic CA comparison was performed of the patients with PSCI, the patients without PSCI, and controls (Fig. 2). The VLF phase shift of the patients with PSCI was significantly lower than that of the patients without PSCI and controls within 7 days and at 1 year. No difference in VLF phase shift was observed between the patients without PSCI and controls nor any difference in the LF phase shift between the three groups ( Fig. 2A). The gain was also not significantly different between the three groups for both the VLF and LF bands (Fig. 2B).
The results of univariate logistic regression of clinical and neuroimaging characteristics in relation to PSCI were summarized in Table 4. Education level and presence of lobar microbleeds were significant predictors of PSCI, whereas age and Fazekas scale score were predictors with borderline significance. The other clinical and neuroimaging characteristics including sex, hypertension, diabetes mellitus, hemoglobin A1c level, hyperlipidemia, NIHSS score within 7 days, DWI lesion volume and side, and presence of deep or infratentorial microbleeds were not predictors of PSCI. The results of univariate logistic regression for hemodynamics and CA were summarized in Table 5. Mean BP and mean CBFV (5-min average of BP and CBFV waveform, respectively) at all visits were not predictors of PSCI. Ipsilesional VLF phase shift within 7 days and bilateral VLF phase shift at 1 year were significant predictors of PSCI, whereas contralesional VLF phase shift within 7 days was a predictor of PSCI with borderline significance. VLF phase shift at 3 months and VLF gain at all visits were not predictors of PSCI. In addition, LF phase shift and gain were not predictors of PSCI at any visit. The optimal value of the ipsilesional VLF phase shift within 7 days in predicting PSCI was ≤ 46°; therefore, we defined VLF phase shift ≤ 46° as "impaired CA" and used this criterion to evaluate CA during subsequent visits as well as on the contralesional side. Impaired CA on either the ipsilesional or contralesional side within 7 days was a predictor of PSCI and so was impaired CA on the contralesional side at 1 year, whereas impaired CA at 3 months was not a predictor of PSCI.
The results of multivariate logistic regression are summarized in Table 6. We entered the clinical and neuroimaging characteristics as well as the ipsilesional VLF phase shift within 7 days into the regression model. By using the automatic forward variable selection method, education level, presence of lobar microbleeds, and VLF phase shift were selected into the model, implying that they were independent predictors of PSCI. When we entered "impaired CA within 7 days (ipsilesional VLF phase shift ≤ 46°)" into the regression model instead of "VLF phase shift within 7 days," education level, presence of lobar microbleeds, and impaired CA remained independent predictors of PSCI. Comparisons of systemic and cerebral hemodynamic parameters between visits are presented in Table S1. Phase and gain were not significantly different between visits for either the VLF or LF band in the patients with and without PSCI. Mean CBFV, phase shift, and gain were not significantly different between the bilateral sides at the same visit in the patients with and without PSCI. Moreover, mean BP and mean CBFV were not significantly different between visits in both the patients with and without PSCI, nor were they significantly different between these patients at the same visit.
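A forward-selection step similar in spirit to the one described above can be sketched as follows using scikit-learn. This is not necessarily the exact selection criterion used in the study, which likely relied on p-value-based stepwise selection in a statistics package; the feature names and number of retained predictors are hypothetical.

from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

def forward_selected_predictors(X, y, n_features=3):
    # X: candidate predictors (e.g., age, education, lobar microbleeds, VLF phase shift)
    # y: PSCI indicator; returns a boolean mask of the selected columns.
    selector = SequentialFeatureSelector(
        LogisticRegression(max_iter=1000),
        n_features_to_select=n_features,
        direction="forward",
        cv=5,
    )
    selector.fit(X, y)
    return selector.get_support()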
Impaired CA was not associated with the severity of initial neurological deficit (NIHSS score), severity of WMHs (Fazekas scale score), or presence of microbleeds (Table S2). In addition, progressive cognitive decline was associated with impaired CA but not with clinical characteristics, the severity of WMHs, or the presence of microbleeds (Table S3).
Discussion
In the current study, we followed up 65 patients with ischemic stroke for 1 year to investigate the risk factors of PSCI. Low education level, presence of lobar microbleeds, and impaired CA were identified as independent risk factors of PSCI. Patients with PSCI exhibited poorer results in all cognitive domains compared with the patients without PSCI. Although the patients with PSCI had higher incidence of progressive cognitive decline than did those without PSCI, more than half of the patients with PSCI exhibited stability or improvement in cognitive function. This finding suggests that the pathogenesis of PSCI involves poor cognitive function before or after stroke in most patients and progressive cognitive decline in some patients. Impaired CA was a risk factor not only of PSCI but also of progressive cognitive decline. In addition, impaired CA was not associated with the severity of initial neurological deficit, WMHs, or the presence of lobar microbleeds. Therefore, impaired CA is an important risk factor of PSCI and is likely not the consequence of a preexisting brain lesion or acute minor stroke.
In related studies, old age, low education level, diabetes, and atrial fibrillation have been found to increase the risk of PSCI. 18,19 However, these factors also increase the risk of prestroke cognitive impairment 18 and have been associated with Alzheimer disease (AD). 20,21 Therefore, cognitive impairment and stroke have the same risk factors, and patients with PSCI might actually have had subclinical prestroke cognitive impairment despite reporting normal premorbid cognitive function. Nevertheless, the occurrence of a stroke accelerates cognitive decline, and the effect is stronger in patients who are older and have experienced cardioembolic stroke. 19 In the current study, although education level was lower in the patients with PSCI than in the patients without PSCI, education level was not different between the patients with and without progressive cognitive decline (Table S3). Therefore, low education level may reflect poor baseline cognitive function but is not a risk factor of progressive cognitive decline. Neuroimaging characteristics including strategic stroke, stroke lesion volume, total brain tissue volume, temporal lobe atrophy, WMHs, and microbleeds were reported to be associated with PSCI. 22 In the current study, stroke lesion volume and WMHs were not associated with PSCI, and lobar microbleeds, but not deep or infratentorial types, were associated with PSCI. This discrepancy might be explained by the patient characteristics in the current study. The patients had a uniformly low burden of pathological neuroimaging characteristics, including small infarction, mild WMHs, and few microbleeds (Table 1). Therefore, the neuroimaging characteristics may not have affected cognitive function to a great extent in the current study.
Nevertheless, the presence of lobar microbleeds, even those that were not severe, was associated with PSCI in the current study. This could be because lobar microbleeds indicate the presence of subclinical neurodegenerative pathomechanisms such as AD-related cerebral amyloid angiopathy. 23,24 The prevalence of cerebral amyloid pathology, detected using positron emission tomography (PET), was approximately 10% in patients with ischemic stroke 25 and approximately 20% in patients with PSCI. 26 The prevalence of a positive amyloid PET result in patients with PSCI was not higher than that in patients without PSCI. 25 Therefore, both preexisting amyloid pathology and other factors contribute to the development of PSCI. A novel finding that emerged from the current study is that impaired CA is associated with PSCI. Impaired CA results in unstable CBF and secondary injury after a stroke, 27 and impaired CA is associated with reduced functional connectivity in cognition-related networks. 28 In related studies, impaired CA was associated with large lesion volume and elevated BP, but it also independently predicted poor functional recovery from stroke. 1,3,5 In the current study, the VLF, but not the LF, phase shift was associated with PSCI. The clinical or physiological significance of different frequency bands is not yet well understood. Nevertheless, the literature suggests that neural network activity is associated with VLF spontaneous fluctuation, whereas vasomotion and sympathetic activity are associated with LF spontaneous fluctuation. 29,30 VLF, but not LF, phase shift has been reported to be associated with the functional outcomes of ischemic stroke and traumatic brain injury. 1,31 Therefore, VLF phase shift may reflect the functioning of the neurovascular unit, and impaired CA may be detrimental to poststroke cognitive function recovery through its negative influences on cognition-related networks. Most patients enrolled in the current study did not have significant cerebrovascular stenosis; nevertheless, impaired CA has been reported in patients with cerebral small vessel disease. 32,33 The possible mechanism behind this phenomenon is neurovascular unit dysfunction in small vessel disease, 34 which is consistent with the finding that VLF phase shift is the optimal CA index for predicting poststroke cognitive impairment. Patients with AD were found to have normal CA, 35 which is consistent with the lack of association between CA and lobar microbleeds observed in the current study. Therefore, CA is likely independent of AD-related pathology.
No dynamic CA indices changed significantly between visits in the current study, regardless of the onset of PSCI and the improvement in NIHSS score. In a 6-month longitudinal study, patients with lacunar infarction exhibited sustained impaired CA, 32 which is consistent with the results of the current study. This finding suggests that regular healthcare was unable to alter CA in patients with minor stroke despite some neurological improvement. Some therapeutic interventions, including antihypertensive medicines and remote ischemic preconditioning, can alter CA 36-38 and thus may represent a novel strategy to prevent the onset of PSCI.
The current study had some limitations. First, although PSCI refers to cognitive impairment after a stroke, subclinical cognitive impairment might have been present before the stroke; however, all participants of the current study reported normal premorbid cognitive function. Second, patients with atrial fibrillation were excluded, and these patients might account for 10 to 15% of all patients with ischemic stroke. 39,40 Third, most patients had mild disease. Fourth, this study included relatively few participants from a single hospital, which may have affected the generalizability of the findings. Fifth, 9 of the 80 patients were excluded at the first visit because of unacceptably low VLF coherence of BP and CBFV in the TFA algorithm. A longer dynamic CA recording time may reduce the incidence of this problem.
In conclusion, the occurrence of PSCI is influenced by low education level, the presence of lobar microbleeds, and impaired CA. In addition, impaired CA is a risk factor of progressive cognitive decline. Treating impaired CA may be a novel strategy for preventing PSCI.
Supporting Information
Additional supporting information may be found online in the Supporting Information section at the end of the article. Table S1. Comparison of systemic and cerebral hemodynamic parameters between different visits. Table S2. Clinical and neuroimaging characteristics of the patients with and without impaired cerebral autoregulation (ipsilesional VLF phase shift ≤46°) within 7 days. Table S3. Clinical, neuroimaging, cerebral autoregulation characteristics of the patients with and without progressive cognitive decline. | 5,408.2 | 2020-05-28T00:00:00.000 | [
"Psychology",
"Biology"
] |
The Long-Run Determinants of Indian Government Bond Yields
This paper investigates the long-term determinants of the nominal yields of Indian government bonds (IGBs). It examines whether John Maynard Keynes’ supposition that the short-term interest rate is the key driver of the long-term government bond yield holds over the long run, after controlling for key economic factors. It also appraises if the government fiscal variable has an adverse effect on government bond yields over the long run. The models estimated in this paper show that in India the short-term interest rate is the key driver of the long-term government bond yield over the long run. However, the government debt ratio does not have any discernible adverse effect on IGB yields over the long run. These findings will help policy makers to (i) use information on the current trend of the short-term interest rate and other key macro variables to form their long-term outlook about IGB yields, and (ii) understand the policy implications of the government's fiscal stance.
This paper examines whether Keynes' supposition that the short-term interest rate is the key driver of the long-term government bond yield holds in India over the long run after controlling for various key economic factors, such as inflationary pressure and measures of economic activity. It also appraises if government fiscal variables, such as the ratio of government debt to nominal gross domestic product (GDP), have an adverse long-run effect on government bond yields in India. Akram and Das (2015a and 2015b) report that Keynes' conjectures hold in India for the short-run horizon. They also find that government fiscal variables do not appear to exert upward pressure on Indian government bond (IGB) yields. However, they do not examine if these results hold over a long-run horizon. This paper fills that critical lacuna.
Understanding the determinants of government bond yields in India over the long-run horizon is important not just for scholarly reasons but also for policy purposes and policy modeling, particularly for discerning the effects of fiscal and monetary policy on IGB yields. Understanding the drivers of government bond yields in emerging markets such as India has crucial implications for the government's fiscal and macroeconomic policy mix. It is also relevant for fixed income investment and portfolio allocation, as well as the management of government debt.
India's institutional features, its economic rise, and the evolution of its financial system make it worthwhile to examine the long-run trends in its government bond market. First, India's financial markets are in the development stage. While India has liberalized its economy and many aspects of its financial system, there are still various restrictions. Its bond market is not as deep as those of advanced capitalist economies such as Japan, the United Kingdom, and the United States (US). The country's banking system is dominated by state-owned or state-controlled financial institutions, and its fixed income investors in the local currency bond market are largely confined to investing in government securities since the depth and liquidity of corporate bonds and other fixed income securities are limited. It is, hence, appropriate to inquire whether Keynes' supposition regarding the link between the short-term interest rate and the long-term interest rate holds in the institutional and structural circumstances of emerging market economies such as India. Second, whether the central bank's setting of the policy rate(s) and other monetary policy actions influence the long-term interest rate over the long run in India has meaningful policy implications for monetary transmission mechanisms. If the evidence suggests that the central bank can decisively affect the long-term interest rate, not just in the short run but also over the long run, this would show that the Government of India has considerable policy space. If no such relationship can be established, then this would mean that its policy space is rather restrictive and narrow. Hence, it is important to examine what conjectures are empirically warranted in India and other emerging markets.
The paper is organized as follows. Section II sets the foundation for the empirical investigation. First, it discusses Keynes' view on interest rates and provides the theoretical framework. Second, it summarizes Keynes' stance on the loanable funds theory and explains why he rejects this theory. Third, it presents a simple two-period model of government bond yields. Fourth, it recounts the stylized facts about government bond yields and government debt ratios. Fifth, it briefly reviews the relevant literature on government bond yields in emerging market economies. Section III describes the data, the behavioral equations to be estimated, and the econometric methodology applied here. Section IV reports the empirical findings. Section V analyzes the policy implications of the results and concludes. Appendix 1 presents the details of the simple two-period model of government bond yields used in the paper. Appendix 2 presents additional regressions to examine the effects of credit growth, global investors' risk appetite, and the nominal effective exchange rate on government bond yields.
Keynes rejected the loanable funds theory of interest rates. According to the proponents of this theory, the interest rate is primarily determined by the demand and supply of loanable funds. The loanable funds theory has a distinguished pedigree. It is endorsed in classical economics, for example by Cassel (1903), Böhm-Bawerk (1959), Hayek (1933 and 1935), Marshall (1890), Pigou (1927), Ricardo (1817), von Mises (1953), and Wicksell (1962 [1936]). Keynes rejects the loanable funds theory because he believes it is insufficient to determine interest rates solely on the basis of knowledge of the demand for investment and the supply of savings. He criticizes the loanable funds theory for neglecting the roles of national income, the marginal propensity to consume, and liquidity preference in the determination of interest rates. In his view, the "rate of interest is the reward for parting with liquidity for a specified time" (Keynes 2007 [1936], p. 167). It follows that the interest rate is "a measure of the unwillingness of those who possess money to part with their liquid control over it." Liquidity preference is quite central to Keynes' view on the interest rate. Liquidity preference arises from fundamental uncertainty about future economic and financial conditions, and the divergence among investors about their outlook for the future. Interest rates have institutional and behavioral foundations. Hence, for Keynes, institutions like the central bank and investors' psychology and social orientation, as manifested in herding and the formation of long-term expectations, play decisive roles in the determination of the interest rate, rather than just the demand and supply of loanable funds. The demand and the supply of loanable funds are outcomes of income, the propensity to consume, and liquidity preference, which occur within a context that consists of institutions, such as the central bank, and amid investors' psychology that is guided by animal spirits, instincts, and social conventions.
C. A Simple Two-Period Model of Government Bond Yields
A simple model, based on the interpretations of Keynes' views in Akram and Das (2014) and Akram and Li (2016), is presented here to show the connection between the current short-term interest rate and the long-term interest rate.
To simplify the exposition, a two-period horizon is used. There are two periods: t = 1, 2. The long-term interest rate on a government bond in period 1 is r_LT; the short-term interest rates on a Treasury bill in period 1 and period 2 are, respectively, r_1 and r_2; the expected short-term interest rate in period 2 is Er_2; the 1-year, 1-year forward rate is f_{1,1}; the term premium is z; the current rate of inflation in period 1 is π_1; the actual rate of inflation in period 2 is π_2; the expected rate of inflation in period 2 is Eπ_2; the current growth rate in period 1 is g_1; the actual growth rate in period 2 is g_2; the expected growth rate in period 2 is Eg_2; the government fiscal variable in period 1 is ν_1; the government fiscal variable in period 2 is ν_2; and the expected government fiscal variable in period 2 is Eν_2.
It can be shown that the long-term interest rate is a function of either (i) the short-term interest rates in period 1 and period 2, and the growth rate and the rate of inflation in period 2; or (ii) the short-term interest rates in period 1 and period 2, and the growth rate, the rate of inflation, and the government fiscal variable in period 2. Hence, the models of the determinants of the long-term bond yields take the following forms: r_LT = f_1(r_1, r_2, g_2, π_2) (1) and r_LT = f_2(r_1, r_2, g_2, π_2, ν_2) (2). A detailed derivation of these models is presented in Appendix 1.
It is appropriate to incorporate the government fiscal variable in the model of the long-term interest rate for several reasons. First, government fiscal variables affect the long-term interest rate in the standard IS-LM Keynesian models. Second, it is also included in the standard theoretical and empirical literature, including Ardagna, Caselli, and Lane (2007); Baldacci and Kumar (2010); and other studies cited in section II.A. Third, since the paper assesses whether Keynes' conjecture regarding the importance of the short-term interest rate in driving the long-term interest rate is more warranted than that of the conventional view, it is necessary to empirically estimate the effect of government fiscal variables on the long-term interest rate. Ruling out, a priori, the role of the government fiscal variable on the long-term interest rate would be arbitrary and could be regarded as an ad hoc and unjustified maneuver. Undoubtedly, the empirical findings of this and other studies that find support for the Keynesian perspective can influence the choice of variables in the construction of models of the long-term interest rate in the future.
D. Institutional Background
Akram and Das (2015a and 2015b) provide the institutional background to the monetary policy framework, the government bond market, and monetary-fiscal coordination in India. Yanamandra (2014) gives additional perspective on monetary policy making in India in light of economic reforms, modernization, and recent developments, while Chakraborty (2016) provides a detailed description and analysis of the country's monetary-fiscal policy mix and monetary-fiscal coordination. Jácome et al.'s (2012) survey of global practices among central banks in extending credit and coordinating with the national Treasury includes a description of Indian laws, regulations, and practices related to its Treasury and central bank.
India enjoys monetary sovereignty as defined by Wray (2012). The Government of India issues its own currency, the rupee. The country's central bank, the Reserve Bank of India (RBI), sets the policy rates and can use a wide range of monetary policy tools. The RBI enjoys a wide range of authority and control over the country's financial system. The Government of India has the legal and political authority to collect taxes from households, businesses, financial institutions, and other organizations. The country's sovereign debt is predominantly issued in its own currency, the rupee. The multifaceted roles played by the RBI in the payment system, monetary policy, financial stability policy, and policy coordination with the Treasury give it the operational ability to influence government bonds' nominal yields by setting and changing the short-term interest rate and using other tools of monetary policy as it deems appropriate. RBI (2014) provides a detailed institutional description of the IGB market, while the RBI's Annual Reports (various years) give useful summaries of the central bank's monetary policy and background. The 2009 report presents a valuable perspective on the operational aspects of monetary-fiscal coordination in India.
E. Stylized Facts
A set of figures is presented in this section to highlight important stylized facts related to IGBs and government finance. Figure 1 compares the evolution of 10-year government bond yields in India with that of other major emerging markets, such as Brazil, Mexico, the People's Republic of China, the Russian Federation, and South Africa. It shows that since the global financial crisis, government bond yields in India have been generally higher than in the People's Republic of China and Mexico, but lower than in Brazil. Government bond yields in the Russian Federation and South Africa have been more volatile than those in India. In recent years, as commodity prices tumbled, financial flows to emerging markets weakened, and central bank policy rates increased, government bond yields in the Russian Federation and South Africa rose. Figure 2 shows the evolution of key government fiscal variables in India, such as the (i) ratio of gross government debt to nominal GDP, (ii) ratio of government fiscal balance to GDP, and (iii) 10-year government bond yield. It shows that the government debt-to-GDP ratio rose from 70% to nearly 85% in the early 2000s, but subsequently declined to around 70% as the country's annual fiscal balance improved from a deficit of around 11% of GDP in the early 2000s to a deficit of just 4% of GDP in the 2010s. Since the beginning of the 2010s, India's government debt ratio has been stable at around 70%, while its fiscal deficit has hovered around 7% of GDP. The figure also suggests that, prima facie, the evolution of government bond yields in India is not directly affected by government fiscal conditions. Figure 3 shows the evolution of the sector balances as a share of nominal GDP in India. It uses annual flow data to display (i) the government balance, (ii) the private sector balance, and (iii) the current account balance. It visually shows that the flow of government dissaving is equal to private sector saving plus the rest of the world's saving in Indian rupees. Figure 4 displays the changing relationship between the credit default swap (CDS) premium on IGBs and the spread between the nominal yields of 10-year IGBs and 10-year US Treasury notes since 2010. It shows that the correlation can change drastically. Between 2010 and 2013, the CDS premium and the yield spread were tightly correlated. However, since 2014, the correlation between the CDS premium and the yield spread has been quite weak.
F. A Brief Review of the Literature on Government Bond Yields
There is a substantial literature on government bond yields, including on the determinants of government bond yields in emerging markets such as India. Nevertheless, the debate on the determinants of bond yields and the relative importance of the key drivers is still unsettled.
We examine the findings of recent studies on government bond yields to ascertain how relevant these are to the question that this paper addresses. Andritzky (2012) provides a useful database on the investor base for government securities and investigates the effect of the composition of the investor base on government bond yields. Even though the study relies on G20 advanced economies and the eurozone, a key finding appears to be relevant for emerging markets. An increase in the share of bonds held by institutional investors or nonresidents by 10 percentage points is correlated with a decline in bond yields by about 25-40 basis points (bps). Asonuma, Bakhache, and Hesse (2015) find that an increase in domestic bank holdings of government bonds reduces bond yields and provides fiscal space for the sovereign authorities. Ebeke and Lu (2014) argue that the rise in foreign holdings of local currency government bonds in emerging markets has led to a decline in bond yields but a rise in their volatility, particularly since the global financial crisis. Acharya and Steffen (2015) provide an insightful analysis of the cause of the divergence of bond yields between the core of the eurozone and its periphery. They also discuss the vital role played by the "carry trade" of eurozone banks in causing the widening of the spread. The results of Ardagna, Caselli, and Lane (2007) are in line with the conventional wisdom cited earlier in the introduction. They claim that an increase of 1 percentage point in the ratio of the primary deficit leads to (i) an increase in the current long-term interest rate by 10 bps and (ii) cumulative increases in the long-term interest rate by 150 bps after 10 years. These and other results in the conventional literature on government bond yields are interesting. However, the conventional literature does not probe sufficiently the key role of the central bank in influencing government bond yields in emerging markets. Hence, a Keynesian perspective may provide a more insightful analysis of the decisive factors and may be more pertinent for understanding government bond yields in India. This view is reinforced by the empirical literature on IGBs, which largely refutes the conventional view that higher (lower) government debt or government deficits induce higher (lower) government bond yields. Chakraborty's (2016) detailed and careful institutional and empirical study finds that there is no evidence of any link between fiscal deficit and interest rates in India. Vinod, Chakraborty, and Karun (2014) use the maximum entropy bootstrap method and report that the government fiscal deficit ratio is not significant for interest rate determination in India. Chakraborty (2012), applying asymmetrical vector autoregressive models, finds that an increase in the fiscal deficit ratio does not lead to a rise in interest rates. Akram and Das (2015a and 2015b) show that changes in the short-term interest rate, after controlling for other crucial variables such as changes in the rates of inflation and economic activity, take a lead role in driving the changes of the nominal yields of IGBs. Additional results show that higher fiscal deficits do not appear to exert upward pressures on government bond yields. Findings from Akram and Das (2015a and 2015b) are, however, valid solely for the short run. One of the important goals of the current paper is to examine if the findings from Akram and Das (2015a and 2015b) hold over the long-run horizon.
The next section introduces the behavioral equations, time series data, and econometric methods used to examine the role of the short-term interest rate, the rate of inflation, the government fiscal variable, and other key macroeconomic variables in determining the nominal yields on IGBs over the long-run horizon.
A. Data 1
For the purpose of econometric estimations, time series data on the nominal yields of long-term IGBs, the short-term interest rate, the rate of inflation, the growth of industrial production, and government fiscal variables are used.
Nominal yields on Indian Treasury bills with 3-month maturities are used for the short-term interest rate, while the nominal yields on IGBs of various tenors (2-year, 3-year, 5-year, 7-year, and 10-year maturities) are used to represent long-term government bond yields. The RBI (2014) classifies government securities with a maturity of less than 1 year as short-term securities, and those with a maturity of 1 year or more as long-term securities. Figure 5 shows the evolution of nominal yields of IGBs. Figure 6 shows the evolution of the short-term interest rate along with the RBI's policy rates (repo rates and reverse repo rates). The rate of inflation is defined as the year-on-year percentage change in the total consumer price index for all items. Growth in industrial production is the year-on-year percentage change in the index of industrial activity in India. The ratio of government debt to nominal GDP is used here as the government fiscal variable. The ratio of private sector credit (from all sectors) to nominal GDP is used to measure credit growth. The Institute for International Monetary Affairs' index of volatility in global bond markets is a proxy for global investors' risk appetite. An increase (decrease) in volatility in global bond markets means that investors' perception of risk has risen (declined). The nominal effective exchange rate, calculated by the Bank for International Settlements, is the exchange rate used here. The data for all the variables are collected from Macrobond's (various years) data services. Table 1 summarizes the variables used. Both monthly and quarterly data are used to examine the determinants of nominal yields of long-term government bonds. Indian government fiscal data are available only at a quarterly frequency. Hence, the debt-to-GDP ratio is included only in the quarterly equations.
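A small Python sketch of how the year-on-year transformations described above could be constructed from monthly level series is given below; the column names are hypothetical and the source data layout is assumed rather than taken from Macrobond's actual schema.

import pandas as pd

def build_monthly_variables(df):
    # df: monthly DataFrame with hypothetical level columns "CPI" and "IPI",
    # and yield columns already expressed in percent (e.g., "TB3M", "IGB10YR").
    out = pd.DataFrame(index=df.index)
    out["TCPIYOY"] = df["CPI"].pct_change(12) * 100   # year-on-year CPI inflation
    out["IPIYOY"] = df["IPI"].pct_change(12) * 100    # year-on-year IP growth
    out["TB3M"] = df["TB3M"]                          # 3-month Treasury bill yield
    out["IGB10YR"] = df["IGB10YR"]                    # 10-year IGB yield
    return out.dropna()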
B. Behavioral Equations
A set of behavioral equations for monthly data and for quarterly data is constructed in concordance with the model based on the Keynesian framework presented earlier. These behavioral equations readily lend themselves to empirical testing. The specific-to-general approach is deployed here. For the monthly dataset, the long-term government bond yields are first regressed individually on the short-term interest rate, inflation, and the growth rate of industrial production. The dependent variables are then regressed on the short-term interest rate and inflation, and on the short-term interest rate and the growth rate. In the general form of the behavioral equation, the long-term interest rate is determined by all three explanatory variables, namely the short-term interest rate, the rate of inflation, and the growth rate, so the general equation takes the form r_LT = f(r_ST, π, g). The same approach is used when the quarterly dataset is employed to examine the determinants of long-term bond yields in India. However, to understand the effects of the government fiscal variable on government bond yields, the ratio of government debt to nominal GDP is included in the general equation of the quarterly dataset. Hence, the quarterly behavioral equation takes the form r_LT = f(r_ST, π, g, ν).
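One plausible linear rendering of these general forms, written in LaTeX with the variable names used in the paper's tables, is the following. The coefficients and error terms are illustrative; in the estimation itself each variable enters with the ARDL lag structure described in the next section rather than only contemporaneously.

$IGB_t = \beta_0 + \beta_1 \, TB3M_t + \beta_2 \, TCPIYOY_t + \beta_3 \, IPIYOY_t + \varepsilon_t$  (monthly)

$IGB\_Q_t = \gamma_0 + \gamma_1 \, TB3M\_Q_t + \gamma_2 \, TCPIYOY\_Q_t + \gamma_3 \, IPIYOY\_Q_t + \gamma_4 \, DRATIO\_Q_t + u_t$  (quarterly)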
C. Econometric Methodology
The first step is to examine the nature of the data. The presence of unit roots in most macroeconomic variables is fairly common (Nelson and Plosser 1982). Hence, estimating long-run relationships with standard regression or cointegration techniques (e.g., Johansen cointegration) without first establishing the order of integration of the variables can be inconsistent. Therefore, unit root tests on the variables used in this paper are imperative. Conventional research has used both the Augmented Dickey-Fuller (ADF) (Dickey and Fuller 1979, 1981) and the Phillips-Perron (PP) (Phillips and Perron 1988) tests to identify the existence of unit roots. Elliott, Rothenberg, and Stock (1996) proposed the Dickey-Fuller Generalized Least Square (DFGLS) test, which is a modified version of the standard ADF test. According to the DFGLS procedure, the data are detrended before testing for stationarity. Different versions of the ADF and PP tests (with no constant and trend, constant and no trend, and constant and trend) and of the DFGLS test (with constant but without trend, and constant and trend) are applied in this paper. All of these versions produce similar results. Due to space constraints, only the results with constant but without trend are presented here. All remaining results are available upon request. Unit root results for monthly variables are displayed in Table 2 and the results for quarterly variables are displayed in Table 3. In both tables, ***, **, and * indicate statistical significance at the 1%, 5%, and 10% levels, respectively, and the null hypothesis of all three tests is that the series contains a unit root. For the monthly dataset, most variables are nonstationary at levels and stationary at the first difference. The year-on-year percentage change in the consumer price index is found to be nonstationary at levels and stationary at the first difference by two out of three tests.
The year-on-year percentage change in industrial production (IPIYOY) and the global bond market volatility index are stationary at levels. Thus, most variables are integrated of order one, I(1). All three tests suggest that IPIYOY is stationary at levels; that is, I(0). Similar results are found for the quarterly variables. Government debt as a percentage of GDP is found to be stationary at levels by the PP test, and nonstationary at levels by the ADF and DFGLS tests. Therefore, all quarterly variables are either I(0) or I(1). Given the results from the unit root tests, it is appropriate to estimate the long-run cointegrating relationships using the autoregressive distributed lag (ARDL) approach proposed by Pesaran and Shin (1998) and Pesaran, Shin, and Smith (2001). The ARDL bounds test approach is based on the ordinary least squares estimation of a conditional unrestricted error correction model for cointegration analysis. The ARDL technique is more appealing than the Johansen cointegration technique (Johansen and Juselius 1990) because the latter requires that the variables are all integrated of the same order, I(1). The ARDL approach, by contrast, is not constrained by the outcomes of unit root tests. It is applicable irrespective of whether the regressors in the model are purely I(0), purely I(1), or mutually cointegrated. In the present case, most variables are I(1), with the exceptions of IPIYOY and DRATIO_Q (i.e., government debt as a percentage of nominal GDP), which are I(0). Moreover, the ARDL technique allows different variables to take different optimal numbers of lags, while this is not permitted in the Johansen cointegration approach. Therefore, the ARDL technique, which accommodates both I(0) and I(1) variables, is used in this paper to estimate the long-run relationships between long-term government bond yields and the other control variables.
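The workflow just described (unit root screening followed by ARDL estimation) can be sketched in Python with statsmodels; version 0.13 or later is assumed for the ARDL module, the column names mirror the paper's tables, and the lag limits are illustrative.

import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ardl import ardl_select_order

def unit_root_report(df):
    # ADF test with a constant and AIC-selected lag length for each series.
    for col in df.columns:
        stat, pval, *_ = adfuller(df[col].dropna(), regression="c", autolag="AIC")
        print(f"{col}: ADF statistic = {stat:.2f}, p-value = {pval:.3f}")

def fit_ardl(df, dep="IGB10YR", regressors=("TB3M", "TCPIYOY", "IPIYOY")):
    # Select the ARDL lag orders by AIC and fit the chosen model by OLS.
    sel = ardl_select_order(
        df[dep], maxlag=4, exog=df[list(regressors)], maxorder=4,
        trend="c", ic="aic",
    )
    res = sel.model.fit()
    print(res.summary())
    return res

The bounds test itself and the long-run coefficients can then be obtained from the error correction representation of the fitted ARDL model (statsmodels exposes this through its UECM class), mirroring the Wald-type F-statistics reported in Tables 4-13.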
A. Monthly Results
The ARDL bounds test results generated from monthly variables are presented in Tables 4-8. When the short-term interest rate is included with inflation, in most cases the computed F-statistic based on a Wald test exceeds the upper bound value at the 5% level. In the case of the 2-year government bond yield, the computed F-statistic exceeds the upper bound value at the 10% level when the short-term rate is included in the equation with both inflation and the industrial production index (equation 4.6). The null hypothesis of no cointegration is rejected whenever the F-statistic value is higher than the upper bound value. This analysis confirms the presence of a long-run relationship among long-term government bond yields, the short-term interest rate, the rate of inflation, and the growth of industrial production. It enables the estimation of the long-run coefficients of the short-term interest rate and other control variables. The coefficients of the short-term interest rate are always positive and statistically significant at the 1% level. The size of this coefficient tends to be smaller as the tenor of the government bond rises. These results suggest that in the long run the short-term interest rate strongly influences long-term government bond yields in India.
B. Quarterly Results
Estimated results using quarterly data are presented in Tables 9-13. When the short-term 3-month interest rate is included with inflation and the ratio of government debt to nominal GDP, the computed F-statistic value is mostly higher than the upper bound value. Long-run coefficients of the short-term interest rate are positive when significant. The magnitude of this coefficient lies between 0.13 and 0.53. The coefficient of the ratio of government debt to nominal GDP is mostly negative and significant at the 1% level, suggesting that in the long run a higher debt ratio tends to reduce the nominal yields of IGBs. This is contrary to the conventional wisdom. Quarterly data allow the use of government fiscal variables but a clear limitation is that these results are based on a smaller number of observations.
C. The Main Finding and Its Relevance
The main finding is that the short-term interest rate is a key driver of the long-term interest rate on IGBs in both the short run and the long run. This finding has important policy implications. For example, it suggests that the RBI's monetary policy decisions not only have an immediate effect on the long-term interest rate and the Treasury yield curve, but also on the direction and the level of the long-term interest rate over a longer horizon. The results obtained are robust. Additional regressions estimated in Appendix 2 show that the coefficient of the short-term interest rate is positive and statistically significant, at least at the 5% level, even after controlling for variables such as credit growth, global investors' risk appetite, and the nominal effective exchange rate. Therefore, the main finding that the short-term interest rate is the most important determinant of long-term bond yields does not change with adjustments to the specifications. These results reinforce the findings in Akram and Das' (2015a and 2015b) recent studies on IGBs in which they report that changes in the short-term interest rate are important determinants of changes in long-term government bond yields in India. Whereas Akram and Das (2015a and 2015b) established the results for the short run, the current study extends this for the long run.
V. Policy Implications and Conclusion
The empirical results reported here support Keynes' conjecture that the central bank's actions, through its influence on the short-term interest rate and its use of other tools of monetary policy, shape the long-term interest rate on government bonds.
In the case of India, the actions of the RBI affect the long-term interest rate. The long-term interest rate on IGBs is positively associated with the short-term interest rate on Indian Treasury bills after controlling for the relevant variables, such as the rate of inflation, growth of industrial production, and debt ratio. A higher (lower) long-term interest rate on IGBs is associated with a higher (lower) short-term interest rate, a higher (lower) rate of inflation, and a faster (slower) pace of industrial production. The results show that a higher level of government indebtedness does not have an adverse effect on IGBs' nominal yields, contrary to the conventional view. These findings concur with the results obtained in Akram and Das' (2015a and 2015b) studies of the short-term dynamics of IGBs. The findings also align with those obtained in studies by Chakraborty (2012 and 2016) and Vinod, Chakraborty, and Karun (2014), which use quite different econometric and statistical methods.
The findings reported in this paper have implications for policy debates in India and other emerging markets with monetary sovereignty that issue government debt mostly in their own currencies. The findings are also relevant for ongoing debates over fiscal policy, the sustainability of government debt, monetary policy, monetary-fiscal coordination and the policy mix during economic fluctuations, and macroeconomic and monetary theory (Bindseil 2004, Fullwiler 2008, Kregel 2011, Sims 2013a and 2013b, Tcherneva 2011, Woodford 2001, and Wray 2003 and 2012). First, the results show that the RBI can exert a strong influence on IGB yields by affecting the short-term interest rates. The RBI can affect the short-term interest rates on Indian Treasury bills by setting the repo rate and the reverse repo rate (Figure 6). These findings support Keynes' conjecture about the influence of a sovereign central bank on long-term interest rates. Second, the results also suggest that, contrary to the conventional wisdom, higher government indebtedness does not raise IGBs' nominal yields. While this finding may appear counterintuitive, it is consistent with the Keynesian position that the central bank's
actions drive long-term interest rates and that an investor's long-term outlook is mostly shaped by the investor's near-term outlook and assessment of current conditions. This paper shows that Keynes' conjecture has empirical support in India over the long-run horizon, extending Akram and Das' (2015a and 2015b) findings for the short-run horizon. It contributes to the nascent literature on this topic, such as Akram (2014) and Akram and Das (2014a and 2014b) on Japan, Akram and Das (2017b and 2017c) on the eurozone, and Akram and Li (2016, 2017a, and 2017b) on the US, that examines whether Keynes' conjecture holds in various countries. Further research should extend this work to a wider range of countries, both advanced capitalist economies and emerging markets and other developing areas, and apply a broad spectrum of suitable econometric methods to establish whether these findings can be generalized and to determine under which institutional contexts they are warranted.
Appendix 1. Derivation of the Two-Period Model of Government Bond Yields
The long-term interest rate on the 2-year government bond depends on the short-term interest rate on Treasury securities in period 1 and the 1-year, 1-year forward rate (equation A1). The 1-year, 1-year forward rate is based on an investor's expectation of the short-term interest rate on Treasury securities in period 2 and the term premium (equation A2). However, the expected short-term interest rate on Treasury securities in period 2 and the term premium is a function of the investor's expectation of growth and inflation in period 2 (equation A3). Hence, the 1-year, 1-year forward rate is merely the sum of the expected short-term interest rate on the Treasury bill in period 2 and a function of the expected growth rate and expected inflation in the same period (equation A4). This implies that the forward rate is a function of expected short-term interest rates on Treasury securities, the expected growth rate, and expected rate of inflation in period 2 (equation A5). Since the long-term interest rate is a function of the short-term interest rate on the Treasury securities in period 1 and the 1-year, 1-year forward rate (equation A6), it follows that the long-term interest rate is a function of the short-term interest rate in period 1, and a function of the expected short-term interest rate, expected growth rate, and expected rate of inflation in period 2 (equation A7).
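The chain of relationships described in this paragraph can be written compactly in LaTeX as follows; the function symbols and the simple averaging approximation in the first line are illustrative renderings of the prose, not reproductions of the paper's exact equations A1-A7.

$r_{LT} \approx \tfrac{1}{2}\left(r_1 + f_{1,1}\right)$  (A1)

$f_{1,1} = E r_2 + z$  (A2)

$z = \phi\left(E g_2, E\pi_2\right)$  (A3)

$f_{1,1} = E r_2 + \phi\left(E g_2, E\pi_2\right)$  (A4)

$f_{1,1} = \psi\left(E r_2, E g_2, E\pi_2\right)$  (A5)

$r_{LT} = \theta\left(r_1, f_{1,1}\right)$  (A6)

$r_{LT} = F\left(r_1, E r_2, E g_2, E\pi_2\right)$  (A7)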
Keynes' view is that the investor resorts to the present and the past. The investor relies on his view of the near-term future to form his conception of the long-term future since it is not really possible to have a proper mathematical expectation of the unknown and uncertain future. Hence, for the investor, the expected short-term interest rate in period 2 is based on the actual short-term interest rate in period 1 (equation A8), the expected growth rate in period 2 is based on the actual growth rate in period 1 (equation A9), and the expected rate of inflation in period 2 is based on the actual rate of inflation in period 1 (equation A10). Similarly, the expected government fiscal variable in period 2 is based on the government fiscal variable in period 1 (equation A11). These Keynesian assumptions result in a model (equation A12) in which the long-term interest rate is a function of either (i) the current short-term interest rate, the current growth rate, and current inflation (equation A13); or (ii) the current short-term interest rate, the current growth rate, current inflation, and the current government fiscal variable (equation A14).
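In LaTeX, this Keynesian closure can be summarized as follows; the function symbols are again illustrative.

$E r_2 = \phi_r(r_1), \quad E g_2 = \phi_g(g_1), \quad E\pi_2 = \phi_\pi(\pi_1), \quad E\nu_2 = \phi_\nu(\nu_1)$  (A8-A11)

$r_{LT} = F\left(r_1, g_1, \pi_1\right)$  (A13)

$r_{LT} = G\left(r_1, g_1, \pi_1, \nu_1\right)$  (A14)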
The Keynesian view that an investor's expectation of key economic variables depends largely on current conditions or the investor's assessment of current conditions may appear intriguing and counterintuitive. But if key economic variables follow a Markov process (equations A15, A16, A17, and A18), then the Keynesian view of the trajectory of the expected values of these variables is entirely reasonable. Empirical and behavioral studies of investors' expectations of the interest rate and the rate of inflation lend support to Keynes' view. If the variables in period 2 follow a simple Markov process, each of them can be modeled as a first-order autoregression in which its period-2 value depends on its period-1 value plus a disturbance (equations A15-A18). In these equations, the autoregressive coefficients (the parameters with subscripts 2, 4, 6, and 8) are each restricted to lie strictly between 0 and 1. It is useful to contrast the Keynesian model with a Lucasian (rational expectations) model. Under rational expectations, the expected values of the period-2 variables coincide with the values implied by the model (equations A19-A22). Under Lucasian assumptions, the long-term rates are modeled, respectively, without and with the government fiscal variable, as follows: r_LT = F_7(r_1, r_2, g_2, π_2) (A23) and r_LT = F_8(r_1, r_2, g_2, π_2, ν_2) (A24).

Appendix 2. Additional Regressions

This appendix presents additional regressions to examine the effects of credit growth, global investors' risk appetite, and the nominal effective exchange rate on the nominal yields of IGBs. Increased (decreased) perception of risk, as measured by higher (lower) volatility in global bond markets, should lead to higher (lower) government bond yields in India. The appendix examines whether any of these variables have a discernible influence on government bond yields as posited.
The hypothesis that credit growth, global investors' risk appetite, and the exchange rate matter is supported in some of the findings reported in the recent empirical literature on the determinants of government bond yields. Arslanalp and Poghosyan (2014) show that an increase in the share of government debt held by foreign investors can explain a reduction in long-term government bond yields. Ebeke and Lu (2014) report that foreign holdings of local currency government bonds in emerging markets exert downward pressure on government bond yields, though they note that an increase in such holdings is associated with somewhat increased yield volatility in the post-Lehman period. Other researchers have explored the effects of overall credit growth and the exchange rate on government bond yields in emerging markets.
The evolution of some of these additional variables for India is shown in the figures below. Figure A2.1 shows that the ratio of overall credit to nominal gross domestic product steadily increased for many years before stabilizing in recent years. Figure A2.2 depicts the evolution of volatility in global bond markets. Volatility in government bond markets rose sharply during both the global financial crisis and the eurozone debt crisis. Such volatility is a good proxy for global investors' risk appetite. Figure A2.3 displays the evolution of the nominal effective exchange rate for the Indian rupee. The Indian rupee depreciated steadily versus the United States dollar between 2000 and 2014. Since 2014, it has appreciated modestly and has been fairly stable.
After controlling for the short-term interest rate, rate of inflation, growth of industrial production, and debt ratio, the effects of credit growth, global risk appetite, and the nominal effective exchange rate on the nominal yields of Indian government bonds (IGBs) of various tenors are examined using monthly data. Autoregressive distributed lag bounds test results are obtained. When the computed F-statistic value is higher than the upper bound value, the long-run relationships are estimated.
The results of the empirical investigation are presented in Tables A2.1-A2.5. (In these tables, CREDIT = credit to the private sector as a percentage of GDP; IGB2YR-IGB10YR = government bond yields of the corresponding tenors; IPIYOY = year-on-year percentage change in industrial production; NEER = nominal effective exchange rate; RISK = global bond market volatility index; TB3M = 3-month government auction rate; TCPIYOY = year-on-year percentage change in the consumer price index; ***, **, and * represent the 1%, 5%, and 10% levels of significance, respectively; standard errors are in parentheses.) An increase in the ratio of credit to nominal GDP leads to slightly higher IGB yields rather than lower yields. The coefficient for the index of the nominal effective exchange rate is positive. This implies that as the Indian rupee appreciates (depreciates), IGB yields rise (fall). The estimated coefficient on risk shows that as risk (as measured by global bond market volatility) rises (falls), IGB yields decline (increase).
The results from the additional regressions estimated in this appendix suggest that the ratio of credit to nominal GDP, the nominal effective exchange rate, and investors' risk appetite (volatility) in global bond markets are not important drivers of IGB yields in India. However, the coefficient on the short-term interest rate is always found to be positive and statistically significant, irrespective of the equations used to estimate the determinants of long-term government bond yields. This particular result is robust and insensitive to any changes in the specification. This result supports Keynes' contention in the case of India. | 9,388.4 | 2017-01-12T00:00:00.000 | [
"Economics"
] |
Prediction of the Normalized COVID-19 Epidemic Prevention Costs of Construction Projects Based on an Optimized Neural Network
During the COVID-19 epidemic, the Chinese central government adopted a dynamic clearing prevention and control strategy. Meanwhile, most local governments issued policies to incorporate normal epidemic prevention costs into the costs of construction projects. However, there are few provisions on how to determine the calculation standards for these costs. To accurately predict the normalized epidemic prevention costs of construction projects from different aspects, the relevant factors that affect epidemic prevention costs are investigated and an optimized neural network prediction method that can effectively eliminate abnormal data with too large a deviation is proposed. The results show that, compared with the traditional backpropagation (BP) neural network and BP neural networks optimized by a genetic algorithm, the optimized neural network achieves a smaller error in predicting the normalized epidemic prevention costs of construction projects (the average error of the traditional BP neural networks is 6%). Meanwhile, among the factors that affect epidemic prevention costs, total investment, project category, and construction scale have the greatest impact. Based on the research results, this paper proposes pricing suggestions and corresponding management solutions for the epidemic prevention costs of construction projects, which will be helpful to project managers.
Introduction
Normalized control in the post-pandemic era is an inevitable trend because the global coronavirus disease 2019 (COVID-19) epidemic situation remains unclear [1]. To date, there are a limited number of antiviral agents or vaccines for the treatment of COVID-19 [2]. To prevent the spread of the COVID-19 epidemic, the governments of different countries implemented a series of strategies suited to their national conditions [3]. In accordance with the overall decision-making and deployment requirements of the Chinese central government, the current general strategy is to "Prevent input from the outside and rebound from the inside," and the general policy is to achieve "Dynamic zero COVID-19." Based on this, regular epidemic prevention and control strategies are carried out. As a result, this policy will inevitably increase the cost of engineering projects [4]. This paper aims to provide a method to predict the cost of epidemic prevention under the guidance of a dynamic zero-clearing policy. For this study, a total of 61 data sets were obtained by investigating the factors that influence projects' regular epidemic prevention costs, based on actual case data and project site investigation data (real engineering project data). Then, prediction results were obtained using neural networks in machine learning to process the relevant data. Some studies use a genetic algorithm (GA) to optimize a backpropagation (BP) neural network since the common BP neural network can easily converge to a locally optimal solution. The effect of this method is good. However, in the prediction of the normal epidemic prevention costs of construction projects, the onsite prevention and control efforts of various projects are not completely consistent. Additionally, epidemic severity, epidemic prevention policy, and the internal and external environment of the project area affect the costs of epidemic prevention. Considering the above reasons, an optimized neural network is used in this study. Compared with the ordinary BP neural network optimized by a genetic algorithm, the proposed optimized method can effectively eliminate data with a large deviation and then obtain more accurate prediction results. Through verification, it is found that the error of the optimized neural network is much smaller than that of the BP and ordinary BP neural networks optimized by other algorithms. In addition, total project investment, project category, and construction scale are the three most sensitive factors that affect epidemic prevention costs. This paper provides a reliable method to predict the costs of epidemic prevention for domestic construction projects under the background of dynamic zero clearance and proposes reasonable and feasible suggestions on cost collection. The paper is organized as follows. Section 2 summarizes the related work. To accurately predict the normalized epidemic prevention costs of construction projects influenced by several aspects, the relevant factors that affect epidemic prevention costs are investigated in Section 3, and an optimized neural network prediction method that can effectively eliminate abnormal data with too large a deviation is proposed in Section 4. The prediction method proposed in this paper is tested through real project cases obtained by investigation in Section 5. Section 6 discusses the applicability and precautions of the proposed method. Finally, Section 7 concludes this paper.
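The core idea of eliminating samples with too large a deviation can be illustrated with a minimal Python sketch using scikit-learn's MLPRegressor as the BP-style network. This is one plausible reading of the elimination step, not the authors' exact algorithm; the relative error threshold, network size, and number of refit rounds are illustrative.

import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_with_outlier_elimination(X, y, rel_err_threshold=0.15, max_rounds=3):
    # Fit the network, drop training samples whose relative prediction error
    # exceeds the threshold, and refit on the retained samples.
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    keep = np.ones(len(y), dtype=bool)
    model = None
    for _ in range(max_rounds):
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                             random_state=0).fit(X[keep], y[keep])
        rel_err = np.abs(model.predict(X) - y) / np.maximum(np.abs(y), 1e-9)
        new_keep = keep & (rel_err <= rel_err_threshold)
        if new_keep.sum() == keep.sum():   # stop when no further samples are removed
            break
        keep = new_keep
    return model, keep

In practice the 61 investigated data sets would be split into training and test portions, the influencing factors (for example total investment, project category, and construction scale) encoded numerically, and the prediction error on held-out cases compared against a plain BP network and a GA-optimized BP network, as the paper does.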
The Impact of COVID-19 on the Construction Industry.
Some studies have investigated the impact of COVID-19 on the construction industry in Ghana, the United States, the United Arab Emirates, and the UK [5][6][7][8]. In terms of the construction industry, many migrant workers live on construction sites, and they usually have the characteristics of high mobility and high intensity [9]. Considering this, they are identified as a typical susceptible population, and if the epidemic spreads among them, the situation will be difficult to control. Epidemic prevention and control is one of the core social responsibilities of construction enterprises under such circumstances [10], and the Occupational Safety and Health Administration (OSHA) provides guidance for construction employers and workers [11].
Thus, regular epidemic prevention and control costs must be considered. Health and safety (H&S) technologies have received increasing attention. The construction industry continues to adapt to the changing COVID-19 landscape, and H&S guidelines have been recommended to minimize the spread of the virus and enable construction sites to return to normal conditions [12]. According to the "Guidelines for the regular prevention and control of COVID-19 in housing construction and municipal infrastructure construction sites" (Quality Letter (2020) No. 489) [13], published by the General Office of the Ministry of Housing and Urban-Rural Development (MOHURD), the cost of epidemic prevention arising from normalized epidemic prevention and control can be included in construction costs. Meanwhile, according to the statistics, a total of 31 provinces, municipalities, and autonomous regions across China have promulgated policies to adjust project pricing during epidemic prevention and control. These policy documents clearly state that epidemic prevention costs can be included in project costs. However, different provinces differ greatly in how these costs are calculated and collected. Project managers often rely on personal judgment or experience to evaluate the cost because the industry has not yet established a recognized calculation method and charge rate.
There are also few studies on COVID-19 quarantine costs in academia. In the existing literature, there is no quantitative study on the collection or prediction of normal epidemic prevention costs for construction projects. In China, the topic was only mentioned in a study by Wang Feng's team in 2000, which explained that the cost of epidemic prevention has a strong relationship with the number of people in epidemic prevention stations [14]. Few international studies have mentioned these issues (which may be related to foreign quarantine measures). However, theoretical research on this aspect is urgently needed to support engineering practices. The study results can guide project managers in measuring early-stage budgets or carrying out process settlement and final settlement.
Main Forecasting Methods and Their Applications.
Soft computing methods have outperformed classical models in the short-term estimation of pandemics [15,16]. Mangoni and Pistilli developed a generalized SEIR model to make predictions on the COVID-19 outbreak using the Italian data [17]. Building on this, neural networks and deep learning are classical methods in the prediction field, and various scientists have tried to make predictions using different methodologies. Different neural network prediction methods or models are widely used in the prediction of COVID-19 events [18][19][20][21][22][23][24]. Deep learning methods have shown promise in healthcare prediction challenges involving electrocardiogram data [25]. An artificial neural network with a rectified linear unit-based technique was implemented to predict the number of deaths, recovered, and confirmed cases of COVID-19 in Pakistan [26]. Wieczorek et al. constructed a neural network model for predicting the COVID-19 outbreak and reported an accuracy of above 99% in some countries [27]. Xu et al. introduced a new method based on a deep learning system to screen coronavirus COVID-19 pneumonia, and they aimed to develop an early examination model to recognize COVID-19 pneumonia from Influenza-A viral pneumonia and healthy conditions using lung section images [28]. Sabir et al. evaluated the mathematical system for the novel COVID-19 dynamics using neuro-swarm heuristic solvers via artificial intelligence algorithms [29], and they presented numerical simulations of the influenza disease nonlinear system (IDNS) using stochastic artificial neural networks (ANNs) supported by Levenberg-Marquardt back propagation (LMB) [30]. Based on the dynamics of COVID-19, they presented a novel design of intelligent solvers with a neuro-swarm heuristic integrated with an interior-point algorithm (IPA) for numerical investigations of the nonlinear fractal system [31]. Zeroual et al. conducted a comparison of some learning methods to predict the numbers of new cases and recovered cases [32].
Identification of Influencing Factors
Epidemic prevention and control fees are specially used for the increased wages of personnel, prevention and control materials, the wages of workers in isolation, commuting vehicles, and other related inputs of temporary facilities, which are similar to other construction costs.
Therefore, according to the pricing base and relevant policies, combined with brainstorming and expert interviews, this paper aims to reveal the factors that affect the costs of normalized COVID-19 epidemic prevention. The composition of regular COVID-19 epidemic prevention costs is closely related to specific projects, and the influencing factors of different projects also vary. However, due to the lack of relevant research data, the following aspects were considered to identify the factors that affect the costs of COVID-19 prevention. These factors include the pricing base of partial costs in project investment estimation, industry management policy, brainstorming within the research group, and rounds of expert consultation.
Pricing Base.
In the entire life cycle of a project, different stages have different precision requirements for the project costs. Investment estimation is generally adopted in the early decision-making stage of a project. Limited by the depth of the scheme design, the calculation of investment estimation mainly adopts the method of a charging base multiplied by a rate and includes survey and design expenses and a construction premium. Tiered pricing is another method that includes a project supervision fee and a bidding agency fee. The base rates of these charges generally include the project cost, construction and installation cost, total land area, and construction building area.
Industry Management Policy.
Among the policies at the national and local levels, many suggestions are given for the collection of regular COVID-19 epidemic prevention expenses. After categorizing these policies, the relevant influencing factors were determined and are presented in Table 1.
Brainstorming and Expert Consultation.
Members of the research group performed several rounds of brainstorming and interviewed front-line engineering management experts. The factors that affect the costs of regular COVID-19 prevention identified by the first two groups were refined and supplemented into the four categories listed as follows: the first category relates to the project itself, such as project type, the content of construction, site area, and construction period. The second category relates to the implementation of a specific subject situation, the registration of qualifications, the number of individuals in the management team, the number of workers, and so on. The third category relates to the pressures of the COVID-19 outbreak, such as local epidemic infection numbers during the construction period and the overall domestic epidemic situation. Under high outbreak pressures, the construction costs will increase because of the inactive labor market; that is, workers are forced to stay at home and cannot go to the construction site [33]. The fourth category relates to management measures (e.g., ensuring a smart construction site situation), regular epidemic prevention and control efforts (e.g., checking body temperature, wearing a face mask, and keeping a safe social distance) [34], and the use of technologies (e.g., information technology solutions, video-conferencing apps, and wearable sensing devices) [35,36].
Many more factors affect the cost of COVID-19 prevention, including national culture [37], governmental efforts and a positive public response [38], and public employment services and labor market policy responses [39][40][41][42]. Combining the findings presented in the above three groups, 12 factors that affect the costs of COVID-19 prevention on two levels were determined ( Table 2).
Construction of an Optimized Neural Network
4.1. Ideas for Optimization. Neural networks, especially BP neural networks, have advantages in prediction and can improve the judgment and prediction accuracy of a model [43]. However, a single BP neural network can easily produce a locally optimal solution during the network training process [42]. To overcome this defect, some scholars have used a genetic algorithm to optimize the method. The genetic algorithm was proposed by Professor John Holland in 1960, and it provides a solution for optimization and searching. Its principle is to imitate the survival of the fittest in natural populations [44]. Empirical analyses have found that a BP neural network optimized by the genetic algorithm has a higher evaluation accuracy and stronger generalization ability than the traditional BP neural network, thus being more suitable for evaluation and prediction research [45,46]. However, in specific application scenarios, although using a genetic algorithm to optimize the BP neural network can overcome the local optimum defect and the R² in the training process is more stable [47], ordinary BP neural networks optimized by other algorithms suffer from a certain degree of overfitting [48]; thus, the erroneous data in the sample cannot be eliminated, resulting in a good learning effect but poor prediction ability. Considering the particularity of the epidemic prevention scheme of each construction project, there are subjective factors in the cost prediction, and the physical relationship between the factors is weak. In this study, a model based on an optimized neural network was constructed according to the related literature [49]. The model first optimizes the collected sample values to exclude invalid data that are greatly affected by subjective factors and then imports the remaining data into the neural network.
The specific research process of this model is shown in Figure 1: (i) Data collection, screening, analysis, and normalization processing: appropriate parameters are selected to quantitatively characterize the above influencing factors. Several groups of effective data for modeling and analysis are obtained by screening and judging, and normalization processing is carried out (a sketch of a typical normalization step is given below). (ii) Invalid data elimination: considering that the overfitting phenomenon easily occurs in the construction process of a traditional neural network, which leads to poor prediction accuracy of the output model, this modeling process optimizes the normalized sample values from the previous step and eliminates invalid data that are greatly affected by subjective factors to improve the prediction accuracy. The optimization steps are as follows: (A) divide the sample values collected in the previous step into a training set and a prediction set and build a BP neural network prediction model; (B) calculate the deviation between the predicted results and the actual results for all sample values, take the samples within the top 5% of the error ranking as invalid samples, and eliminate them, thus obtaining the samples used to construct the neural network. (iii) Take the samples extracted from Step (ii) as objects, divide them into a training set and a prediction set, and build a prediction model based on the neural network. See Section 5.3 for the specific parameters of the neural network and Section 4.2 for the prediction process of the optimized neural network.
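The paper does not specify the normalization used in step (i) beyond "normalized processing"; the sketch below shows the min-max scaling commonly applied before BP-network training, and the factor values in the example are made up for illustration, not taken from the study.

```python
import numpy as np

def min_max_normalize(X):
    """Scale each column of X to [0, 1], a common choice before BP-network training.

    X: (n_samples, n_factors) array of raw factor values.
    Returns the scaled array plus the per-column min/range needed to scale new
    samples the same way (and to invert the scaling on predictions).
    """
    X = np.asarray(X, dtype=float)
    col_min = X.min(axis=0)
    col_range = np.ptp(X, axis=0)
    col_range[col_range == 0.0] = 1.0   # guard against constant columns
    return (X - col_min) / col_range, col_min, col_range

# Illustrative use: 61 projects x 3 hypothetical factors (values are invented)
rng = np.random.default_rng(0)
X_raw = rng.uniform([1e6, 1, 500], [5e8, 4, 2e5], size=(61, 3))
X_scaled, mins, ranges = min_max_normalize(X_raw)
```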
Optimization Process.
Each method has its scope and limitations. The traditional BP neural network suffers from a slow convergence speed and easily produces locally optimal solutions. Therefore, optimization methods have been investigated. A genetic algorithm is used to improve the BP neural network, and a sample value optimization operation is added before the model is run.
Table 1: Influencing factors drawn from industry management policies (influence factor: source of policy).
Project type: "Guidelines for the regular prevention and control of COVID-19 in housing construction and municipal infrastructure construction sites" (MOHURD)
Total investment: "Guidelines for the regular prevention and control of COVID-19 in housing construction and municipal infrastructure construction sites" (MOHURD)
Construction scale: "Guidelines for the regular prevention and control of COVID-19 in housing construction and municipal infrastructure construction sites" (MOHURD)
Construction period: "Guidance on contract implementation and price adjustment of housing and municipal works under the impact of COVID-19"
Number of individuals in the management team: "Notice on pricing adjustment of construction projects during normal epidemic prevention and control in Hubei province"; "Guidance on further management of construction contracts during the prevention and control of COVID-19 in Sichuan province"
Number of workers: "Notice on pricing adjustment of construction projects during normal epidemic prevention and control in Hubei province"; "Guidance on further management of construction contracts during the prevention and control of COVID-19 in Sichuan province"

Specifically, it is assumed that there are n learning samples, each of which contains k factors. From the first to the n-th sample, each sample is removed in turn to form a new sample set; n sample sets are extracted in total, and each sample set contains n-1 samples. Then, the n sets of data are imported into the BP neural network for training, and the average error of the n sets of data is calculated. The above operation is repeated 0.2n times (rounded), and the errors are accumulated. The learning samples with the top 5% cumulative error are eliminated, and only the remaining 95% of valid data are retained, as sketched in the code below.
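A minimal Python sketch of this screening loop is given below. It uses scikit-learn's MLPRegressor as a stand-in for the paper's MATLAB BP network, and the exact error measure and repetition scheme are assumptions where the text is ambiguous.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor   # stand-in for the paper's MATLAB BP network

def screen_samples(X, y, repeats=None, drop_frac=0.05, seed=0):
    """Hold each sample out in turn, train a small BP-type network on the rest,
    and accumulate the held-out prediction error over several repetitions.
    Samples whose cumulative error falls in the top `drop_frac` are removed."""
    n = len(y)
    repeats = repeats or max(1, round(0.2 * n))
    cum_err = np.zeros(n)
    rng = np.random.default_rng(seed)
    for _ in range(repeats):
        for i in range(n):
            mask = np.arange(n) != i
            net = MLPRegressor(hidden_layer_sizes=(22,), max_iter=2000,
                               random_state=int(rng.integers(1_000_000)))
            net.fit(X[mask], y[mask])
            cum_err[i] += abs(net.predict(X[i:i + 1])[0] - y[i])
    n_drop = max(1, int(np.ceil(drop_frac * n)))
    keep = cum_err.argsort()[: n - n_drop]       # smallest cumulative error first
    return np.sort(keep)                         # indices of the retained (valid) samples
```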
The key steps of the genetic algorithm used here are selection, crossover, and mutation, and the specific operations are given in Algorithm 1.
Chromosome Determination.
The genetic algorithm is adopted to optimize the BP neural network. First, the chromosome length is determined, and the initial population is constructed by randomly generating chromosomes.
The chromosome includes the two kinds of parameters, weights and thresholds, and the calculation of its length is shown as follows, where R represents the number of factors, S1 represents the amount of input data, and S2 represents the amount of output data.
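The length formula itself is not reproduced above. In the conventional GA-BP encoding for a single-hidden-layer network with R input nodes, S1 hidden nodes and S2 output nodes, a chromosome holds every connection weight and every threshold; this mapping of R, S1 and S2 is an assumption, since the paper's own equation and exact definitions are missing from the text.

```python
def chromosome_length(R, S1, S2):
    """Conventional GA-BP encoding with one hidden layer:
    input->hidden weights (R*S1) + hidden thresholds (S1)
    + hidden->output weights (S1*S2) + output thresholds (S2)."""
    return R * S1 + S1 + S1 * S2 + S2

# e.g. 9 input factors, 22 hidden neurons, 1 output
print(chromosome_length(9, 22, 1))   # 243
```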
Selection.
The error calculated from the weights and thresholds is the most important index for measuring the quality of the BP neural network, and the genetic algorithm is used in the initial error back-propagation process. The algorithm selects suitable individuals for the next generation by calculating the fitness of different individuals, and the fitness function is shown as follows, where max(f) represents the maximum value of the fitness function in the population and min(f) represents the minimum value of the fitness function. The spread of the fitness values calculated by formula (2) is much larger than that of a fitness constructed from the reciprocal of the error; therefore, in later iterations, weakly dominant individuals are more easily retained. Selection is based on roulette-wheel sampling; that is, the smaller the error, the more likely an individual is to be retained.
Crossover. Similar to biogenetics, the crossover of the genetic algorithm is achieved by selecting two individual codes from the population. The algorithm needs to set a crossover probability; when a random value is less than the crossover probability, the crossover operation is performed. The specific method consists of selecting a position in the paternal chromosome and determining the corresponding position in the maternal chromosome; then a position is selected in the maternal chromosome and its corresponding position is determined in the paternal chromosome. The two are exchanged until all the exchanges are completed. A crossover probability of 0.7 was selected in this study.
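A sketch of roulette-wheel selection and the gene-exchange crossover described above is given below. The fitness scaling in formula (2) is not reproduced in the text, so a simple error-based fitness is assumed, and the crossover is interpreted as swapping one randomly chosen gene position between the two parents.

```python
import numpy as np

def roulette_select(population, errors, rng):
    """Pick one individual; a smaller network error gives a larger selection probability."""
    errors = np.asarray(errors, dtype=float)
    fitness = errors.max() - errors + errors.min()   # assumed scaling; formula (2) is not shown
    p = fitness / fitness.sum()
    return population[rng.choice(len(population), p=p)].copy()

def crossover(parent_a, parent_b, rng, p_cross=0.7):
    """With probability p_cross (0.7 in the paper), exchange the gene at one
    randomly chosen position between the two parent chromosomes."""
    child_a, child_b = parent_a.copy(), parent_b.copy()
    if rng.random() < p_cross:
        pos = rng.integers(len(parent_a))
        child_a[pos], child_b[pos] = parent_b[pos], parent_a[pos]
    return child_a, child_b
```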
Mutation.
The mutation operation is performed to explore the solution domain. If there is no mutation, the population develops a certain inertia, converges prematurely, and stops evolving toward better solutions. The mutation operation consists of the following steps: this study set a mutation probability, executed the mutation operation when a random number was lower than the mutation probability, randomly generated a two-digit number, found the corresponding positions in the chromosome, and then exchanged the genes at those positions. The mutation probability was selected as 0.1 in this study.
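The swap-style mutation described above can be sketched as follows, with the mutation probability of 0.1 as stated; interpreting the "two-digit number" as two random gene positions is an assumption.

```python
import numpy as np

def mutate(chromosome, rng, p_mut=0.1):
    """With probability p_mut, pick two positions in the chromosome and swap their genes."""
    child = chromosome.copy()
    if rng.random() < p_mut and len(child) >= 2:
        i, j = rng.choice(len(child), size=2, replace=False)
        child[i], child[j] = child[j], child[i]
    return child

rng = np.random.default_rng(0)
print(mutate(np.arange(10.0), rng))
```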
Backhaul Neural Networks.
When the genetic algorithm is finished, its result cannot be used directly because of the limitation of its coding accuracy. The thresholds and weights are returned to the BP neural network, which continues training until the training goal is achieved.
Sensitivity Analysis.
According to the forward transmission and reverse adjustment properties of the BP neural network, combined with its training algorithm, the weight coefficients connected to the output neurons can be obtained, and then the most significant factor that affects epidemic prevention costs can be determined. Since the weight coefficients do not directly reflect the size of an influencing factor, the weights behind the output results must be analyzed to obtain the relationship between the input and output vectors. Based on the neural network weights, formulas (3)-(5) are used to obtain the absolute influence coefficient of the different factors, that is, their sensitivity: (i) the correlation significance coefficient, (ii) the correlation coefficient, and (iii) the absolute influence coefficient, where i is the input vector, j is the output vector, and k is the hidden-layer neuron; the weight coefficients between the input and hidden layers and between the hidden and output layers enter these expressions.

ALGORITHM 1: Algorithm of the proposed model.
Input: collected data D(a, b)
Output: trained models
h = number of groups of different choices; u = number of outliers deleted; m = sequence number of the first deleted outlier
Step 1: Data normalization processing
Step 2: Training of models with the normalized data
  for g = 1 to h
    for every D(g, m) do
      delete the m-th data point
      import the data D(g, :) into the neural network
      output the fitting results
      m = m + 1
    while (all values are calculated)
  end
Step 3: Calculate the average error avg of the h groups of data
Step 4: Pick out the samples in the top 5% of the error ranking and remove them
Step 5: Import the remaining data into the neural network
Step 6: Execute the genetic algorithm
  genetic algorithm coding
  do
    genetic algorithm crossover
    genetic algorithm variation (mutation)
    genetic algorithm selection
  while (the maximum number of iterations is reached)
  Output: weights and thresholds
Step 7: Train the BP neural network
Output: final result
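Formulas (3)-(5) are not reproduced in the text. The sketch below implements the commonly used Garson-type weight decomposition for the absolute influence coefficient, which is believed to match the intent of the paper but should be read as an assumption rather than the authors' exact formulas.

```python
import numpy as np

def absolute_influence(W_ih, W_ho):
    """Garson-style relative importance of each input factor from network weights.

    W_ih: (n_inputs, n_hidden) input->hidden weight matrix
    W_ho: (n_hidden, n_outputs) hidden->output weight matrix
    Returns one coefficient per input factor (columns sum to 1 per output)."""
    contrib = np.abs(W_ih)[:, :, None] * np.abs(W_ho)[None, :, :]   # (in, hidden, out)
    contrib /= contrib.sum(axis=0, keepdims=True)                   # input share per hidden unit
    importance = contrib.sum(axis=1)                                 # sum over hidden units
    return importance / importance.sum(axis=0, keepdims=True)

# Illustrative use with random weights for a 9-input, 22-hidden, 1-output network
rng = np.random.default_rng(1)
print(absolute_influence(rng.normal(size=(9, 22)), rng.normal(size=(22, 1))).ravel())
```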
Selection and Record of Input Variables.
The 12 influencing factors listed in Table 2 cannot all be used as input variables of the neural network because some of the data are difficult to obtain. According to the analysis of the research team, three factors, namely the project construction content, the overall domestic epidemic situation, and the normalized epidemic prevention and control efforts, are not suitable for use as input variables. In particular, in the early decision-making stage of a project, the overall domestic epidemic situation and the normalized prevention and control effort cannot be determined and should be abandoned. The remaining indices are kept unchanged, and the units and recording methods of the relevant influencing factors are shown in Table 3.
Descriptive Statistics.
This study received data for more than 100 projects. After repeated screening and judgment, 61 items of valid data were obtained. The 61 samples were examined, and it was found that their distribution was uniform, which met the requirements of the subsequent prediction model. The project types include buildings, municipal engineering, landscape greening engineering, and decoration engineering distributed in urban and rural areas. After standardizing the data, the proportion of epidemic prevention costs was calculated by dividing the epidemic prevention costs by the total investment, and some descriptive statistics were obtained according to project type, as shown in Table 4. Among the 61 items of valid data that were collected, there were four types of projects, of which only two samples were landscape projects. It is of little significance to calculate a confidence interval for these, so only the confidence intervals of housing, municipal, and decoration projects were calculated. Through observation, it was found that the epidemic prevention costs of decoration projects account for the largest proportion of the total investment, while the epidemic prevention costs of municipal projects account for the smallest proportion.
5.3. Programming.
MATLAB 2019b software (MathWorks Inc., Massachusetts, United States of America) was used to implement the above-mentioned optimized neural network, and the data were imported into the neural network. The relevant parameters were set as follows: the first line is "net.trainParam.show = 9"; the second line is "net.trainParam.epochs = 1000"; the third line is "net.trainParam.goal = 1e-28"; the fourth line is "net.trainParam.lr = 0.1". Specifically, the first line represents the number of fitting checks, indicating that the iteration will stop if convergence is not achieved within nine such checks; the second line represents the maximum number of iterations for the model, which means that the model can be iterated up to 1000 times, although the output does not have to reach the maximum number of iterations; the third line represents the learning target set for the model. The accuracy set here is 10^-28, indicating that training stops when the number of iterations exceeds the set value or the error falls below 10^-28. The fourth line represents the learning rate, which is set to 0.1; the learning rate cannot be set too large, otherwise it will affect the stability of the model. The number of hidden layers and neurons in the BP neural network directly affects the training accuracy and speed. Usually, one hidden layer is used. There is no unified approach to setting the number of neurons in the hidden layer: too many neurons in the hidden layer can lead to overfitting, while too few can lead to underfitting [50]. A new empirical formula was used to determine the number of neurons in the hidden layer, which is shown as follows, where N i represents the number of neurons in the input layer, N o is the number of neurons in the output layer, and α is a constant term. The number of neurons in the hidden layer was determined according to formula (6), where N i = 9, N o = 1, and N s = 55. After repeated tests, when α was set to 0.25, that is, when the number of hidden-layer neurons was 22, the obtained training effect was the best (the form of formula (6) implied by these values is sketched below).
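Formula (6) itself is not reproduced above, but the stated values (N i = 9, N o = 1, N s = 55, α = 0.25, giving 22 hidden neurons) are consistent with the empirical rule N_h = N_s / (α·(N_i + N_o)); the small check below uses that inferred form, which should be treated as a reconstruction rather than the paper's printed equation.

```python
def hidden_neurons(N_s, N_i, N_o, alpha):
    """Empirical hidden-layer size inferred from the values quoted in the text:
    N_h = N_s / (alpha * (N_i + N_o))."""
    return N_s / (alpha * (N_i + N_o))

print(hidden_neurons(55, 9, 1, 0.25))   # 22.0, matching the 22 hidden neurons reported
```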
Operating Result.
The data were imported into the program; the number of samples selected for training was 55, and the number of samples used for testing was six. First, the sample values were optimized and the outliers were eliminated. After removing different data points one by one, six groups of different training results were obtained, as shown in Figure 2. The comparison between the test results and the real values indicates that the program works well and shows good fitting performance.
As shown in Figure 3, there were obvious errors in the 10th, 48th, and 54th groups of data, whose deviations are markedly high. This study analyzed these three data points in detail: the relative error of the 10th group of data is due to the small amount of total investment, while the relative errors of the 48th and 54th groups of data are due to the low pressure of epidemic prevention and control in remote areas away from dense crowds.
After elimination, the remaining data were imported into the neural network for learning. The iteration curve is shown in Figure 4. It can be seen from Figure 4 that the model was trained after 73 iterations, and the prediction accuracy of the model was as high as 1.0011 × 10^-12.
Although it did not reach the set learning goal, the error was within the acceptable range. In addition, the slope of the regression function was close to 45°, and the fitting degree was 1, confirming the prediction accuracy of the model. The six groups of data used for testing were imported into the neural network, and a diagram of the fitting of the model was obtained. The results were compared with those obtained by the BP neural network and the BP neural network optimized by GA. As shown in Figure 5, the optimized neural network performed significantly better than both the BP and ordinary optimized neural networks in the prediction of epidemic prevention costs, and its predictions were basically consistent with the actual values. To better reflect the calculation results, the prediction errors of the different methods were counted, and the results are presented in Table 5. Although the BP neural network optimized by GA is expected to perform better than the BP neural network, its average error here is larger than that of the BP neural network because of errors in the learning samples. In any case, there is a large gap between the predicted and actual values for both the BP neural network and the BP neural network optimized by GA, so neither is a good prediction tool. In comparison, the optimized neural network achieved good results in the prediction of epidemic prevention expenses, with an average error of only 13.14%, indicating that it can be used as a prediction tool in the decision-making stage (the error metric is sketched below).
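The comparison metric appears to be the average relative error over the six test projects; a minimal way to compute it from raw predictions is sketched below. Reading "average error" as the mean absolute relative error is an assumption, since the paper does not spell out the formula.

```python
import numpy as np

def mean_relative_error(y_true, y_pred):
    """Average absolute relative error, in percent, over a test set."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs(y_pred - y_true) / np.abs(y_true))
```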
There are many other machine learning approaches, including support vector machines (SVM) and random forests (RF) [51][52][53], each of which has its areas of applicability [54]. To further investigate the optimization performance of the proposed method, it was compared with other machine learning methods, including the RF and SVM methods. The same six datasets used for testing were imported, and the comparison results are shown in Figure 6. It can be seen from Table 6 that the results are the same as those obtained in the previous comparison, and the optimized neural network has advantages over SVM (whose average error is 181.09%) and RF (whose average error is 247.42%). Therefore, the proposed method is worthy of further exploration and application.
Sensitivity Value Calculation.
The sensitivity of the influencing factors was calculated according to formulas (3)-(5), and the program was repeated five times, from which an average value was obtained. Then, the relevant influencing factors were traced back, and the sensitivities were summed to rank the influencing factors of epidemic prevention costs. The results are shown in Table 7. The order of sensitivity of the influencing factors is total investment, project category, total construction area, the number of construction workers, construction period, the number of managers, the number of outbreaks during the project construction period, whether the project is a smart construction site, and the qualification level of the construction subject. These results can provide good guidance for subsequent engineering practice. Taking the project decision-making stage as an example, in the process of investment estimation, epidemic prevention costs can be preliminarily determined based on the total investment of the project and adjusted appropriately in combination with the project category. In the subsequent project implementation stage, when the design scheme is completely determined, the construction organization design is continuously improved, and the labor involved in the construction can be determined, the investment estimation made in the decision-making period can be further corrected based on the number of construction workers involved. It should be noted that the number of COVID-19 outbreaks during the construction period of the project ranks only seventh in the sensitivity of the influencing factors. It is believed that domestic construction sites are currently managed in a closed manner and there has been no serious site aggregation epidemic (apart from the site aggregation outbreak of the Qingyun Lanwan Project, Zhonglou District, Changzhou City, on March 14, 2022, no site aggregation epidemics have been reported). Therefore, the epidemic prevention and control of domestic construction projects is relatively good, and the impact of this factor is not significant.
Results.
The total investment, project type, and total construction area were important factors that affect normalized epidemic prevention costs. After the training and calculation of the optimized artificial neural network method, the factors with high sensitivity were identified as total investment, project type, total construction area, the number of construction workers, and construction period. In engineering practice, the epidemic prevention costs of different types of engineering projects can be preliminarily determined according to the confidence interval. The construction administrative department may issue different pricing standards. There are differences in the sensitivity of the factors affecting the costs of normalized COVID-19 epidemic prevention. The total investment of the project is an index with a high degree of quantification and strong project attributes. This index, or the construction investment index, can be used as the calculation base, with the project category as the main adjustment factor and other factors as reference factors, to calculate the cost of epidemic prevention. Taking a citizen center project as an example, 0.14%-0.28% of the total investment of the project can be taken as the value range of epidemic prevention costs, which can be adjusted appropriately considering the impact of public building types and the construction scale.
6.2. Discussion. The epidemic prevention and control fee should be identified as part of the total price of measure fees. After the Ministry of Housing and Urban-Rural Development issued a document clarifying that epidemic prevention costs arising from the prevention and control of the COVID-19 epidemic can be included in project costs, provinces and municipalities responded positively and issued corresponding policy documents. However, analysis of the documents revealed that the provisions of different regions differ. Under the current situation of normalized prevention and control, it is suggested that epidemic prevention and control fees should be further identified as a measure fee and added to the total price of measure fees, and relevant documents should be issued regarding the collection and calculation of the total price of measure fees to facilitate reference, application, and implementation by front-line engineering managers. Meanwhile, the proportion of normalized COVID-19 epidemic prevention expenses should be calculated according to the project type. Under normalized epidemic prevention and control, although local governments have introduced policies and regulations requiring epidemic prevention costs to be included within engineering costs, they have not specified the relevant rates. Through the descriptive statistics applied to the 61 items of valid data, the average and confidence interval of the proportion of epidemic prevention costs were calculated according to project type, which is significant for engineering practice.
Conclusions
In this paper, an optimized BP neural network prediction model is proposed and applied to predict normalized COVID-19 epidemic prevention costs, and the average prediction error of the optimized neural network was significantly reduced. The general neural network does not eliminate abnormal data, which leads to a good learning effect but increases error rates. Considering real-life engineering application, a new neural network prediction method optimized with MATLAB was constructed. The collected samples were optimized to eliminate the abnormal data, and then the remaining data were imported into the neural network. On the six groups of test data, the average error was only 13.14%. More data analysis methods should be used to assist the project management decision-making process. Construction project management, especially construction site management, accumulates a large amount of first-hand data, which can guide engineering practice once the links and patterns within the data are uncovered. In this study, the optimized neural network prediction model was used to explore the prediction of normalized epidemic prevention costs. The application and comparison of various machine learning methods, such as the optimized artificial neural network with support vector machine and random forest methods, can be further explored to establish a theoretical model that is more suitable for engineering practice and to assist scientific decision-making in engineering management. In addition, each method has its applicability, advantages, and disadvantages. When using a specific method, attention should be paid to the match between the data requirements and the method requirements. As far as this method is concerned, construction project managers also need to consider the computational complexity when using it. In future work, we will explore how to simplify the calculation to better serve project management.
Data Availability
The case data of this study were obtained from the actual investigation of the research group. The data that support the findings of this study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare no conflicts of interest.
Authors' Contributions
Huadong Yan was responsible for conceptualization, formal analysis, investigation, and writing of the original draft; Jingchun Feng contributed to project administration; Xu Chen provided software. All authors have read and agreed to the published version of the manuscript.
"Engineering",
"Computer Science"
] |
Nonlinear thermal radiation in flow induced by a slendering surface accounting thermophoresis and Brownian diffusion
Abstract. Our attention in this research is on scrutinizing the nonlinear convection characteristics in a flow induced by a slendering surface. The flow expression is developed for an electrically conducting Williamson nanomaterial. Nonlinear forms of the stretching and free-stream velocities are imposed. Consideration of nonlinear thermal radiation, non-uniform heat generation/absorption, Joule heating and convective heating describes the phenomenon of heat transfer. The zero-mass-flux condition for concentration is also considered. Compatible transformations produce strongly nonlinear differential systems. The problems are computed numerically utilizing the bvp4c procedure. The heat transfer rate and drag force are also explained for various physical variables. Our analysis reveals that the heat transfer rate augments via larger radiation parameter and Biot number. Moreover, larger Brownian motion and thermophoresis parameters have opposite characteristics on the concentration field. For verification of the present findings, the results of the presented analysis have been compared with available works in particular situations and reasonable agreement is noted.
Introduction
The flow investigation in the region of boundary layer via a stretchable sheet is a fascinating research field because of its significance in several utilizations in engineering systems and industries, like wire drawing, glass fiber production, hot rolling, metal spinning, contraction process of metallic plates and aerodynamic extrusion of plastic sheets. In view of such applications, Sakiadis [1] initially modeled the problem of boundary layer approximations by stretchable surfaces. Afterwards, several attempts regarding stretchable flow have been reported (see [2][3][4][5][6]). Besides this, when the temperature difference between the surface and ambient fluid is significantly large, the nonlinear density temperature (NDT) variations influence the flow and heat transfer distributions. Therefore in the buoyancy force expression, nonlinear density temperature variations cannot be neglected. Information related to such phenomenon can be seen in refs. [7][8][9][10][11][12].
Undoubtedly, energy generation is one of the most important issues in industrial requirements. Thus scientists have focused on improving the heat transport rate of systems such as chemical plants, power stations, air conditioning and the petrochemical industry. In this regard, several techniques were proposed but are not adequate because of the low thermal conductivity of conventional fluids. Consequently thermal scientists established energy materials known as nanomaterials comprising nanoscale particles [13]. A suspension of such tiny-scale particles in a base fluid is termed a nanofluid. A nanofluid is an improved type of fluid comprising nanometer-sized particles (diameter < 100 nm) or fibers suspended in an ordinary fluid. Such a feature of high thermal conductivity has been noticed by Masuda et al. [14]. Choi et al. [15] disclosed that the thermal conductivity of the fluid can be improved up to twice that of the base fluid by the addition of a small quantity (< 1% by volume) of nanoparticles to common heat transport liquids. Nanoliquids have a wide range of applications in microelectronics, microfluidics, solid-state lighting, transportation, medical applications, the biomedical industry, detergency, power generation, etc. Besides this, the interaction of a magnetic field with nanoliquids has numerous potential applications and may be utilized to deal with problems like the chilling of nuclear reactors by liquid sodium and induction flow meters, which rely on the potential difference of the fluid in the direction normal to the motion and to the magnetic field. A number of articles on magneto nanofluids are available in the literature and some are given in refs. [16][17][18][19][20][21][22][23][24][25].
The consideration of nonlinear thermal radiative fluid flow by a stretchable surface is an area of potential interest for researchers due to its demands in several engineering and physical processes. Applicable areas of such processes are solar-power technology, propulsion devices, nuclear plants, combustion chambers, aircraft and chemical processes at high operating temperatures. Especially the thermal radiation aspect plays an important role in regulating the heat transport process in the polymer processing industry. Keeping this in mind, the effect of nonlinear thermal radiation on the Sakiadis flow was first reported by Pantokratoras and Fang [26]. Cortell [27] addressed the importance of the thermal radiation effect in the stretchable flow of a viscous liquid. Khan et al. [28] developed series solutions to analyze the nonlinear thermal radiation effect on the Burgers nanofluid flow with convective heating and heat generation/absorption. Hayat et al. [29] investigated the simultaneous characteristics of nonlinear convection and magnetohydrodynamics (MHD) in the Walter-B nanofluid flow induced by a nonlinear stretchable surface in the presence of nonlinear thermal radiation and convective heating. Nonlinear convection and nonlinear thermal radiation impacts on the Maxwell nanofluid by a convectively heated stretched sheet are reported by Mahanthesh et al. [30].
The intention here is to investigate the features of convective heating and zero-mass-flux conditions in the nonlinear convective flow of a magnetic Williamson liquid. The flow is generated by a slendering surface. The nanoparticle phenomena arise due to consideration of thermophoresis and Brownian motion. Nonlinear thermal radiation, heat generation/absorption and Joule heating are incorporated in the energy transport expression. The governing mathematical expressions are calculated through a numerical algorithm [31][32][33][34]. The significance of the arising non-dimensional variables is discussed through graphs and tables.
Mathematical formulation
Let us scrutinize the two-dimensional (2D) nonlinear convective flow of a magneto Williamson nanomaterial towards a slendering surface located at y = A 1 (x + b)^((1-n)/2). The considered nanoliquid model includes the salient characteristics of thermophoresis and Brownian motion. The nanofluid is electrically conducting through a non-uniform magnetic field B(x) = B 0 (x + b)^((n-1)/2) applied perpendicularly to the flow direction (see fig. 1). The supposition of a small magnetic Reynolds number leads to negligible electric field effects. Besides this, nonlinear thermal radiation, heat generation/absorption, Joule heating and convective heating are also considered in the heat transfer process. In view of these assumptions, the governing expressions for the present situation are given in [29,35], with the corresponding boundary conditions [29], in which (u, v) are the velocity components along the (x, y) axes, respectively, ν = μ/ρ f is the kinematic viscosity with μ the dynamic viscosity and ρ f the base fluid density, and α* is the thermal diffusivity. Introducing the transformations (11) [29], eq. (1) is trivially satisfied and the other equations reduce to the following forms. Here the Weissenberg number (W e), Hartman number (M), thermal buoyancy parameter (λ), Grashof number in terms of temperature (Gr x), local Reynolds number (Re x), nonlinear thermal convection parameter (β t), nonlinear solutal convection parameter (β c), ratio of concentration to thermal buoyancy forces (N), Grashof number in terms of concentration (Gr* x), temperature ratio parameter (θ f), radiation parameter (R), Prandtl number (Pr), heat generation (δ > 0), heat absorption (δ < 0), thermophoretic parameter (N t), Eckert number (Ec), Brownian motion parameter (N b), Biot number (γ), Schmidt number (Sc) and wall thickness parameter (α) are defined accordingly. The surface drag force (C f) and local Nusselt number (Nu) are expressed in terms of the surface shear stress (τ w) and the surface heat flux (q w).
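The defining relations for the drag force and Nusselt number are not reproduced in the text above; for this class of slendering-surface boundary-layer problems the conventional definitions take the form sketched below (with u_w the stretching velocity, k the thermal conductivity, T_f the convective-heating temperature and T_∞ the ambient temperature). They should be read as the standard convention rather than as the paper's exact equations.

```latex
C_f \;=\; \frac{\tau_w}{\tfrac{1}{2}\,\rho_f\,u_w^{2}},
\qquad
Nu \;=\; \frac{(x+b)\,q_w}{k\,\left(T_f - T_\infty\right)} .
```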
Computational procedure
This subsection reports the numerical solution of the developed nonlinear systems (12)-(14), subject to the boundary conditions (15), through the bvp4c technique.
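The transformed systems (12)-(14) are not reproduced here, so the sketch below only illustrates how a bvp4c-style collocation solver is set up, using SciPy's solve_bvp on the classical Blasius boundary-layer equation f''' + 0.5 f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1 as a stand-in problem; the paper's own coupled momentum, energy and concentration equations would be handled in the same way with a larger first-order system.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Blasius equation written as a first-order system: y = [f, f', f'']
def rhs(eta, y):
    return np.vstack([y[1], y[2], -0.5 * y[0] * y[2]])

def bc(ya, yb):
    # f(0) = 0, f'(0) = 0, f'(eta_max) = 1 (far-field condition truncated at eta_max)
    return np.array([ya[0], ya[1], yb[1] - 1.0])

eta = np.linspace(0.0, 10.0, 200)
y0 = np.zeros((3, eta.size))
y0[1] = eta / eta[-1]            # crude initial guess satisfying the end conditions
sol = solve_bvp(rhs, bc, eta, y0, tol=1e-6)
print(sol.status, sol.y[2, 0])   # wall shear f''(0) is about 0.332 for Blasius flow
```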
Testing the numerical scheme
The validation of the employed technique against the available published results [36] in a limiting case is elaborated in this subsection through table 1. It is noticed that our results are in reasonable agreement with [36].
Analysis
The salient features of distinct physical variables like Weissenberg number (W e), Hartman number (M), thermal buoyancy parameter (λ), nonlinear thermal convection parameter (β t), nonlinear solutal convection parameter (β c), ratio of concentration to thermal buoyancy forces (N), temperature ratio parameter (θ f), radiation parameter (R), Prandtl number (Pr), heat generation parameter (δ > 0), heat absorption parameter (δ < 0), thermophoretic parameter (N t), Eckert number (Ec), Brownian motion parameter (N b), Biot number (γ), Schmidt number (Sc) and wall thickness parameter (α) on the non-dimensional temperature (θ), nanoparticle concentration (φ), surface drag force (C f Re 1/2 x) and Nusselt number (Nu x Re −1/2 x) are illustrated and interpreted in figs. 2-13 and tables 2 and 3. Figure 2 displays the nature of the temperature field with N b. Here the strong Brownian diffusion leads to an upsurge in the temperature and the associated thickness of the thermal boundary layer. In the thermophoresis mechanism, small particles are dragged away from the warm surface towards the cooler one. Thus a large number of nanoparticles will move away from the warm surface, which increases the liquid temperature. The radiation impact on the temperature (θ) is exhibited in fig. 3. Clearly the ascending values of the thermal radiation factor (R) enhance the temperature field. In fact, radiation is the diffusion of heat energy from one section to another. Thermal radiation yields peripheral heat energy to augment the temperature field. Figure 4 shows the effect of the Prandtl number (Pr) on the temperature (θ). Since Pr is inversely proportional to the thermal diffusivity, escalating values of Pr yield a weaker thermal diffusivity. Larger Prandtl liquids produce weaker thermal diffusivity and lower Prandtl liquids have stronger thermal diffusivity. Such weaker thermal diffusivity yields a decrease in temperature and its associated boundary layer thickness. The variation of temperature (θ) for different values of θ f is plotted in fig. 5. It is found that the temperature (θ) and thickness of the thermal boundary layer increase through larger θ f. This happens due to the fact that when we increment θ f, then T f upsurges, due to which more heat is transferred to the fluid and so the temperature profile increases. Figure 6 exhibits the effect of γ on the temperature (θ). It is observed that both the temperature and the associated thickness of the thermal layer are higher via larger γ. Physically the heat transfer coefficient is directly related to the Biot number, which is enhanced for larger γ. This enhancement in the heat transfer coefficient leads to higher temperature. Figures 7 and 8 show the influence of heat source or heat generation (δ > 0) and heat sink or heat absorption (δ < 0) on temperature (θ). It is observed that the temperature increases throughout the boundary layer region as (δ > 0) increases; however, a reverse scenario is noticed for heat sink or heat absorption (δ < 0). This is because the heat generation or heat source parameter (δ > 0) provides more heat into the fluid, which leads to an intensification of the temperature and the associated thermal boundary layer. The variation of Ec on the temperature (θ) is plotted in fig. 9. Here the temperature (θ) and the associated thickness of the thermal layer are increasing functions of Ec. Physically, an increase in Ec means that heat energy is stored in the fluid due to the frictional or drag forces.
As a result, the fluid temperature (θ) increases. Figure 10 is drawn to provide good knowledge of the effect of the Schmidt number (Sc) on the nanoparticle concentration (φ). As the Schmidt number is inversely proportional to the mass diffusivity, a larger Sc produces a decay in the nanoparticle concentration (φ) and its associated boundary layer thickness. The behavior of Pr versus the nanoparticle concentration (φ) is addressed through fig. 11. Clearly, a larger Pr leads to a decay in the nanoparticle concentration (φ) and the corresponding layer thickness. Figure 12 shows the variation of N t on the nanoparticle concentration (φ). Here rising values of N t yield an increment in the nanoparticle concentration (φ). Physically, larger N t creates an additional force which makes nanoparticles pass from the hottest to the coolest region. Therefore the nanoparticle concentration (φ) and its associated boundary layer are higher for larger N t. The variation in the nanoparticle concentration (φ) as a result of changes in N b is shown in fig. 13. It is found that increasing values of N b yield a decay in the nanoparticle concentration (φ).
Conclusions
Here we explored the interaction of convective heating in a hydromagnetic Williamson nanoliquid flow induced by a slendering surface. Nonlinear thermal radiation, heat generation/absorption and Joule heating are also considered in the heat transport expression. The effect of the emerging variables on dimensionless quantities is illustrated in detail.
The following key points are drawn from this work: -A larger Pr corresponds to a decay in temperature and thermal boundary layer thicknesses, whereas a reverse situation is noticed via larger γ. This reverse behavior is due to weaker thermal diffusivity, in the case of the Prandtl number, and to a larger heat transfer coefficient, in the case of the Biot number. -Both temperature distribution and associated thermal layer are enhanced through larger Eckert number (Ec), heat generation parameter (δ), radiation parameter (R) and temperature ratio parameter (θ f ); however temperature distribution and related thermal layer are lower for heat absorption parameter (δ). -The increasing thermophoretic parameter increases the profiles of temperature and concentration.
-Consideration of higher Schmidt number (Sc) and Brownian motion factor (N b ) yields lower concentration.
-The values of surface drag force are higher due to the increasing values of the Hartman number (M ); however the drag force is reduced for larger Weissenberg number (W e), thermal buoyancy parameter (λ), nonlinear thermal convection parameter (β t ) and wall thickness parameter (α). -Larger values of thermophoretic and Brownian motion parameters lead to an enhancement in the temperature profile due to higher thermal conductivity fluid. -The local Nusselt number enhances for higher values of Nt, δ and Pr.
Open Access This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
"Engineering",
"Physics"
] |
Compressed gas domestic aerosol valve design using high viscous product
Most current universal consumer aerosol products dispensing high-viscosity products such as cooking oil, antiperspirants and hair removal cream primarily use LPG (Liquefied Petroleum Gas) propellant, which is environmentally unfriendly. The advantages of the new innovative technology described in this paper are: i. No butane or other liquefied hydrocarbon gas is used as a propellant; it is replaced with compressed air, nitrogen or another safe gas propellant. ii. Customer-acceptable spray quality and consistency during the can lifetime. iii. Conventional cans and filling technology. The only feasible energy source to replace VOCs (Volatile Organic Compounds) and greenhouse gases, which must be avoided, is an inert gas (i.e. compressed air); atomisation is improved by generating gas bubbles and turbulence inside the atomiser insert and the actuator. This research concentrates on using "bubbly flow" in the valve stem, with injection of compressed gas into the passing flow, thus also generating turbulence. The new valve designed in this investigation using inert gases has advantages over a conventional valve with butane propellant when using a high-viscosity product (> 400 cP) because, when the valving arrangement is fully open, there are negligible energy losses as fluid passes through the valve from the interior of the container to the actuator insert. The use of this valving arrangement thus permits all pressure drops to be controlled, resulting in improved control of atomising efficiency and flow rate, whereas in conventional valves a significant pressure drop occurs through the valve, which has a complex effect on the corresponding spray.
INTRODUCTION
There are a number of technical challenges in replacing the conventional propellant (such as butane) in consumer aerosol valves with safe gases such as air and nitrogen. These challenges have limited their application in the market, although they have environmental advantages: i. Insufficient atomisation power, leading to the spray having a large droplet size and an inferior spray pattern. This becomes worse as a significant drop-off in spray 'power' occurs while the can is depleted, because the reduced volume of liquid in the can to be sprayed causes a corresponding decrease in pressure (a simple estimate of this effect is sketched below). ii. Consumers notice a further reduction in spray performance as well as not having full recovery of the product.
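The pressure drop-off in (i) can be quantified with a simple isothermal ideal-gas estimate: as liquid is dispensed, the gas headspace expands and the pressure falls in proportion. The sketch below is illustrative only; the fill fraction and initial pressure are assumed values, not data from this study.

```python
def can_pressure(p0_bar, headspace_frac, dispensed_frac):
    """Isothermal compressed-gas can: p * V_gas = constant.

    p0_bar          initial absolute pressure, bar
    headspace_frac  initial gas headspace as a fraction of can volume
    dispensed_frac  fraction of the liquid fill that has been sprayed
    """
    liquid_frac = 1.0 - headspace_frac
    v_gas = headspace_frac + liquid_frac * dispensed_frac
    return p0_bar * headspace_frac / v_gas

# Example: 9 bar fill with 30% headspace falls to roughly a third of its
# initial pressure by the end of can life
for d in (0.0, 0.5, 1.0):
    print(d, round(can_pressure(9.0, 0.30, d), 2))
```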
The valve to be designed in this investigation should ideally overcome or reduce both of these problems, and this is done by exploiting a phenomenon known as effervescence or "bubbly flow". Bubbly flow comes about when a small proportion of the compressed gas within the can is injected directly into the passing flow of product within the valve assembly. Effervescence is the process of actively introducing gas bubbles into a liquid flow, immediately upstream of the exit orifice, thereby forming a two-phase flow. Such flows are of interest due to their potential for using a small flow of atomising gas to produce a very fine spray [1 and 2]. Researchers and engineers have studied their use for applications including household aerosols [3 and 4]. The technique has not been applied in commercial aerosols because, even at the low value of Gas/Liquid mass Ratio (GLR) used (around 1%), the can pressure drops too quickly if the compressed gas in the can is used to atomise. Also, dispensing the gas and liquid simultaneously while producing the required flow is itself complex. In addition, effervescent atomisation drop-size predictions were recently made by researchers for high-viscosity materials such as gelatinized starch suspensions [5 to 7]. Moreover, Asumin [7] designed atomiser inserts using inert gases for domestic aerosols, which will be discussed in detail in the next section.
The word "domestic" and "consumer" has been used throughout this paper interchangeably as normal practice which provides a same connotation.The inventive steps of the corresponding valve designs were initially filed with a number of the interlocking patents [8 to 12].The overall aims of this study are to design consumer aerosol valves using inert gas propellants (i.e.compressed gas, nitrogen, etc) generating "bubbly flow" inside the flow passage upstream of the atomiser insert.Thus by providing the correct geometry of orifices and mixing chamber, the flow becomes highly energised and turbulent.Specifically the prime objectives of this investigation are as follows: • To produce sprays that look, feel, spray and perform like current consumer aerosols • Replace butane or other Liquefied Gas Propellants (LPG) with safe inert gas propellants (i.e.air, nitrogen etc) • Step-change in performance over current compressed gas technology • Cover all aerosol formats including bag-on-valve aerosol • No cost or manufacturing penalties and also utilised standard components or standard component sizes • Constant discharge flow rate and drop size through the life of the can • Easy filling and no requirement for VPT (Vapour Phase Tap) Conventional aerosol valves use a hole in the housing which is called VPT (Vapour Phase Tap) to allow the propellant gas into the liquid flow upstream of the valve.However, making a bubbly flow through a valve system is not ideal when VPT is used since a considerable pressure drop transpires through the valve.
The novel consumer aerosol valve designed and demonstrated in this study [13], using inert gases such as compressed air, carbon dioxide (CO2) or nitrogen (N2) as propellants, has been applied to a wide variety of continuous aerosol valve applications using high-viscosity products (e.g. antiperspirant, olive oil, gels, hair removal cream, etc.).
1.1 PREVIOUS WORKS
Some published works related to domestic aerosols using compressed gas are currently available, and this section highlights their findings. These works include studies of atomiser insert designs [6, 7, 14 and 15] and a previous study of a consumer aerosol valving arrangement using compressed gas [16 and 17].
In relation to a new atomiser insert design for domestic aerosol valves working with inert gases, Asumin [6 and 7] divided the work into two different phases, namely a "Liquid Phase" and a "Two-Fluid Phase". Figure 1 shows the geometry of the atomiser insert; the characteristics of the bubbly flow at the downstream end of the flow channel combine to give a number of turbulent bubble-laden jets impacting on the sharp edges (6). When the jets develop, the fluid (liquid and gas) travels along the orifice channel (4) and flow separation forms from the wall of the first part of orifice (4). The length of orifice channel (4) is such that the flow re-attaches to the wall in a downstream region thereof. The separation and re-attachment is a highly fluctuating phenomenon which is very beneficial to the atomisation into droplets of the jet emerging from the exit of orifice channel (4). The result from the device is a fine liquid spray. Furthermore, the fluctuations at the exit of the expansion chamber passageway (3) produce a distinctive hissing sound which is considered "attractive" to users of aerosols, since such a sound is expected from current liquefied gas propellant aerosols. Yuka [14] also worked on the design of an atomiser insert using compressed gases. His design comprised an aerosol can filled with an aerosol composition to be discharged, and a switching mechanism provided with a discharge member attached to the aerosol can, which can be switched between a discharge mode that discharges the aerosol composition in a misty state and a discharge mode that discharges high-viscosity compositions (e.g. olive oil, yellow beeswax, liquid paraffin, etc.) in droplet form. As can be seen in Figure 2, a switching mechanism (1) switches the connection between the route of a push button (2) and the routes (3, 4) of a mist generating nozzle (5) and a droplet nozzle (6), to discharge the aerosol composition in mist form or droplet form. The mist generating nozzle is inserted into a stem (7) of a valve (8) in the aerosol can (9). A buffer (10) reduces the flow viscosity of the aerosol composition through the droplet nozzle.
As shown in Figure 2, when the stem is in the open position there is at least a 90° bend in the upstream part of the flow path through the valve. This relatively large change in flow direction is an unavoidable source of pressure loss through the flow passage. It is in contrast with the design of the "low loss" valve presented in this paper, which has no convoluted passages in the direction of flow and thus causes essentially no pressure loss within the valving arrangement, resulting in better spray performance.
In 1992, Satoshi and Akira [15] also carried out a study on an atomiser insert design for spraying highly viscous products. These investigators reported that their design could continuously dispense even highly viscous solutions by dividing the inside of a container, with a moveable bulkhead, into a first chamber housing the liquid and a second chamber housing the pressure-applying agent, and by placing a cock on a dispensing tube communicating the first chamber with the outside, as shown schematically in Figure 3. As can be seen in Figure 3, when a lever (1) of a lid (2) is directed laterally, a through hole (3) of the lid is also directed laterally, so that an upper tube (4) and a lower tube (5) of a two-way cock (6) are shut by a side of the lid. When the highly viscous stock solution is to be dispensed, the lever is directed vertically, whereby the through hole of the lid coincides with the upper tube and the lower tube so that the first chamber (7), holding the highly viscous stock solution, communicates with the outside. A piston (8) then rises under the pressure of the pressure-applying agent in the second chamber (9), and the highly viscous solution is dispensed from the open end of the upper tube. In this case the upper tube, the through hole and the lower tube form a straight tube with a smooth inner face, so that the unnecessary viscous resistance is small. The pressure drop therefore occurs approximately uniformly over the lower tube, the through hole and the upper tube, and the solution is dispensed smoothly.
It is not clear from the work of these authors whether the upstream and downstream fluid flow path sections, which are moveable relative to each other by operation of the actuator mechanism, are intended to open the valving arrangement. The "low loss" valving arrangement in this investigation is, however, opened by a relative movement that brings the upstream and downstream flow path sections into register with each other. Again, this is key to obtaining consistent spray performance for the corresponding products when using inert gases.
There was also a study on the design of a continuous valve using inert gases by Dunne and Weston [16] in 1990, in which the valve was intended to improve the fineness of sprays generated by an inert gas. The main objective of this design was to bleed the gas into the liquid, achieving two-fluid atomisation and thus "bubbly flow", in order to increase liquid breakup and provide fine sprays. Figure 4 shows the "Flow Discharge Valve" granted in 1990 to Dunne et al. This valve, which regulates the flow of a liquid product from an aerosol canister (1) pressurised by a permanent gas propellant, comprises a tubular valve stem (2) formed with a liquid orifice (3) and a gas orifice (4) leading into a mixing chamber (5). Downstream of the chamber is at least one restrictor (6) through which the mixture is forced to pass to produce a choked or sonic flow, which results in the mixture expanding to form a foamy mixture.
As can be seen in Figure 4, the valving arrangement proposed by these authors comprises a number of restrictors in the liquid passage that transmit the liquid from the first passage to a mixing area, while separately conveying the pressurised gas from the second passage into the mixing area. The severe flow blockage introduced by the Dunne et al. [16] design would give unacceptably low flow rates unless can pressures very much higher than those in current use were employed, as well as being unsuitable for spraying highly viscous products.
Smith and Gallien [17] also reported on the design of an invertible spray valve and a container containing the same [17]. As shown in Figure 5, this design mainly relates to an improved spray valve, e.g. an aerosol valve, a tilt valve, a pump spray valve, or a trigger spray valve, for use in dispensing product from a container. Specifically, the valving arrangement includes a valve body whose lower portion consists of a ball chamber containing a gravity-responsive ball, which enables the valve to be used with either end up.
Briefly, the valve shown in Figure 5 comprises a housing (1), defining a longitudinal axis, with a circular side wall (2) extending down beyond a floor (3) of the body to define a socket. Into this socket is frictionally engaged an attachment (4) having a circular upper end and a nipple (5) at its lower end. The attachment is partitioned into a primary product passage, communicating with a product outlet extending through the floor of the valve body, and a ball chamber, the lower end of which is provided with a valve seat having a bypass opening communicating with the primary product passage. A ball chamber passage (6) is formed in the ball chamber above the valve seat (7), and a ball (8) is normally seated, under gravity, on the valve seat when the container is in the normal upright position. When the container is inverted, the ball drops away from the seat and permits passage of product through the ball chamber passage, through the bypass opening into the primary product passage and up into the valve body for discharge. At least one of the ball chamber longitudinal axis and a ball chamber plane is inclined relative to the longitudinal axis defined by the remainder of the valve, to alter the degree of permissible tilt of a container containing the valve before the ball becomes unseated.
As can be seen in Figure 5, the liquid passage includes at least two tortuous sections (i.e. through the dip tube and around the ball). These restricted and intricate passages cause a severe pressure drop coefficient in the valving system, are detrimental to the direction of the flow, and consequently degrade the required spray performance. In comparison, the new valve design used in the present investigation does not include such restrictors or complicated flow routes and can therefore provide better atomisation quality and flow rate "constancy".
NOVEL AEROSOL VALVE DESIGN FOR HIGHLY VISCOUS PRODUCTS
This section introduces a novel domestic aerosol valve, called the "Low Loss" valve, for continuously spraying highly viscous products (up to 400 cP) such as hair removal cream, antiperspirants and cooking oil. It uses the concept of completely removing all restrictions on the liquid flow between the dip tube and the actuator-insert assembly, so that there are no blockages caused by small orifices, except of course that of the atomiser insert itself.
Figure 6 shows the prototype design of the "Low Loss" valve, in which there is a light stainless steel spring behind the ball to push the ball back and seal the liquid inlet passage to the stem when the stem is in the closed position. For the "Low Loss" valves proposed here, when the valve is fully opened there is no change in the direction of the liquid passage and no change in cross-sectional area, either for the liquid passage or for the bubbly flow if gas is injected into the liquid upstream of the valve. For pipe systems the equivalent of a low loss valve is a ball valve in which the cylindrical hole in the ball has the same internal diameter as the pipe, so that when the valve is opened the fluid flow experiences no restrictions and the valve has an extremely small pressure drop coefficient. The design for evaluation and spray testing was chosen on the basis of:
• relative simplicity and thus low cost
• novelty and thus ease of IPR protection
• a perceived high chance of good reliability.
APPARATUS AND METHODS OF DATA PROCESSING
This section discusses the experimental apparatus and the test procedures in which it was used. The work used almost entirely unsteady sprays from conventional metal aerosol cans and also from a special, commercially available, pressurised glass reservoir.
VALVE MOUNTING
Two different types of aerosol container were used for mounting valves in this investigation. A commercially available glass aerosol research container (the "glass can") was used for most of the trials of this valve because it was more convenient to use and allowed the liquid flow rate to be measured by the weighing method. The "glass can" has a 100 ml volume capacity and was used to model a conventional can with pressures up to 10 bar. The valve assembly could easily be reused, with easy refilling and repressurising. In the later stages commercial aluminium and tinplate cans (see Figure 7), of various volumes and pressure ratings, were used for testing the valves in real conditions. In these cases it was found that once a valve was crimped in a cup and onto a can, the valve could not be dismantled for maintenance and cleaning.
Crimping method
Crimping was one of the major processes in this investigation, by which aerosol valve components are attached together and into mounting valve cups, and subsequently into the cans in some cases. The crimping machine uses collets to expand and push the metal of the valve cup under the curl of the can. The machine includes a filling chamber for propellant and collets for crimping and "swaging" the assembled valve into a can. The collets move into the mounting cup and spread to a specific diameter and depth.
FILLING METHOD
One of the most widely used methods of aerosol filling is the "gasser shaker", in which the can is evacuated, the assembled valve is crimped onto the can, and the propellant is then injected into the can while it is shaken [18 to 20]. In this investigation, when the assembled valve was used in an aluminium or tinplate can, this method was used to fill the can with an inert gas. Figure 7 shows the filling method used in this investigation. The sample can is evacuated and contains no liquid. The "brass can" is filled with the required liquid and pressurised. When the valve is opened, the liquid in the brass can is pushed into the trial can until the required fill ratio is reached, and the valve is then closed. Subsequently, the can is pressurised with an inert gas, as shown in Figure 7, and the pressure is checked with the pressure gauge.
EXPERIMENTAL ERRORS
Droplet size
Laser diffraction and its family of light scattering instruments are accepted as benchmark particle sizing devices, and an accuracy of ±1.0 µm for D(v,50) is usually reasonably assumed, provided that the spray meets certain conditions, which include:
• Obscuration of the laser beam between approximately 5 and 60%: this was the case for the current measurements
• Beam steering effects of vapour being either negligible or obviated by the "kill data" routine that removes their effects.
Liquid flow rate
Apart from when using the "brass can" reservoir, the liquid flow rate during spraying was measured by using a stopwatch to spray for periods of, usually, 10 s or 20 s, and weighing the can and its contents before and after this period. The error contributions are:
• The time duration is measured to within approximately ±0.5 s. In addition, there are unknown transient effects because spraying start-up and shut-down, when pressing and releasing the actuator to activate the valve, cannot be truly instantaneous.
• The weight is measured to within ±0.1 g, a typical sprayed mass being 5-10 g in 10 s.
• The measured liquid flow rate is estimated to be accurate to within ±10% at worst.
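As a rough cross-check of the quoted ±10% figure, the sketch below combines the stated weighing and timing tolerances into a worst-case relative error for the gravimetric flow rate measurement. The sprayed mass and duration are merely illustrative values within the ranges quoted above, not measured data.

```python
# Worst-case relative error of the gravimetric liquid flow rate measurement.
# Illustrative values only: a 5 g spray over 10 s with the tolerances quoted above.
sprayed_mass_g = 5.0   # typical sprayed mass (quoted as 5-10 g in 10 s)
duration_s = 10.0      # nominal spray duration
mass_tol_g = 0.1       # weighing tolerance quoted above
time_tol_s = 0.5       # timing tolerance quoted above

flow_rate_g_per_s = sprayed_mass_g / duration_s
# Worst case: the relative errors of mass and time simply add.
rel_error = mass_tol_g / sprayed_mass_g + time_tol_s / duration_s

print(f"flow rate = {flow_rate_g_per_s:.2f} g/s, worst-case error = {rel_error:.0%}")
# About 7% for the smallest sprayed mass; the transient start/stop effects mentioned
# above plausibly account for the remainder of the quoted +/-10% bound.
```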
Other error sources
The above errors should usually be random and manifest themselves as scatter in the data. When measurements were taken there were other potential sources of error that are more systematic for a given set of data. For example, if the spray is positioned so that it does not project centrally across the beam of the laser instrument, there will be systematic errors as the can is evacuated while remaining in the same position. During the experiments, the developmental nature of some of the valves led to slight jamming of the stem and, as mentioned in the appropriate sections, this can affect the spray and the measured flow rate.
PRESSURE LOSS COEFFICIENT MEASUREMENT
Referring to Figure 8, the valve to be tested is mounted vertically with the outlet C at the top. The inlet B (at the bottom) is connected to flexible tubing of 3-5 mm internal diameter, using adaptor fittings if required. The length of tube linking the valve with the pressure measurement position A should not exceed 0.5 m. It is essential that the measured pressure drop is representative of the valve itself and is not influenced by additional loss-creating components that may form part of an aerosol delivery device, or by the supply conduit to the valve. If such components, which do not form part of the valve, cannot be removed, their contribution to the pressure drop is taken into account by the procedure described below.
The valve is supplied with water at its inlet, via a flow meter, from a steady supply source at 15-25 °C; this water can be clean mains water but is preferably distilled water. The flow meter should be capable of measuring the water volume flow rate to an accuracy of 0.02 ml/s or better, and should cover at least the range from 0.2 ml/s to 2 ml/s. At point A there is a junction at which a pressure measurement instrument is connected. This is preferably an electronic transducer designed for use with water, with an accuracy of 1.0 mbar (100 Pa) or better and a range from zero up to at least 5 bar (500 kPa). The outlet for the water at point C should be at the same height as point A. In order to compare different valves, a common liquid volume flow rate Q should be used at the valve; a flow rate Q = 1.0 ml/s is used here, this being representative of that found in the stem of many consumer aerosol devices. In order to calculate a characteristic velocity V for a valve, the internal diameters of the inlet B and outlet C should be measured. If these are not equal then the smaller value should be used to calculate the representative cross-sectional area A = πD²/4, where A has units of m² and D has units of m. The flow rate is Q, and the characteristic velocity follows from V = Q/A.
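To make the unit handling explicit, the sketch below computes the representative cross-sectional area and characteristic velocity from a diameter in mm and a flow rate in ml/s. The values used are simply the reference conditions mentioned above (Q = 1.0 ml/s through a 1 mm bore), not measurements.

```python
import math

def characteristic_velocity(Q_ml_per_s: float, D_mm: float) -> float:
    """Characteristic velocity V in m/s from flow rate Q (ml/s) and diameter D (mm)."""
    Q = Q_ml_per_s * 1e-6          # ml/s -> m^3/s
    D = D_mm * 1e-3                # mm  -> m
    A = math.pi * D ** 2 / 4.0     # representative cross-sectional area, m^2
    return Q / A

# Reference condition: Q = 1.0 ml/s through a 1 mm diameter passage
print(characteristic_velocity(1.0, 1.0))   # ~1.27 m/s, i.e. V ≈ 1.273 * Q / D^2
```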
Applying conversions from metres to mm and from m³/s to ml/s, it is conveniently found that V = 4Q/(πD²) ≈ 1.273 Q/D², with Q in ml/s, D in mm and V in m/s. To carry out a test the valve is fully opened and the test flow rate is set up. When steady conditions have been established the pressure P1 is recorded. It is important to ensure that there are no bubbles or airlocks in the flow path or in the valve. The test should be repeated at least 5 times and an average value of P1 should be used. In order to remove the effects of pressure drops caused by other features of the flow between points A and C that are not part of the valve, a second test should be carried out. As shown schematically in Figure 9, the valve is removed but the supply conduit to the valve is retained. For a conventional aerosol valve, as shown in Figure 8, the valve housing is kept in place and connected to the water supply; however, the valve stem, spring, sealing gasket and metal aerosol cap (into which the valve housing is normally crimped) are removed. A second test is carried out at the same flow rate as the first test and a pressure P2 is recorded. The representative pressure drop for the valve is then found from ΔP = P1 − P2.
The loss coefficient C of the valve is found by dividing this pressure drop by the dynamic head of the flow at the valve, the dynamic head being ½ρV², so that C = ΔP × 10⁵/(½ρV²), where ΔP has units of bar, ρ has units of kg/m³, and V has units of m/s. As examples of actual testing using this procedure by the inventors:
1. A new low loss cylindrical valve, with a cross section similar to that shown in Figure 8 and with conduit and exit each of 1 mm diameter, was tested and yielded a loss coefficient C = 3.40.
2. A conventional valve was tested of the type used with liquefied propellant hairspray aerosols. This had a single outlet for the stem with a diameter of 0.5 mm; the characteristic diameter was the internal diameter of the stem, D = 1.8 mm. This test yielded a loss coefficient C = 1750.
3. A conventional valve, similar to that in the previous example, was modified by drilling six holes of 0.5 mm diameter as stem inlets and by widening the channels through which the liquid must pass inside the valve. Tests with this modified conventional valve yielded a loss coefficient C = 35.1.
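A minimal sketch of the loss coefficient evaluation is given below. The net pressure drop used is a hypothetical placeholder rather than one of the measurements reported above, and the water density is taken as a nominal 1000 kg/m³.

```python
import math

RHO_WATER = 1000.0  # kg/m^3, nominal density of water at 15-25 °C

def loss_coefficient(dP_bar: float, Q_ml_per_s: float, D_mm: float) -> float:
    """Loss coefficient C = dP / (0.5 * rho * V^2), with dP supplied in bar (1 bar = 1e5 Pa)."""
    V = (Q_ml_per_s * 1e-6) / (math.pi * (D_mm * 1e-3) ** 2 / 4.0)  # characteristic velocity, m/s
    dynamic_head_Pa = 0.5 * RHO_WATER * V ** 2
    return dP_bar * 1e5 / dynamic_head_Pa

# Hypothetical example: a net drop of 28 mbar at Q = 1.0 ml/s through a 1 mm bore
print(loss_coefficient(0.028, 1.0, 1.0))   # ~3.5, of the same order as the C = 3.40 quoted above
```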
The results obtained using this testing procedure specify that an aerosol valve may be termed a Low Loss valve if it achieves a loss coefficient less than or equal to 10, and preferably less than or equal to 5.
RESULTS AND DISCUSSIONS
Ideally the new consumer aerosol valve design should be capable of performing in a similar way to current conventional (liquefied gas propellant) aerosol valves, and certainly of giving better spraying performance for a wide range of highly viscous products. The spray performance is best described by characteristics covering drop size, liquid flow rate, constancy of drop size and flow rate during the can lifetime, and the capability of fully evacuating the can of liquid. The required performance should be achievable using existing commercially available cans, ideally 12 bar cans filled to 9 to 10 bar.
This section presents the spray performance of the "Low Loss" valve using olive oil and describes the results of this test. It also compares the olive oil spray performance of the "low loss" valve with that of a conventional domestic valve provided by a major company whose name cannot be given because of the strict confidentiality imposed. Furthermore, this section provides some qualitative spray performance results with several different highly viscous products to show the capability of this new valve design. The sprays were characterised using the laser instrument. The downstream distance between the atomiser insert and the laser beam was kept at 15 cm; this was selected as the furthest downstream distance that could be used without the risk of spray impingement on the lens. All images were also captured using a digital still camera, which provided qualitative information and data on cone angle.
At this stage it is apparent that a consistent definition is required in order to quantify the "constancy" of liquid flow rate and droplet size and so give meaningful comparisons between various aerosols (here, "aerosol" means the combination of can, product, valve and insert). Simply taking the difference between the first measured value of liquid flow rate (full can) and the last value (empty can), and dividing by the first value, although seemingly the obvious definition of consistency, is not ideal: the initial value can occasionally suffer from effects such as the initial priming of the valve and, more importantly, the final value often includes "spluttering" effects as the can empties. It is therefore proposed to use the 90% and 10% points in the can-emptying results; these are arbitrary choices made by examining many sets of results, and the flow rate constancy (C_Q) and drop size constancy (C_D) are defined from the values measured at these two points. Figure 10 shows the results using a 50% fill ratio of olive oil in a 250 ml metal can pressurised with carbon dioxide to 10 bar and using a 0.75 mm Aqua insert. Such a large insert was found to be necessary to permit full opening of the spray cone. As shown, there is a steady decrease in pressure, with about 17% of the can gas injected into the valve mixing chamber. The discharge flow rate also decreased steadily, with a constancy of C_Q = 33%. The particle size results show good drop size constancy, with C_D = 19%. The drop size is high, with D(v,50) around 450 µm, but this is acceptable for the coating process for which the oil spray is used.
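Since the defining equations for C_Q and C_D are not reproduced above, the sketch below uses one plausible reading of the definition, namely the relative change between the values measured at the 90% and 10% can-content points. The sample inputs are invented purely to illustrate the calculation; the exact formula should be checked against the original equations.

```python
def constancy(value_at_90pct: float, value_at_10pct: float) -> float:
    """Assumed definition: relative change between the 90% and 10% can-content points."""
    return (value_at_90pct - value_at_10pct) / value_at_90pct

# Invented illustrative inputs: discharge flow rate (g/s) and D(v,50) (micrometres)
C_Q = constancy(0.75, 0.50)
C_D = constancy(450.0, 365.0)
print(f"C_Q = {C_Q:.0%}, C_D = {C_D:.0%}")   # ~33% and ~19%, the same order as Figure 10
```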
Figure 11 shows the spray performance of the "low loss" valve filled with a 50% fill ratio of olive oil in a 250 ml metal can pressurised with nitrogen to 10 bar and using a 0.75 mm Aqua insert. Moreover, Figure 13 compares the spray performance of the "low loss" valve and the "control valve" provided by a company, with specific interest in the constancy of discharge flow rate irrespective of the particle size distribution. The results were obtained using these valves filled with 50% olive oil, pressurised with nitrogen to an initial 10 bar, and using a conventional actuator supplied by the company. As shown, the "low loss" valve has a very smooth, constant discharge flow rate with C_Q = 24%, whereas with the "control valve" the discharge flow rate decreased more rapidly, with about C_Q = 35%. However, Figure 14 shows that the spray angle of the "low loss" valve decreased by about 33% from the beginning of the can to the end of the pack life, compared with about 50% for the "control valve".
As discussed before, the "low loss" valve is suitable for spraying highly viscous products such as hair removal cream and olive oil, and the particle size for these products will be higher than for water- or ethanol-based products. The companies which cooperated with the author were therefore interested to see images of this valve during the spray performance tests. A major problem with conventional consumer aerosol valves using highly viscous products is that the liquid hole in the stem can become blocked due to crystallisation of the formulation on the actuator or insert, so that the valve completely malfunctions. Figures 14 to 16 demonstrate that the "low loss" valve functions similarly when using antiperspirant products (i.e. conventional "Sure Roll On" or "Soltan"). Figure 15: Spray image of "low loss" valve using hair removal cream, pressurised with compressed air up to 12 bar, using a 0.75 mm Aqua insert. Figure 16: Spray image of "low loss" valve using Sure Roll-On, pressurised with compressed air up to 12 bar, using a 0.75 mm Aqua insert.
CONCLUSION AND FUTURE WORK
1. Consumer aerosol valve design has not changed significantly for many decades, and a new domestic aerosol valve design will be required if inert gas propellants are to replace liquefied gas propellants. The challenge of this replacement is that inert gas propellants have relatively little atomising energy and lack sufficient power as the can empties. a. This makes obtaining fine sprays relatively difficult. b. In addition, flow rate and drop size may vary unacceptably during the can lifetime when current conventional valves are used.
2. The new aerosol valve design presented in this paper successfully modifies the flow by bleeding inert gas from the can into the stem to assist atomisation, creating a "bubbly flow" in a mixing chamber upstream of the actuator cap and insert. This concept is entirely different from using a VPT (Vapour Phase Tap), which has been used in conventional aerosol valves for many years. a. The conventional VPT arrangement passes a two-phase flow through small valve stem orifices and a conventional path, which causes pressure losses upstream of the insert and thus reduces flow rate and gives non-optimal atomisation. b. The new valve arrangements do not suffer from the above restrictions.
3. The requirement for as steady a flow rate and drop size as possible during the pack life of an aerosol has been quantified successfully using the new definitions of the "constancy" parameters for liquid flow rate, C_Q, and volume median drop size, C_D. Use of these parameters permits quantifying the performance of valve-insert combinations and comparing performance with conventional valves and products.
4. The reason for the achievement of such good constancy is not fully understood and requires a thorough fundamental study: a. It involves complex interactions as the bubbly mixing chamber flow passes through the insert, resulting in changes of the pressure differences set up between the mixing chamber, the internal can volume and the external atmosphere as a can is emptied.
"Low Loss" valves that use an unconventional method of shutting off and opening the flow such that there is essentially no pressure loss even for a bubbly flow passing through the valve.i.This valve is more bulky than conventional valves and has two additional components.ii.However this investigation has shown that the valves spray viscous liquids and suspensions such as olive oil and hair removal creams, which cannot be sprayed well or with good constancy by current compressed gas aerosols.
FUTURE WORKS
A fundamental study of the formation and properties of the "bubbly flow" systems, possibly including the use of "scale-up" experiments, could be part of future work. In addition, the application of CFD to the flow in the can-valve-insert system needs further investigation. Further work could also include understanding how the properties of the two-phase flow leaving an insert affect atomisation quality, and how the internal insert geometry affects the spray. Exploring the use of the valves in bag-in-can or bag-on-valve systems could also provide wide applicability of the new valve presented throughout this paper.
Figure 6: Prototype design of the new domestic aerosol valve called "Low Loss"
Figure 8: Schematic diagram of pressure loss coefficient measurement using a valve
Figure 9: Schematic diagram of pressure loss coefficient measurement without a valve
Figure 10: Spray performance of the "low loss" valve with 50% fill ratio of olive oil in a 250 ml metal can, pressurised with carbon dioxide to 10 bar, using a 0.75 mm Aqua insert
Figure 11: Spray performance of "low loss" valve with 50% fill ratio of olive oil, pressurised with N2 to 10 bar initially, using a 0.75 mm Aqua insert
Figure 12: Spray image of "low loss" valve using olive oil and a 0.75 mm Aqua insert
Figure 13: Comparison of discharge flow rate between the "low loss" valve and a "control valve" using olive oil, pressurised with N2 and using a conventional actuator
Figure 17: Spray image of "low loss" valve using Soltan, pressurised with compressed air up to 12 bar, using a 0.75 mm Aqua insert
Theoretical Analysis of Effective Thermal Conductivity for the Chinese HTR-PM Heat Transfer Test Facility
The Chinese high temperature gas-cooled reactor pebble bed module (HTR-PM) demonstration project has attracted increasing attention. In order to support the project, a large-scale heat transfer test facility has been constructed for pebble bed effective thermal conductivity measurement over the whole temperature range (0~1600 °C). Based on the different heat transfer mechanisms in the randomly packed pebble bed, three different types of effective thermal conductivity have been theoretically evaluated. A prediction of the total effective thermal conductivity of the pebble bed over the whole temperature range is provided for the optimization of the test facility and guidance of further experiments.
Introduction
Increasing attention has been paid to the high temperature helium-cooled pebble bed reactor (HTR) due to its high efficiency, high levels of passive safety, and potential usage for hydrogen production [1,2].The Chinese high temperature gas-cooled reactor pebble bed module (HTR-PM) demonstration project, oriented by the Institute of Nuclear and New Energy Technology of Tsinghua University (INET), has been installed at the Shidaowan plant in Shandong Province, China and is scheduled to go online in 2017 [3,4].
HTR-PM has a cylindrical pebble bed core with a diameter of 3 m and a height of 11 m, in which thousands of spherical fuel elements are randomly packed. In engineering practice, values of the effective thermal conductivity of the pebble bed core of HTR-PM at different temperatures are essential parameters required in the safety analysis and thermal calculation of the reactor [5]. The effective thermal conductivity is closely related to the safety characteristics of the HTR: when a loss-of-coolant accident happens, the residual heat must be removed from the core in time, which directly depends on the effective thermal conductivity of the pebble bed [6][7][8][9]. A relevant experiment named SANA-1 [1][2][3][4][5][6][7][8][9][10][11][12], designed by the Research Center Juelich, was historically carried out for validation of the afterheat removal capacity of HTR. The size of the pebble bed and the highest test temperature (below 1000 °C) in SANA-1 were limited, and the temperature range of SANA-1 was unable to cover the whole temperature range of the safety analysis of HTR-PM (0~1600 °C).
In order to support the HTR-PM project, Tsinghua University has designed and constructed a full-scale heat transfer test facility for pebble bed equivalent conductivity measurement (TF-PBEC) covering the whole temperature range (0~1600 °C) [13,14], as shown in Figure 1. The function of the TF-PBEC is to create a full-size pebble bed to simulate the complicated heat transfer conditions in the real reactor core of HTR-PM. The TF-PBEC is an integrated experimental system and has the structure of an internal-heating-type resistance furnace. About 70,000 machined graphite spheres with a diameter of 60 mm were randomly packed in an annular test zone bounded by inner and outer walls inside the TF-PBEC to simulate the fuel packing structure, as shown in Figure 1 [5]. The radius of the inner wall of the pebble bed is 500 mm, that of the outer wall is set to 2000 mm, and the height of the pebble bed is set to 1000 mm. The temperature distribution in the pebble bed under a helium atmosphere will be measured to determine the effective thermal conductivity values of the pebble bed at temperatures up to 1600 °C. In order to optimize the design of the facility, a theoretical prediction of the effective thermal conductivity of the pebble bed over the whole temperature range is needed. This paper carries out a detailed theoretical analysis of the effective thermal conductivity of the pebble bed in the HTR-PM heat transfer test facility, which is necessary for the design and optimization of the test facility and further experiments. In the SANA experiment, the effective thermal conductivity of a graphite pebble bed in the temperature range 0~1000 °C was measured. Due to the similarity of the two reactors (the Chinese HTR-PM and the German HTR-Module), the graphite pebbles used in the TF-PBEC are similar to those in the SANA experiment. Hence, the experimental data used for analysis in this paper are from SANA's data set.
Heat Transfer Mechanisms in Pebble Bed
As above, the effective thermal conductivity of the high temperature pebble bed reactor, which is usually used to simulate the heat transfer in the reactor core under normal operating or severe accident conditions, is derived by integrating all the relevant heat transfer mechanisms in the pebble bed into a single representative conduction process.
More specifically, the pebble bed in the upcoming experiment is a randomly packed bed filled with a stagnant gas.Four different heat transfer mechanisms occur, namely: (1) conduction through solid spheres; (2) conduction through the stagnant gas phase that fills voids in-between spheres; (3) conduction through contact areas between adjacent spheres; and (4) radiation between sphere surfaces, as shown in Figure 2.
Effective Thermal Conductivity Analysis
In the pebble bed, the heat flux is to be transported simultaneously along three different paths, namely: (1) solid conduction-surface radiation-solid conduction process; (2) solid conduction-gas conduction-solid conduction process and (3) solid conduction-contact area conduction-solid conduction process.Hence, the total effective conductivity is considered to consist of these three different types of effective conductivity, which must be evaluated separately.
Solid Conduction + Surface Radiation + Solid Conduction
Zehner and Schluender [11] proposed a cell model in 1970 which can be used to describe this type of effective thermal conductivity. A unit cell consists of two half spheres in point contact and the void between the spheres; the pebble bed in the cell model is formed by an ordered arrangement of such unit cells, and heat is transferred in the pebble bed by radiation between sphere surfaces and by conduction within the spheres. Breitbach and Barthels [6] noted that the cell model did not consider the radiation from gaps outside the cell and improved the formula; in this paper we adopt this developed formula, Equation (1). In it, λ_e^r is the first type of effective thermal conductivity, due to the solid conduction-surface radiation-solid conduction heat transfer process; a radiation exchange factor is defined within the expression; ε is the porosity of the pebble bed; ε_r is the pebble emissivity; and B is the deformation parameter related to the porosity, originally given by Zehner and Schluender as B = 1.25((1 − ε)/ε)^(10/9). However, Hsu et al. found that B = 1.364((1 − ε)/ε)^1.055 leads to a more accurate prediction, and this form is adopted in the following analysis. Λ = λ_s/(4σT³d) is the dimensionless solid conductivity, σ is the Stefan-Boltzmann constant, λ_s is the (temperature-dependent) heat conductivity of the pebble, and d is the diameter of the pebble.
The first type of effective thermal conductivity calculated by Equation (1) is shown in Figure 3, using parameters of the SANA experiment under helium conditions, which can be found in [12]. The pebbles used in the SANA experiment were also graphite spheres with the same diameter as in our experiment. As can be seen, this type of effective thermal conductivity grows quickly with rising temperature due to the radiation effect. Although the heat conduction ability of the pebble itself declines with rising temperature, the overall heat transfer ability of the pebble bed increases quickly owing to the sharp enhancement of the surface radiation of the pebbles.
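For orientation, the following sketch evaluates the quantities named in the description of Equation (1): the Hsu et al. deformation parameter B, the radiation scale 4σT³d and the dimensionless solid conductivity Λ. The porosity, emissivity and solid conductivity values are illustrative assumptions rather than the SANA parameters, and the closing expression uses the Breitbach-Barthels radiation formula in its commonly cited form, which should be checked against Equation (1) of the original paper.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiation_conductivity(T_K, d, eps, eps_r, lam_s):
    """First-type effective conductivity (solid conduction + surface radiation), W/(m K).

    B and Lambda follow the definitions quoted in the text; the overall expression is
    the commonly cited Breitbach-Barthels form and is an assumption, not Equation (1) verbatim.
    """
    B = 1.364 * ((1.0 - eps) / eps) ** 1.055        # Hsu et al. deformation parameter
    scale = 4.0 * SIGMA * T_K ** 3 * d              # radiation scale 4*sigma*T^3*d, W/(m K)
    Lam = lam_s / scale                             # dimensionless solid conductivity
    bracket = (1.0 - (1.0 - eps) ** 0.5) * eps \
        + (1.0 - eps) ** 0.5 * (B + 1.0) / B / (1.0 + 1.0 / ((2.0 / eps_r - 1.0) * Lam))
    return bracket * scale

# Illustrative assumptions: 60 mm graphite pebbles, porosity 0.39, emissivity 0.8, lam_s = 30 W/(m K)
print(radiation_conductivity(T_K=1273.0, d=0.060, eps=0.39, eps_r=0.8, lam_s=30.0))  # ~20 W/(m K)
```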
Solid Conduction + Gas Conduction + Solid Conduction
In this part, we consider the second type effective thermal conductivity caused by heat transfer through solid spheres and gas phase with stagnant flow.Point contact will be assumed and thermal radiation is neglected.
The packed pebble bed, consisting of mono-sized spheres in the presence of a stagnant fluid, can be considered as a porous medium filled with a still working medium. The second type of effective thermal conductivity can then be considered as the stagnant thermal conductivity of the porous medium. Zehner and Schlunder presented an empirical correlation for the stagnant thermal conductivity [15], whose accuracy has been confirmed by tests done by V. Prasad et al. [16]. In this correlation, Equation (2), λ_e^g is the second type of effective thermal conductivity, due to the solid conduction-gas conduction-solid conduction heat transfer process; λ_f is the heat conductivity of the stagnant gas, which is temperature dependent; and λ is the ratio of the thermal conductivity of the solid phase to that of the surrounding fluid.
The second type of effective thermal conductivity calculated by Equation ( 2) is shown in Figure 4, using parameters of the SANA experiment under helium conditions.As we can see, this type of effective thermal conductivity only has a slight increase with rising temperature.The heat conduction ability of the helium increases with rising temperature, while the heat conduction ability of the graphite spheres declines.Hence, the total heat transfer ability increases slowly.
Solid Conduction + Contact Area Conduction + Solid Conduction
Contact area is the contact region between two adjacent spheres, usually caused by external pressures acting on the spheres or by their own weight in a packed bed. It is closely related to the elasticity of the sphere material and usually increases in size with external load. It has been pointed out by many researchers that it is quite important to take contact area thermal conduction into account at high solid-to-fluid thermal conductivity ratios (λ ≥ 10³) [11]. Hence, in this part we consider the third type of effective thermal conductivity caused by heat transfer through a packed pebble bed with finite contact areas.
The contact area between two adjacent spheres can be calculated using a model presented by Kaviany based on Hertzian deformation [17]. The third type of effective thermal conductivity caused by finite contact area conduction can be described by Equation (3), which involves the radius of the contact area between two spheres; λ_e^c is the third type of effective thermal conductivity, due to the solid conduction-contact area conduction-solid conduction heat transfer process; µ_p is the Poisson ratio; E_s is the Young's modulus; f is the collinear force acting on the spheres; R is the radius of the graphite sphere; S is a constant related to the structure of the packed bed; and N_A and N_L are the numbers of spheres per unit area and per unit length in the packed bed, respectively. Kaviany [17] studied three different close-packed ordered arrangements (simple cubic, face-centered cubic, and body-centered cubic packing) and provided values of the structural parameters. Based on that study, a randomly packed bed can be treated according to its porosity.
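To give a feel for the magnitudes involved, the sketch below evaluates the Hertzian contact radius between two identical elastic spheres pressed together by a collinear force, using the standard two-sphere result a³ = 3 f R (1 − µ_p²)/(4 E_s). The load and material properties are illustrative assumptions for graphite pebbles, not values taken from the test facility.

```python
def hertz_contact_radius(f_N: float, R_m: float, E_Pa: float, poisson: float) -> float:
    """Contact radius (m) of two identical elastic spheres pressed together by a force f_N.

    Standard Hertz result for equal spheres: a^3 = 3 * f * R * (1 - nu^2) / (4 * E).
    """
    return (3.0 * f_N * R_m * (1.0 - poisson ** 2) / (4.0 * E_Pa)) ** (1.0 / 3.0)

# Illustrative assumptions for a 60 mm graphite pebble carrying the weight of pebbles above it
f = 10.0      # N, assumed collinear load
R = 0.030     # m, pebble radius
E = 10e9      # Pa, assumed Young's modulus of reactor graphite
nu = 0.2      # assumed Poisson ratio

a = hertz_contact_radius(f, R, E, nu)
print(f"contact radius ~ {a * 1e6:.0f} micrometres")  # a small fraction of the pebble radius
```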
The third type of effective thermal conductivity calculated by Equation (3) is shown in Figure 5, using parameters of the SANA experiment under a helium atmosphere. As can be seen, this type of effective thermal conductivity shows a slight decline as the temperature rises, due to the characteristics of the graphite material.
Because three heat transfer processes coexist in the packed bed, the total effective thermal conductivity of the pebble bed can be considered as the summation of the above three types of thermal conductivity, λ_e = λ_e^r + λ_e^g + λ_e^c. The calculated total effective thermal conductivity and the measured values of the SANA experiment under helium conditions are shown in Figure 6. As can be seen, the calculated effective conductivity and the measured data fit well. Among the three heat transfer mechanisms, heat transfer by radiation plays a dominating role as temperature rises, while heat transfer through gas conduction and contact area conduction contributes the major portion in the lower temperature zone. The calculated total effective thermal conductivity and the measured values of the SANA experiment under nitrogen conditions are shown in Figure 7. As can be seen, the total effective thermal conductivity under nitrogen conditions is somewhat lower than under helium conditions. However, the difference comes only from the second type of effective thermal conductivity, due to the different heat conduction properties of the two gases; the first and third types are the same under both conditions, as they are related only to the material and structure of the pebble bed. The TF-PBEC conducted by Tsinghua University aims to cover the whole temperature range of the safety analysis of HTR-PM (0~1600 °C). Figure 8 gives the predicted total effective thermal conductivity of the pebble bed under helium conditions over the whole temperature range, which is useful for the design of insulating layers and heating power of the facility. Relevant parameters are taken from the SANA experiment due to the similarity of the pebble beds. However, it should be noted that we use a new fitting formula to describe the temperature-dependent heat conductivity of the pebble materials, as listed below.
λ_s = 141.10858 × e^(−t/382.46023) + 44.0461 (5)
Figure 9 compares the old and new formulas for the heat conductivity of the pebble material over the whole temperature range. In the figure, the stars show the experimental data for the heat conductivity of the pebble material, the circles show the trend according to the formula recommended in the SANA report, and the inverted triangles show the trend according to the new fitting formula. As can be seen, the quartic polynomial recommended in the SANA report [12] gives a reasonable result below 1000 °C; however, it gives an illogical result in the high temperature region.
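Equation (5) is straightforward to evaluate; the sketch below does so at a few temperatures across the test range, assuming t is the temperature in °C and λ_s is in W/(m·K), which is consistent with typical reactor graphite data but should be confirmed against the units of the original report.

```python
import math

def pebble_conductivity(t_celsius: float) -> float:
    """New fitting formula, Equation (5): lambda_s = 141.10858*exp(-t/382.46023) + 44.0461."""
    return 141.10858 * math.exp(-t_celsius / 382.46023) + 44.0461

for t in (0, 400, 800, 1200, 1600):
    print(f"t = {t:4d} °C  ->  lambda_s ~ {pebble_conductivity(t):6.1f}")
# The fit decays monotonically towards ~44 at high temperature, avoiding the illogical
# high-temperature behaviour of the quartic polynomial recommended in the SANA report.
```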
Conclusions
Pebble beds, which are formed by the random packing of a large number of particles, have broad applications in systems involving heat transfer [18,19].Due to its complex microstructure, heat transfer through pebble bed contains several different mechanisms.
The effective thermal conductivity is used to represent the macroscopic heat transfer ability of the pebble bed.Three different types of effective thermal conductivity have been theoretically evaluated for the large-scale heat transfer test facility built for the HTR-PM.Results show that heat transfer by radiation plays a dominant role in the high temperature region, while heat transfer through gas conduction and contact area conduction occupies a major portion in the lower temperature region.The first and third type of effective thermal conductivities are only related to the pebble bed itself and the summation of the two can be considered as the effective thermal conductivity of the pebble bed under vacuum conditions.The prediction of the total effective thermal conductivity of the pebble bed over the whole temperature range is provided for the optimization of the test facility and guidance of further experiments.
Figure 1. Structure diagram and pebble bed of the test facility for pebble bed equivalent conductivity measurement (TF-PBEC).
Figure 2. Heat transfer mechanisms in a packed pebble bed.
Figure 3. Effective thermal conductivity due to the solid conduction-surface radiation-solid conduction heat transfer process.
Figure 4. Effective thermal conductivity due to the solid conduction-gas conduction-solid conduction heat transfer process.
Figure 5. Effective thermal conductivity due to the solid conduction-contact area conduction-solid conduction heat transfer process.
Figure 6. Total effective thermal conductivity and the experimental data of the pebble bed in the SANA experiment under helium conditions.
Figure 7. Total effective thermal conductivity and the experimental data of the pebble bed in the SANA experiment under nitrogen conditions.
Figure 8. Total effective thermal conductivity of the pebble bed under helium conditions over the whole temperature range.
Figure 9. Comparison of the old and new fitting formulas for the heat conductivity of the pebble material.
Assessing the usefulness of a visual programming IDE for large-scale automation software
Industrial control applications are usually designed by domain experts instead of software engineers. These experts frequently use visual programming languages based on standards such as IEC 61131-3 and IEC 61499. The standards apply model-based engineering concepts to abstract from hardware and low-level communication. Developing industrial control software is challenging due to the fact that control systems are usually unique and need to be maintained for many years. The arising challenges, together with the growing complexity of control software, require very usable model-based development environments for visual programming languages. However, so far only little empirical research exists on the practical usefulness of such environments, i.e., their usability and utility. In this paper, we discuss common control software maintenance tasks and tool capabilities based on existing research and show the realization of these capabilities in the 4diac IDE. We performed a walkthrough of the demonstrated capabilities using the cognitive dimensions of notations framework from the field of human–computer interaction. We then improved the tool and conducted a user study involving ten industrial automation engineers, who used the 4diac IDE in a realistic control software maintenance scenario. Based on lessons learnt from this study, we adapted the 4diac IDE to better handle large graphical models. We evaluated these changes in a reassessment study with automation engineers from seven industrial enterprises. We derive general implications with respect to large-scale applications for developers of IDEs that we deem applicable in the context of (visual) model-based engineering tools.
Introduction
Visual programming languages can improve the communication and collaboration between software developers. In comparison with textual languages, visual representations can stimulate more active discussions and improve the memorability of design details.
The event-triggered Function Block Diagram is defined in IEC 61499 [7]. Algorithms in a textual language can be integrated into these models. The languages defined by each standard are targeted at automation engineers, not software engineers. Both textual and visual languages are available: while the former resemble general-purpose low-level programming languages such as C, the latter are mostly block-based and focus on visualizing the data and event flow. All languages abstract the control logic from hardware and low-level communication. As requirements for control software are derived from electrical and mechanical diagrams, the DSLs are optimized to match the mental model of automation engineers. Several challenges arise when developing industrial control software: automated production systems are produced as one-of-a-kind systems and are tailored to the needs of the customer. Hence, reusing control software is challenging and requires extensive support for managing variability [8]. Furthermore, the life cycles of the physical equipment by far exceed those of the software: control software thus has to be evolved over decades [9]. The requirements for modern automated production systems lead to growing complexity of control software. The resulting challenges include an increasing number of interacting (cyber-)physical components and complex communication between components. Furthermore, unwanted physical effects become more relevant due to the high accuracy that is required [10].
As software modeling and engineering tools should support engineers, their usability is essential.Regarding the usability of tools targeted at industrial automation, it has to be considered that automation engineers tend to have a strong background in electrical and mechanical engineering, but not in software engineering.As the control application is typically finalized only at the factory or plant during commissioning of the machine, various user groups are involved in control software development.Software engineers create basic software modules.Mechanical or electrical engineers compose these modules into applications [9].4diac IDE [11] is a modeling tool for developing control software according to the standard IEC 61499.In the last years, the open source community of 4diac IDE spent significant effort in improving the usability of this IDE (see, e.g., [12]) to increase the acceptance of the tool and the modeling language among industrial experts.In several workshops with different industry partners, we identified usability issues and adapted the tool step by step.In the presented studies, we evaluate the impact of these efforts and identify further possibilities for enhancing the tool support.
IDEs for block-based software, like 4diac IDE, have been evaluated mostly with a focus on creating new software, but not on software maintenance.Only little empirical research exists regarding their usefulness in practical environments and for industrial users.Usefulness regards a tool's utility, i.e., to what degree its functionality allows users to do what is needed, and its usability, i.e., how well users can exploit the offered functionality [13,14].Assessing usefulness requires studying users and their behavior qualitatively [15].
Empirical studies can increase the acceptance of tools in industry, as they help engineers to select and adapt tool capabilities that are relevant for their application context [16]. Following this goal, this paper provides the following contributions: (i) we discuss common control software maintenance tasks and tool capabilities based on existing research. (ii) We show the realization of these capabilities in 4diac IDE and assess them in a walkthrough using the cognitive dimensions of notations (CD) framework [17]. (iii) Based on the findings of this assessment, we conducted a usefulness study involving ten industrial automation engineers from our industry partner, who used 4diac IDE in a realistic control software maintenance scenario. (iv) We further improved the tool based on what we learned in the study and then conducted an extended reassessment study involving automation engineers from seven different manufacturing companies. (v) Based on the results of both studies, we discuss lessons learned and derive general implications for developers of IDEs for visual languages.
Our findings demonstrate how the usefulness of model-based engineering concepts, i.e., as implemented in a block-based, graphical programming IDE which abstracts from hardware and communication infrastructures, can successfully be investigated using a multi-phase approach including a walkthrough and a user study. Also, we claim that our lessons learned could be applied in the context of other (visual) model-based (software) engineering tools as well. A further goal of this paper is to enhance the transfer of know-how between the research community in model-driven engineering and industrial automation.
This paper is an extension of a conference paper [18], which described the results of the initial usefulness study with our industry partner. In addition to describing our extended reassessment study, we also expanded the related work and the background significantly and now provide more details on the domain-specific modeling language IEC 61499. The remainder of this paper is structured as follows: We first describe related work, the domain and tool analysis, as well as the used framework, after which we present our research approach. We then describe 4diac IDE, the standard IEC 61499, and the cognitive dimensions-based walkthrough. We show the design and the results of the industry partner study, before discussing the extended reassessment user study (including our adjustments to the study setup and the adaptations of 4diac IDE that we implemented based on what we learned in the initial industry partner study). We conclude by discussing lessons learned and threats to validity.
Related work
We discuss related work investigating the usefulness of visual programming or modeling environments.A study covering programming editors that support Scratch and related languages has revealed usability flaws that are relevant for any visual language, but was conducted only with students of a single discipline [19].Some results from this study can be transferred to any block-based language, such as the difficulty to search within a program and to navigate through a graphical diagram.Several studies evaluated the programming languages that are relevant for industrial automation.In a multimodal usability study of the IEC 61499-IDE Eclipse 4diac, various editors and views were evaluated from the perspective of a broad user group.The study compared several approaches for evaluating the tool usability, including an expert review, a survey, a laboratory experiment, and a fully remote and asynchronous approach.The study suggests combining several methods to cover all relevant aspects.For instance, usability expert reviews cannot address domain-specific issues.Evaluating the usefulness of mature features therefore requires involving domain experts.The study focused on high-level development tasks and therefore did not cover the handling of large-scale applications [12].Obermeier et al. [20] have compared a Function Block Diagram to a modeling approach that utilizes simplified variants of UML diagrams.Their study focused on new applications and was performed with a large group of participants consisting of both students and industrial practitioners.The experiment showed a clear advantage for the modeling approach, which was designed to address weaknesses of general-purpose modeling languages.The results, furthermore, show that experienced users benefit more from the modeling notations than novice users.In [21], the visual language SFC was compared to the UML activity diagrams and statecharts with a focus on process technology.The analysis evaluated example programs in these languages regarding the possibilities for modifying, understanding, and modularizing them.It showed that activity diagrams are best suited for designing flexible and modular sequences.The study evaluated the languages based on the cognitive effectiveness, but does not include the experimental results with domain experts.We conclude that existing user studies do not evaluate the usability of handling large-scale automation software in a realistic maintenance scenario.
Domain and tool analysis
We distill challenges for automation software engineering and typical tasks from the literature (Sect.3.1) to create a realistic study setting.Based on identified maintenance scenarios, we analyze relevant language concepts (Sect.3.2) and discuss available modeling tools and their capabilities (Sect.3.3).
Challenges and typical tasks for maintaining large-scale automation software
General development tasks for control software include creating new block types, instantiating these types, and creating hierarchical structures.Additionally, engineers need to model the hardware configuration and assign software parts to their respective device [12].During the development of production systems, even in late stages such as commissioning, frequent adaptations of the software are needed to address changing requirements [22].In addition to creating new software, maintaining existing software is highly relevant, particularly in automation engineering where industrial production plants have life cycles of several decades.During this time, the software is typically updated more frequently than the hardware, roughly every six to twelve months [9], to allow adaptations of the production process when additional products are manufactured and to benefit from technological advances [22].Common tasks for control software maintenance have been discussed in the literature.For instance, typical programming tasks for machine and plant automation were identified by Obermeier et al. [23] to form a basis for usability studies in the domain.The authors performed a hierarchical task analysis to refine their task descriptions that cover requirements elicitation, identifying interfaces to the environment and implementing the actual system functionality.Implementing control software can be further detailed into the system initialization phase, the standard operation, and the error handling.The authors furthermore suggest an adequate task complexity to limit the number of programming errors, while still receiving sufficient feedback.Legat et al. [24] discuss tasks for evolving automation software.Relevant scenarios (S) include (S1) adding new components that operate in parallel to increase the capacity, (S2) introducing redundancy to improve reliability, (S3) adding new variants of supply material, or (S4) replacing mechanical submodules.Another typical workflow involves reusing legacy control software (S5).First, code fragments that are suitable for reuse are identified.Adaptions are typically needed to ensure that the code is sufficiently abstract and can be parameterized if variability has to be considered.Finally, the code fragment is stored in a library for reuse [25].
We consider the described challenges and typical tasks for designing the user study to ensure a realistic setting.Based on the identified scenarios for developing and maintaining control software, we describe the domain-specific modeling language IEC 61499 [7] for control software, as well as existing IDEs that support the relevant tasks based on this language in the next subsections.
The domain-specific modeling language IEC 61499
IEC 61499 proposes a block-based modeling language for the domain of distributed control software engineering.Typical application domains are automation tasks for production systems, buildings, and energy grids.
The modeling language includes a platform-independent application model with the control software, and a system configuration model that captures the hardware and its network.Figure 1 shows the relation of these two models.The mapping between them defines the execution container of each application part.This ensures that the application model is independent of the hardware, which is a key advantage of IEC 61499 over programming languages that are already established in the domain.
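To illustrate this relation, the following fragment sketches how an application, a device, and the mapping between them might appear in the XML exchange format defined by the standard. This is a minimal sketch: the element structure follows the standard's schema, but the system, application, device, resource, and instance names (CappingSystem, CappingApp, PLC_1, EMB_RES, MotorCtrl_1) are assumptions chosen for illustration, not parts of the study system.

  <System Name="CappingSystem">
    <Application Name="CappingApp">
      <SubAppNetwork>
        <!-- one FB instance of the (assumed) library type MotorCtrl -->
        <FB Name="MotorCtrl_1" Type="MotorCtrl" x="100" y="200"/>
      </SubAppNetwork>
    </Application>
    <Device Name="PLC_1" Type="FORTE_PC">
      <Resource Name="EMB_RES" Type="EMB_RES"/>
    </Device>
    <!-- the mapping assigns the application part to its execution container -->
    <Mapping From="CappingApp.MotorCtrl_1" To="PLC_1.EMB_RES"/>
  </System>

Changing or removing the mapping leaves the application model itself untouched, which is what keeps the application hardware-independent.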
So-called Function Block types (FB types) encapsulate basic functionality of the software.FB types and their instances are comparable to class definitions and their instantiated objects.Defining such component types is essential for scenarios that involve multiple instances of a single type (e.g., for S1, S2).The standard IEC 61499 defines a set of FB types, which constitute a library with the most important functionalities.Additionally, developers can extend this library with custom FB types to cover any domain, thus allowing to create variants of components (as in S3).The DSML supports concepts that are known from object orientation, including abstract interface definitions, types and instances, and encapsulation.
Control software is developed for repeated execution, which is realized in IEC 61499 via an event-based execution model. An event arriving at the interface triggers the execution of the FB instance. Each FB has a well-defined interface with input and output pins. Typical events are initialization events when starting the device, or indication events when detecting updated sensor values. If necessary, subsequent application parts are triggered by sending one or more output events. The internal state of an FB instance persists between executions. Figure 2 shows an example interface for an FB type controlling a motor, which is executed upon receiving a sensorUpdate event. Data values at the interface parameterize the component (for S5). (Fig. 2: Interface of a custom Function Block type MotorCtrl, which communicates with its environment via event pins (red), data pins (colored based on data type), and adapters (green). Adapters group event and data pins into a single connector and allow bidirectional communication.)
The internal functionality of custom FB types can be implemented either in a visual or in a textual notation.The modeling language is strongly typed, i.e., an event type or data type is assigned to each pin of an FB.An application (cf.Fig. 1) consists of FB instances and connections for the events and data.
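As a sketch of what such a strongly typed interface looks like in the standard's XML format for FB types (the format in which types are stored in the library): the pin names below mirror the MotorCtrl example from Fig. 2, but the concrete events, the REAL-typed speed input, and the adapter type MotorAdapter are assumptions made for this illustration.

  <FBType Name="MotorCtrl" Comment="Controls a single motor">
    <InterfaceList>
      <EventInputs>
        <Event Name="INIT" Comment="Initialization request"/>
        <Event Name="sensorUpdate" Comment="New sensor value available">
          <With Var="speed"/> <!-- data pin sampled together with this event -->
        </Event>
      </EventInputs>
      <EventOutputs>
        <Event Name="INITO"/>
        <Event Name="CNF" Comment="Execution finished"/>
      </EventOutputs>
      <InputVars>
        <VarDeclaration Name="speed" Type="REAL" Comment="Target speed"/>
      </InputVars>
      <Sockets>
        <!-- adapter bundling several event and data pins into one connector -->
        <AdapterDeclaration Name="motorIO" Type="MotorAdapter"/>
      </Sockets>
    </InterfaceList>
  </FBType>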
IEC 61499 has dedicated elements to structure models hierarchically: Subapplications (subapps) aggregate a number of FB instances that are wired into a network (cf.Fig. 3).Subapps form components that can implement variants (realizing S3) or are maintained as submodules (supporting S4).Like FB types, communication with the surrounding network is restricted to the interface pins.A subapp encapsulates a part of an FB network and only aims at structuring the software without affecting its execution.Any number of hierarchical levels is possible because subapps may themselves contain instances of both FBs and other subapps.Unlike FBs, subapps may be distributed across devices.Adapters group several interface pins, i.e., event and/or data pins.They establish bidirectional point-to-point connections between two FBs.
The IEC 61499 standard defines the abstract syntax in EBNF, and recommendations on the concrete syntax are presented as figures. The execution semantics for IEC 61499 models was improved in the second edition of the standard to reduce ambiguities and is defined in natural language. Interoperability between tools and devices from various vendors is a core goal of the standard, which therefore also includes an XML format for exchanging IEC 61499 models.
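A small excerpt of such an exchange file is sketched below for two connected FB instances. The block types ConveyorCtrl and MotorCtrl as well as the pin names are illustrative assumptions; only the element structure (FB instances plus separate event and data connections) is taken from the standard.

  <SubAppNetwork>
    <FB Name="Conveyor" Type="ConveyorCtrl" x="120" y="80"/>
    <FB Name="Motor" Type="MotorCtrl" x="480" y="80"/>
    <EventConnections>
      <!-- the conveyor's confirmation event triggers the motor FB -->
      <Connection Source="Conveyor.CNF" Destination="Motor.sensorUpdate"/>
    </EventConnections>
    <DataConnections>
      <Connection Source="Conveyor.speedOut" Destination="Motor.speed"/>
    </DataConnections>
  </SubAppNetwork>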
IDEs for control software
IEC 61499 defines a domain-specific modeling language (DSML) well-suited to support developers and maintainers of control software. Several IDEs support the standard:
- NxT Technology IDE from nxtControl GmbH [28] is a commercial tool environment for IEC 61499. A variety of editors allows editing the application, the device configuration, individual FB types, or the HMI (human-machine interface) in a single tool. The FB behavior can be implemented in the high-level programming language Structured Text. Various libraries with elementary building blocks are available. The tool is extensible with custom plug-ins [28]. The IDE comprises various views. A tree view shows the contents of external libraries and the currently edited IEC 61499 system. Each element can be opened in a multi-page editor, where edits are performed in a table view or a visual diagram.
- ISaGRAF Workbench from Rockwell Automation [29] includes support for IEC 61499 in their commercial PLC programming environment. IEC 61499 can be integrated with views for the HMI [27].
- FBDK from Holobloc Inc. [30] was the first development environment for IEC 61499. The Java-based tool consists of a graphical editor and a tree view explorer with the libraries and the available elements. Each element is opened in a separate editor tab. Additionally, a text viewer shows the created standardized XML format, where selected elements from the graphical editor are highlighted. FBDK 11.0 can be downloaded freely.
- 4diac IDE is an open-source project that is hosted by the Eclipse Foundation [11]. This IDE supports various operating systems, as it is based on the Eclipse platform. This platform also allows users to build their own extensions with Eclipse plug-ins. The provided libraries are mostly limited to standardized blocks.
- FBME is hosted on GitHub as an open-source project and is built on top of the JetBrains MPS language workbench [31]. Currently, an alpha version is available. It can be extended with Java plug-ins. The graphical editors are based on projectional editing [32].
Although interoperability and portability are core goals of the IEC 61499 standard, the data exchange possibilities between different IEC 61499 tools proved to be limited in the experiments [26].While all the IDEs support creating IEC 61499-models, the offered tool support for user activities during development and maintenance varies.For instance, some IDEs assist the developer by providing feedback on the implementation or offering dedicated features such as refactoring and auto-completion (e.g., for S4).Both 4diac IDE and NxT Technology IDE have dedicated refactoring features (cf. the ones described in [33]).4diac IDE also offers static code analysis [34] including code metrics [35].Syntax highlighting and code completion for textual algorithms (can be used for S1, S2) in Structured Text are available in both 4diac IDE and NxT Technology IDE.Only FBDK supports various languages for implementing algorithms, including graphical ones, but it does not offer sophisticated editing support.Some tools also extend the language: For instance, NxT Technology IDE supports comment areas and special automation components that include templates for HMIs.4diac IDE supports aggregating FB instances without affecting the library, which is recommended for large-scale applications to group submodules (supporting S4) [36].FBDK demonstrates features that could enhance future editions of the standard IEC 61499.For instance, it includes a prototype for a new concept of managing libraries (supporting S5).FBDK first demonstrated the feasibility of IEC 61499 concepts and has been developed since 1996.FBME is a new editor and only available as an early prototype.It demonstrates support for verification mechanisms in the context of IEC 61499, such as visualizing counter-examples [37].We use 4diac IDE as a tool environment for our study, as it is based on widely applied Eclipse technologies for creating DSML editors.Some of the identified usability flaws originate from the underlying Eclipse platform.They may thus also affect other modeling tools that are based on the same framework.As 4diac IDE is available as an open-source project, we can extend it with plug-ins and provide bug fixes.
The cognitive dimensions of notations framework
The cognitive dimensions of notations (CD) framework can be applied "to discover useful things about usability problems" [17] and is applied by researchers as well as product designers. "The CD framework is not an analytic method. Rather, it is a set of discussion tools for use by designers and people evaluating designs" [17]. Performing an evaluation with the CD framework consists of classifying the intended activities (of the users with the software or artifacts), analyzing the cognitive dimensions, and deciding whether the requirements for the activities are met.
The CD framework mainly considers the notations or interaction languages and how well they support the intended activities."The notation is what the user sees and edits: letters, musical notes, graphic elements in CAD, code in a development environment" [17].Therefore, a set of "dimensions" is considered, each describing an aspect of an information structure that is reasonably general.
Below, we outline the cognitive dimensions [17] that are used as an underlying framework for this study:
- Viscosity is the resistance to change. High viscosity means that reaching a single goal requires many user actions. Two kinds of viscosity can be distinguished: repetition viscosity means that many actions of the same type are required; knock-on viscosity means that conducting an action requires further actions to restore consistency. For instance, a user may need to use multiple dialogues to change a displayed value.
- Visibility is the ability to view components easily. Systems that "bury information in encapsulations" reduce visibility. For example, too deep program hierarchies can decrease visibility.
- Premature commitment is required when the order of operations is constrained. While such an order may be intended for some user actions, premature commitment can have a negative impact on usability. Choosing a tree path to find an item is an example of premature commitment because users have to select a specific top-level node first to navigate to the correct lower-level node or leaf.
- Hidden dependencies are present when important links between entities are not visible, such as a change of a value of one entity having unexpected effects on (values of) another entity.
- Role-expressiveness means that the user can readily infer the purpose of an entity based on its formatting, layout, icon, or looks. Furthermore, the user intuitively understands how to manipulate the entity.
- Error-proneness is high when the notation invites mistakes and the system provides little protection, such as a text field for specifying a number that also accepts letters. Possible protection includes a validity check, which informs the user that only numeric values are allowed.
- Abstraction describes the available kinds of abstraction mechanisms. Systems require abstractions, but systems with too many abstractions are potentially difficult to learn. For example, hyperlinks are a well-understood abstraction.
- Secondary notation is additional information outside the formal syntax. Examples are comments in a programming language and color usage. Secondary notation can help users, but can also have a negative impact, e.g., on visibility.
- Closeness of mapping describes whether the representation of an entity is closely related to the domain. Does the notation represent the result it shall describe? For example, in a high-level programming language, the closeness might be lower than in a domain-specific language. A trade-off with abstraction can be observed.
- Consistency demands that similar semantics are expressed in similar syntactic forms. Usability is easily compromised when similar information is obscured by presenting it in different ways. For example, tools should always represent the same action with the same icon.
- Diffuseness describes the verbosity of the language. For example, large icons and long words reduce the available working area and thus increase diffuseness.
- Hard mental operations place a high demand on cognitive resources, such as tasks that require users to remember information without proper tool guidance.
- Provisionality is the degree of commitment to actions. Does the system support provisional actions such as keeping potential design options or incomplete versions?
- Progressive evaluation requires that work-to-date can be checked at any time. Users benefit from knowing their progress and from information on the current stage of a process.
Multiple trade-offs between dimensions can be observed.For example, reducing viscosity (i.e., reaching a goal in less steps) may require additional abstractions (e.g., automating certain steps), which can then introduce hidden dependencies (e.g., steps automated in the background are unclear to the user).Trade-offs have to be discussed based on the concrete user activities and the discussed information artifact.
Research approach
Our study investigates two research questions on the usefulness of control software development tools' capabilities as implemented in the tool 4diac IDE:
RQ1: What is the usability of the tool capabilities for maintaining unknown and complex software?
RQ2: What is the utility of the tool capabilities for maintaining unknown and complex software?
For this, we assume the following maintenance setting for (legacy) control software, inspired mainly by the scenarios S4 and S5: maintenance work is required in an existing plant. The motor of one conveyor belt is broken and needs to be replaced. As an identical model is no longer available, a newer version of the motor has to be installed. Therefore, adaptations to the control software are required to reflect the changed control interface.
Regarding RQ1, we assessed the tool capabilities implemented in 4diac IDE from the perspective of industrial end users, guided by the CD framework and Nielsen's usability dimensions [14].Regarding RQ2, we investigated whether users can successfully perform maintenance tasks using 4diac IDE and how they perceive the usability of the tool.We also collected the perceived opportunities and risks [38] of using 4diac IDE in practice.The goal of our user study was to cover the key capabilities of IDEs for visual languages with respect to handling large and complex applications.
In this section, we provide an overview of performing user studies based on the CD framework.Figure 4 provides an overview of the study process.In the following sections, we discuss our usefulness study of 4diac IDE in detail.
Preparation and initial assessment
We first analyzed existing tools and the literature to distill common tasks and tool capabilities for end users in typical control software maintenance scenarios (cf.Sect.3.1).We designed four maintenance tasks based on these scenarios to reflect realistic practical settings.We followed the guidelines by Ko et al. [39] to select the tasks to be conducted by our subjects.
We also discussed how the required capabilities are realized in the language IEC 61499 (Sect.3.2) and in the available tool environments (Sect.3.3).Assessed tool capabilities and their realization in 4diac IDE are discussed in Sect.6.We then assessed 4diac IDE using the CD framework [17] to reveal usability flaws that could bias the study with industrial end users (Sect.7).Specifically, we performed a walkthrough of 4diac IDE based on typical control software maintenance tasks and tool capabilities, to reveal usability issues requiring tool improvements before the actual study with engineers.We addressed potential showstoppers by adapting 4diac IDE.A showstopper is an unexpected behavior of the tool that can prevent completing the study successfully.For instance, we identified issues that could result in an inconsistent project state.
Study design and pilot study
We first defined the study method based on our findings from the CD assessment, following the guidelines for conducting empirical studies described by Runeson and Höst [40] and Ko et al. [39].Based on the selected study system and the maintenance tasks to be conducted by subjects (cf.Sect.5.1) we defined the experimental setting, the data sources and collection methods, as well as the data analysis and reporting process (cf.Sect.5.4).Before the actual study, we conducted pilot experiments to reveal potential flaws in the designed tasks or bugs that could influence the results of the study.We tested the task difficulty ourselves and in test runs with PhD students from our department.Furthermore, we conducted a pilot study with two students from the computer science field, who have never used 4diac IDE before and with an industrial automation expert.Based on their feedback, we made minor adaptations to the study method, e.g., we reduced ambiguities by rephrasing text in the instructions given to users before the study.To increase the participation of industrial users, our goal was to ensure that subjects could complete all tasks in less than one hour, while still covering the key activities.
To ensure similar basic knowledge of the language IEC 61499 and 4diac IDE, we created a video1 (7 minutes long) in which we outlined both.The video shows the version of 4diac IDE that was used in the study.We did not explain the features required in the study in detail to also assess the discoverability of features in a complex IDE such as 4diac IDE.Using a prerecorded video ensured that all subjects received the same information prior to the study.
Study process and data collection
All subjects were asked to watch the introductory video to the tool before the study.The study was conducted in a remote setting via a video conferencing tool (Zoom or Skype).If the subject agreed, the session was recorded including the screen with all mouse movements and the audio (eventually we could record 16 of 17 sessions).We conducted the following process separately with each subject.
Briefing
The moderator first explained the goals and purpose of the study to the subject and requested their consent for participating in the study.Also, the moderator asked whether the subject had watched the introduction video and whether there were any open questions.
We asked the subject to activate the webcam, so that we could observe the subject, e.g., facial expressions, when performing the tasks.The subject had to share their screen in the call so that all their actions could be observed.As a last step, the moderator assisted the subject in starting 4diac IDE and importing the 4diac IDE project that contained the study system.The next phase of the study started as soon as the control application was opened in the graphical editor of 4diac IDE.
Tasks
Each subject performed the tasks described in Sect. 6.2. The moderator read each task aloud and supported the subject on request. The moderator also encouraged subjects to explore the tool themselves before offering advice. We asked each subject to "think aloud" [13], i.e., to describe what s/he was doing and to comment on any concerns. One scribe documented the think-aloud statements. Another scribe watched the subject, who performed the maintenance tasks, and took additional notes on interesting observations beyond the think-aloud protocol.
Data collection
After the subject had completed all tasks, the moderator performed semi-structured interviews on utility and usability [38] with each subject, covering questions on the results of the cognitive dimensions assessment (cf.Sect.7).Regarding usability, we asked questions such as "How did you like the tool capabilities for restructuring the application" or "How did you like the possibility to view the contents of a subapp within its context?".These interview questions allowed general discussions on tool capabilities.Finally, the subject got a link to access a usability questionnaire that was created with the tool LimeSurvey and hosted on a server of our university.The questions focus on gathering quantitative data and are based on Nielsen's usability attributes [14].They cover the five tool editors and views of 4diac IDE presented in Fig. 5, i.e., the Application Editor, the Outline, the System Explorer, the Properties View, and dialogues.We phrased the attributes as questions, e.g., "How easy was it to learn working with 4diac IDE?".Regarding utility, we asked questions [38] such as "What opportunities do you see for your company when using this tool in daily business?".We also collected demographic information, such as education and work experience.As the questionnaire is filled in asynchronously, subjects have time to reflect on their answers and provide new insights, for instance, in the text boxes.The templates that we used for writing think-aloud and observer protocols, the usability questionnaire, the list of tasks, as well as the questions of the interview are available online [41].
Data analysis and reporting
All think-aloud protocols and observer notes were stored in a cloud storage hosted by our university. Using an open coding technique [15], one researcher related all statements to the activities and tool capabilities. This work was checked by two other researchers. A total of over 900 think-aloud statements and 370 observations were recorded by the scribes. Per subject, we collected about 10 pages of material. In a joint session, all authors assigned the identified statements to the cognitive dimensions. We could directly relate many think-aloud statements with the cognitive dimensions discussed in Sect. 4. We discussed the interpretation of all think-aloud statements and observer notes as well as the answers given by study subjects in the interviews to derive implications on usability and utility (Sects. 8 & 9) and also general implications for tool developers (Sect. 10). As we related interview questions regarding usability with activities and cognitive dimensions, we can discuss the subjects' answers in the light of the CD framework.
To address usability issues that were identified in this study, various bug fixes and tool improvements were implemented.All changes were published as part of the opensource project Eclipse 4diac.
Industry partner study
The initial industry partner study was conducted with version 1.14.0RC1 of 4diac IDE.Ten experienced automation engineers were nominated by our industry partner as subjects in the study.The subjects had an average of 14 years of experience in control software development, ranging from 1 year to 30 years.They have been working for their current employer between 4 and 30 years, on average 13.6 years.All subjects have an educational background in engineering, eight of them from a college or a university.One subject served as a pilot subject to reveal problems in the setup and to reduce the number of issues and misunderstandings during the study.Seven subjects had participated in at least one workshop on developing control software with IEC 61499 and already had at least basic knowledge of software development based on this standard, as well as basic experience using the tool 4diac IDE.Three subjects used 4diac IDE for the first time during the study.All ten subjects from the industry partner were male, but two female students participated in the pilot study.The study results from the expert pilot subject are included in the analysis.
Extended reassessment study
In a follow-up user study, version 2.1.0RC2 of 4diac IDE was evaluated, which included the tool improvements that were implemented after the industry partner study.Three male students participated in a pilot study, two of them had prior experience in 4diac IDE.During the pilot study, no major issues of 4diac IDE were identified.We therefore conducted a user study with seven industry experts (one of them female), each from a different company.Two of the companies are control system vendors, two special machine builders, one a plant builder, and two automation software vendors.They are covering a broad spectrum of production automation subdomains.All subjects have experience in control software development, ranging from 2 years to 17 years, on average 12.7 years.They have been working for their current employer for 0 to 17 years, on average 4.8 years.All subjects have an educational background in engineering from a university.Six subjects have previous experience in developing control software with IEC 61499.While two of them have only little prior experience with 4diac IDE, others have up to 10 years of experience using the tool (cf.Sect.11).One subject used 4diac IDE for the first time during the study.
This extended reassessment study aimed at collecting feedback from a broader audience, as the participants of the industry partner study may have evaluated the tool influenced by their common company culture.Furthermore, it allowed evaluating the tool improvements that were created to address usability flaws in 4diac IDE 1.14.0RC1.This is particularly relevant for identifying trade-offs of these design choices: Features addressing one cognitive dimension may negatively affect other dimensions.For these trade-offs, expert feedback is particularly valuable.
Development environment under test
Eclipse 4diac [11] implements a tool environment for the visual modeling language that is defined in the industrial standard IEC 61499 [7] and is targeted at domain experts, i.e., automation engineers.The environment includes a modeling tool (IDE) and a runtime for executing models on various platforms.In this section, we discuss 4diac IDE and tool capabilities that are relevant for common control software maintenance tasks.
The 4diac IDE
Eclipse 4diac is an open-source environment for modeling systems based on IEC 61499. It includes both an IDE and a runtime environment for executing IEC 61499 applications. The IDE comprises:
- A graphical and textual editor for developing FB types,
- A graphical application editor for instantiating types and creating FB networks,
- A graphical editor for the hardware configuration,
- A tool for launching and managing runtimes on a PC,
- Basic support for monitoring and testing.
4diac IDE is developed in Java and Xtend as a set of plug-ins for the Eclipse platform. It uses technologies that are commonly applied for editors of DSLs: an EMF metamodel, Xtext parsers, and the graphical editing framework (GEF3) [42].
4diac IDE is structured into five views (cf.Fig. 5).The System Explorer on the left lists all projects and their contents.Each project comprises (i) the IEC 61499-system model and (ii) a project library containing FB types that are either defined in the standard or created by a developer for the project.An overview of the system model is provided as a tree that shows all applications with their full hierarchy and all instances of FBs, as well as the system configuration with all devices and resources.This tree view allows the developer to get an overview of the system model, which provides information on the hierarchy between FBs.The connections between FB instances are only shown in the graphical editor.From the System Explorer, users can navigate to the corresponding location in the graphical Application editor.In this editor, the network of FBs is shown, new instances can be added, and connections can be added or reconnected.Individual items can be modified in the Properties View.When an FB instance is selected, its settings are shown in several tabs including the instance name, the descriptions of an instance, and its interface.Information that is defined by the type is provided as read-only.This includes the type description, version information, and the interface of the FB.The Outline allows navigating and orienting in large applications as it provides an overview of the full drawing area.Some editing operations, such as creating new types, involve Dialogues.
Figure 6 shows the adaptations to 4diac IDE, which address usability flaws that were identified in the industry partner study.
Industrial automation engineers can particularly benefit from advanced IDE capabilities when working with large-scale applications.The tool needs to provide high performance for navigating through applications with thousands of instances.Furthermore, information has to be well accessible, so that information about the environment of a component can be discovered and inconsistencies can be detected easily.The variability of production plants furthermore results in high requirements and challenges for reusing application parts.Vendor-neutral solutions are thus preferable to fully benefit from the hardware-neutral design of applications in IEC 61499.
Assessed tool capabilities
The goal of this study is to evaluate the usability and utility of tool capabilities for large-scale applications.Typical tasks of industrial developers were defined based on the scenarios S4 and S5 described in Sect.3.1.Automation software engineering involves different groups of developers: While users with a computer science background create basic software components (FBs), automation engineers design applications by connecting these FBs [9].Developing new FB types is outside the scope of our study, which focuses on editing (maintaining) existing application models using a pre-defined library of FB types.Based on our analysis, we choose the following tasks for our user study.The tasks follow a maintenance scenario to create a realistic setting for the study.We used an IEC 61499 application of a workstation where a robot assembles parts that arrive on a conveyor belt (adapted from [43]).We extended it to cover three identical parallel stations (referred to as Left-, Middle-, and RightCappingStation). Furthermore, we introduced additional hierarchies to better serve the purpose of the study.For the study we mimicked a situation where a broken motor in the station is replaced by the product of a competitor, which requires adapting the respective parts of the control application.For instance, the subjects had to find the location of the motor in the software, save the current version to the library, and replace it with an updated software for the new motor.
A video demonstrates the tasks in 4diac IDE.
Orienting in an unknown application
Industrial automation engineers frequently have to perform maintenance tasks directly on-site at the machine.In this situation, the engineer has to quickly navigate through a partly unknown control application.In our study, we therefore ask subjects to (i) find the application part controlling the motor that will be replaced, (ii) find all other motors in the application and where they are located, and (iii) follow an event connection to identify which application part is triggered next.
Creating/removing hierarchies
Following a workflow for reusing legacy software [25], subjects group existing control code in a subapp and name it with a valid IEC 61499 identifier.After adding another block to the subapp, they need to manually inspect that all connections are properly updated as well.Finally, subjects remove a needless grouping: In the study system, we created a subapp that only contains a single block.Operations that modify hierarchy levels are typical for refactoring an application and structuring it based on the physical composition of the automation system.
Working with the library
Subjects save their newly created subapp (from task 2) to the library for later reuse in other projects.They add a provided file that defines a subapp for the new motor to the library from outside the tool.Subjects then replace their subapp with an instance of this new type, while keeping the connections intact.
Editing
Subjects can only perform edits in an untyped subapp.Hence, they need to detype their instance of the motor, i.e., convert the typed instance into an untyped one.In this task, we analyze general editing features.We ask subjects to extend the interface with ten parameters to test the editing capabilities also for larger amounts of data.Furthermore, they add three FB instances of existing types from the library, edit an event connection, and add several new connections.Finally, parameters of constant type are added to three inputs.
Cognitive assessment of 4diac IDE
As preparation for the user study, we assessed the capabilities of 4diac IDE (regarding the four tasks described above) using the CD framework (cf.Sect .4 and Ref. [17]).The CD framework differentiates four basic types of user activities: incrementation, transcription, modification, and exploratory design [16].Maintaining software in an IDE for a visual programming language is related to exploratory design as it combines incrementation with modification without knowing the desired end state in advance: Adding new elements (e.g., FB instances, connections) to the application can be considered an incrementation, as it adds further information without altering the existing application structure.Adding/removing hierarchies or adjusting the block position can be considered a modification.Each user activity involves usability trade-offs regarding one or several cognitive dimensions.For example, a high viscosity, i.e., the resistance to change, is harmful for modification and exploration activities, but has less impact on the one-off tasks performed in transcription and incrementation.For each of the four maintenance tasks described in Sect.6.2, we analyzed how well 4diac IDE addresses the relevant cognitive dimensions.Some dimensions are relevant for all activities and thus crosscut our structure.For example, for the premature commitment dimension, the following questions need to be considered: Are there strong constraints on the order in which the tasks must be accomplished?Are there decisions that must be made before all the necessary information is available?Can those decisions be corrected or reversed later?Specifically, our aim was to reveal potential showstoppers for each task, which could inhibit the successful completion of the user study.We tested and analyzed 4diac IDE based on our defined tasks and the CDs to reveal such errors, but also considered prior experiences with users.In our discussion below, we highlight such cases with the keyword FX, indicating that we fixed and improved 4diac IDE before involving industrial users.The label OK highlights dimensions that we considered sufficiently supported according to the CD framework.We did not focus our user study on these dimensions.All other paragraphs, highlighted with ST, describe dimensions which we need to investigate more closely in our user study by refining our research method accordingly.
For each maintenance activity, we explain important tool capabilities and the affected cognitive dimensions.
Summary Boxes highlight important findings.
Orienting in an unknown application
The System Explorer (cf.Fig. 5) view shows all elements of a block diagram as a tree.Each element can be selected and opened in the graphical editor.Selected elements are highlighted and their attributes are shown in the Properties view.The Outline (cf.Fig. 5) provides an overview ("minimap") of the diagram that is opened in the graphical editor and also allows navigating in the diagram.
CD assessment
For orienting in unknown applications, developers can rely on Secondary Notations that 4diac IDE provides: instance names, instance comments, as well as type names and type descriptions indicate the functionality of an FB type or the role of its instance (OK).Blocks, however, do not visually represent their behavior and therefore have a low Role Expressiveness.For example, an FB adding two numbers could be represented graphically with a mathematical symbol (ST).Hierarchical compositions (subapps) increase Abstraction (OK), but reduce the Visibility of application parts (FX): when application parts are structured into a subapp, they cannot be viewed in their context anymore.Only the contents of a single (sub-)application is shown in the graphical editor.Several graphical editors can, however, be opened in parallel and arranged freely to compare application parts (Juxtaposability, OK).In large (sub-)applications, we identified that a selected FB instance and its connections are difficult to find, although they are highlighted by a border (Diffuseness, FX).Finally, the IDE does not provide any navigation along connections, although developers may need to follow a signal path across hierarchical levels (Hard Mental Operations, FX).
Viewing models.Understanding a hierarchical model requires viewing the contents of abstracted parts in their context.
Tool improvements
After the assessment, we enhanced the graphical editor with additional mechanisms to navigate along dependencies and between hierarchies.When a pin is selected, users can quickly navigate to all connected pins that are listed in a selection dialogue.For subapps, we added a quick link to access the FB network they contain and vice versa.A new feature for expanding a subapp (cf.element 1 in Fig. 5) allows viewing its contents as part of the surrounding FB network and thus increases visibility.We improved the highlighting of selected FB instances with a blue overlay, which resembles the highlighting for text.A transparent overlay is shown already upon hovering over FBs and pins, thus better visualizing the objects that are available for user interaction (cf.elements 1 and 2 in Fig. 5).
Creating/removing hierarchies
Subapps and adapters group elements to a hierarchical structure.Developers can design new subapps either bottom-up with a self-defined interface, or create them top-down from an existing network of FBs.For the latter, the tool infers the required interface from the existing connections.FB instances can be added to alter the scope of a subapp.If required, 4diac IDE updates the subapp interface automatically.A subapp can also be flattened, i.e., deleted and replaced by its contents.
CD assessment
Untyped subapps (cf.Sect.7.3) have a low Viscosity as they can be easily created from a network of FBs (OK).As the subapp interface is updated automatically, also the Error-Proneness is reduced (OK).However, the reverse operation of moving an FB instance to the parent network is not supported (Premature Commitment, FX).Adapters group connections to a single communication link between two FB instances, which reduces Diffuseness of the application diagram (OK).However, the limited accessibility of the abstracted pins also reduces Visibility (FX).Compound data types ("Structs") could group data connections and increase Abstraction, but are not supported in 4diac IDE (FX).
Structuring models.Efficiently creating and removing hierarchy levels supports developing well-structured models.
Tool improvements
We added a feature to move FB instances from a subapp to the surrounding FB network while automatically adjusting the subapp interface.A tabular editor was developed for creating Structs and we improved the data type selection.As dropdown menus were not suitable for a large number of data types, we substituted them with an autocomplete field and an optional selection dialogue.For all compound data types, we added a link to the type editor to quickly access the abstracted pins.
Working with types
Typed subapps are defined in IEC 61499 and are stored in the library together with FB types. Additionally, 4diac IDE supports untyped subapps that are used only in a single location. Their functionality resembles anonymous classes in object-oriented languages. Users can save an untyped subapp as a type for later reuse, or detype a typed subapp to perform changes in a single instance (i.e., convert a typed subapp into an untyped one). They can also replace any instance with another type. If the pin names are identical, connections are automatically updated.
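The difference is also visible in the stored model: an instance of a typed subapp merely references its type, whereas an untyped subapp carries its interface and network inline. The following is a rough sketch under the assumption that the tool serializes untyped subapps in place; all names (MotorGroup, MotorSubApp) are chosen for illustration, and the exact elements and attributes may differ between tool versions.

  <!-- instance of a typed subapp: the contained network lives in the library type -->
  <SubApp Name="MotorGroup" Type="MotorSubApp" x="200" y="100"/>

  <!-- untyped subapp: interface and contained network are stored in place -->
  <SubApp Name="MotorGroup" x="200" y="100">
    <SubAppInterfaceList> ... </SubAppInterfaceList>
    <SubAppNetwork>
      <FB Name="Motor" Type="MotorCtrl" x="60" y="40"/>
    </SubAppNetwork>
  </SubApp>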
CD assessment
Detyping a subapp removes the connection to its type definition, thus turning it into a clone of the original type.Editing the type will not affect this instance anymore, leading to Error Proneness, as users may forget to also modify the untyped copy (ST).Graphically, untyped subapps strongly resemble typed ones (Role Expressiveness, FX).If a pin is renamed in a type, connections to this pin in its instances are lost, resulting in the need for manual changes (Viscosity, ST).
Reusing functionality.
Storing parts of a model in the library helps reusing the functionality, but edits may be required for some use cases.
Tool improvements
We implemented new features to automatically update or unmap all types in an editor to reduce the number of manual operations.FB instances now have an icon indicating their type.We also redesigned our icons and added a new icon for typed subapps to better differentiate them from untyped ones.
Editing
4diac IDE supports adding FB instances to the application from the System Explorer and from the Palette.Both views are by default next (left and right) to the graphical editor (cf.Fig. 5).Connections can be added via drag and drop between pins, if the data types are compatible.
CD assessment
The Viscosity for changing the layout of an application is very high, as all affected FB instances and connections have to be adjusted individually (FX).This high viscosity may increase the Enforced Lookahead when developers have to know the target program structure early.For adding new FB instances, the respective type has to be selected outside the graphical editor, which may constitute a Hard Mental Operation (FX).Editing the interface of an untyped subapp has a high viscosity because the tables do not support copy/paste or keyboard navigation (FX).Adding or changing connections is difficult due to the lack of visual feedback that would illustrate which parts a user can interact with (Visibility, FX).
Inserting blocks.The frequent task of inserting blocks should be supported directly in the graphical editor and with sophisticated search features.
Tool improvements
We created a dedicated in-place field for searching types and inserting instances directly in the editor.It can be quickly accessed by double-clicking on the diagram background.We added a selection and hover feedback to connections to improve the user experience for editing connections.Furthermore, we added support for the Eclipse Layouting Kernel3 to allow for automated placement of FB instances and connections.We greatly improved all tables: They support keyboard navigation, have consistent columns, and allow copy and paste also between tables.Furthermore, the automated suggestions for a newly added row were improved and now depend on a selected row.
Cross-cutting aspects
4diac IDE always enforces correct models, which impacts Provisionality and Premature commitment.We decided to study these dimensions in more detail in our user study (ST).
To prepare for the study, we identified and fixed several general issues, especially regarding Consistency (FX). Specifically, we revised our menu entries to be consistent with those of the Eclipse platform. Where IEC 61499-specific terms are introduced (e.g., a System), they are now used consistently also within dialogues. Considering our graphical editors, we learned that the framework GEF3 [42] does not handle zooming or scrolling correctly. For instance, new FBs were always inserted at the top of the diagram. This was a showstopper we had to fix (FX) for handling large applications, because the reported location did not consider whether the user had adjusted the view via zooming or scrolling. We also considered the results of our initial cognitive dimensions assessment marked as ST or FX (cf. Table 1). In our second study, we sought detailed feedback on whether our changes resolved the original usability issues.
Industry partner study results and discussion
In our discussion, we report the results from the ten industrial experts who participated in the industry partner study (including the industrial expert who served as a pilot subject). We focus on results related to dimensions that required further analysis according to our cognitive assessment, i.e., that were marked as ST (we wanted to investigate these in more detail in the study) or FX (to evaluate the usefulness of our tool improvements).
(Table 2, excerpt: Graphical representation of FBs (2). Opportunities: Platform-independent software (4), Better meet requirements for development (4). Threats: Currently not all required features are supported (4), Small development community (3), Long-term support ( ), Customers request specific hardware (3).)
Summary Boxes again highlight important findings.
The qualitative results regarding the strengths and weaknesses of 4diac IDE are summarized in Table 2.For each task, we present the detailed results and relate each aspect to the cognitive dimensions (cf.Table 1).
Orienting in an unknown application
All subjects relied on Secondary Notations such as names and comments to find the application parts that represent a motor.Two subjects asked for adapting the concrete syntax of a block to graphically represent its functionality (Role Expressiveness).Six subjects wanted to use a search feature for finding an instance by its name (Visibility).After identifying the required block type for a Motor, 5 subjects furthermore requested a direct link to all instances of a certain type (Hidden Dependencies).As 4diac IDE does not support an automated search, subjects had to navigate through the model manually, either via the tree view (System Explorer) or in the graphical editor.The implemented shortcuts for navigation helped subjects to quickly move across hierarchy levels, yet we observed that a move from one level to another still poses a Hard Mental Operation.Four subjects mentioned that the path from the root of the control software model to a block instance is not visible in the editor, which may have hindered orienting in the application.Six subjects specifically reported difficulties in identifying the current editing location (Hidden Dependency).
Navigating through hierarchies.Structuring models hierarchically is essential for large-scale models, but tools need to support the modeler in understanding hierarchical models with hints on the current editing location and with sophisticated navigation features.This complements the finding on viewing different parts of models simultaneously from the cognitive assessment (Section 7.1.1.).
We could not observe any major difficulties in selecting blocks or pins, despite our analysis in the cognitive assessment.Hence, the new selection feedback may have decreased Diffuseness.
Interaction feedback. Hover and selection feedback help users explore which elements of a model can be selected. Graphics frameworks may offer possibilities for implementing such feedback layers, which should be consistent with code highlighting in textual editors.
In 4diac IDE, each hierarchy level is opened in a separate editor tab, thus affecting Diffuseness.In the interview, we therefore asked subjects whether they had preferred navigating within a single tab.Seven subjects considered it important to view different parts of the graphical model side-by-side, but 5 subjects preferred a single tab as default mechanism.Concerning Abstraction, one subject positively remarked that the hierarchy structures the application and reduces the diagram size.Two other subjects, however, mentioned that the deep hierarchy of the demo application hindered understanding the application.The possibility to expand a subapp (i.e., show the content of a grouping in the context of the next hierarchy level) aimed at increasing the Visibility, but only 2 subjects used it actively.During the interview, the feature was, however, considered useful by 5 subjects.Further enhancements may be required to improve the utility of this feature.
Creating or removing hierarchies
All subjects remarked positively on the low Viscosity of editing untyped hierarchical structures (untyped subapps) when aggregating the blocks controlling a motor. We furthermore did not observe any major issues while moving FB instances across the hierarchy (Premature Commitment). The subjects had to create a new subapp from three FB instances. We observed that 2 subjects first created a subapp and then added the FBs, while 8 subjects used the dedicated feature that creates a subapp directly from a selection of FBs. 4diac IDE does not suggest a specific order of operations for this task (Premature Commitment).
For deleting a hierarchy level, subjects used various approaches: 5 found the dedicated refactoring feature, 3 moved the content and manually deleted the empty subapp, and 2 used cut and paste.The last approach does not automatically update the connections, which was requested as an improvement by 5 subjects throughout the study.As cut and paste stores connections within the same hierarchy level, this request can be related to the cognitive dimension Consistency.No subject expanded the subapp to increase Visibility, which would have allowed drag and drop of the contained FB.
Redundancy.Offering different ways to achieve a single goal improves explorability of a tool's features, such as a menu entry and a capability directly in the graphical editor.
In the interview, we asked subjects whether they liked the possibilities for restructuring an application.7 subjects confirmed that they liked the current refactoring features, but 5 of them requested further improvements.One subject criticized the automatically generated pin names, which may violate coding guidelines for a software project.As a result, each pin name would have to be adjusted manually (Viscosity).
Working with types
In this task, subjects had to save their subapp (i.e., the composed block) created in the previous task, to the type library.Within the respective dialogue, 5 subjects struggled to select the right target folder from the list of all projects in the workspace.Hence, 2 of them requested that the type library of the currently edited project should be preselected by default (Diffuseness).Five subjects had difficulties in creating the required folder in the type library using a file dialogue provided by the Eclipse platform.Specifically, they expected a dedicated button or context menu entry in the dialogue, while folders can only be added by specifying the new folder name as part of the save path, or before opening the dialogue (in the system explorer).We can relate this observation to the cognitive dimension Premature Commitment.We also observed difficulties in distinguishing the provided views: one subject tried to create the new folder in the Palette, which is used in many Eclipse-based modeling tools for adding elements to the diagram (Consistency).
Subjects also had to add a file from the file system to their project.This task can be completed via drag and drop or copy and paste from the file explorer to the type library.Nine subjects had difficulties in adding the file.One of them, who had no prior experience in 4diac IDE, could not complete this task without detailed instructions from the moderator (Hard Mental Operation).
Step-by-step.Provide dedicated dialogues for difficult tasks, but offer fast workarounds for expert users.
Next, we asked subjects to replace their own motor controller with an instance of the newly imported type.The automatically generated pin names do not match those of the imported type and, therefore, 4diac IDE does not automatically handle the connections.As connections are dismissed without a prior warning (requested by one subject), this targets the cognitive dimension Error Proneness.
Visualize changes.Users need to understand the consequences of actions and require support in fixing inconsistent models.Information loss must be prevented.
Editing
In this task, the typed subapp had to be converted to an untyped one, which could be completed by all subjects.When instructed to add a pin to the interface, 5 subjects attempted to double-click on the background of the table to create a new row.As a double-click allows adding new FB instances in the graphical editor, this can be classified as a Consistency issue.
Table editing.Users expect editing features for tables within the grid, rather than relying on external buttons.
Then, subjects had to add three new blocks. Most subjects used the in-place search field that was implemented to avoid a Hard Mental Operation. It is accessible via double-click or via the context menu (the way preferred by one subject). One subject, however, preferred the Palette for creating new instances. Another subject used drag and drop from the System Explorer.
Inserting blocks should be possible directly in the graphical editor, supported by advanced search features.This confirms the corresponding finding from the cognitive assessment (Section 7.4.1.).
Four subjects mentioned that they liked the automatic layout for applications.Whereas one subject requested that the tool automatically applies a new layout after adding a block (Viscosity), another subject preferred manually triggering this process.For 2 subjects, we observed difficulties in orienting in the application after applying a layout.
Although we adapted the way of handling and creating connections, we observed difficulties in performing this task. For instance, reconnecting requires first selecting a connection, but subjects attempted to immediately drag the handle. As moving FB instances is possible without prior selection, this was regarded as an inconsistency by 3 subjects. Also, for one subject, the line weight of the connections was too small (Visibility). 6 subjects furthermore requested that the routing of newly created connections be improved (Viscosity).
Block layout. Quickly editing graphical diagrams requires sophisticated layout algorithms that are tailored to the needs of the users.
We asked subjects to identify which variables are contained in the Struct named ctrl. A quick link for accessing its contents is available in many locations. As a result, all subjects found the type definition quickly, indicating a benefit of the implemented redundancy (Visibility). We also requested that subjects enter constant values for parameters. One subject expected better consistency between the Properties view and the graphical editor (Viscosity).
Update everywhere.Edits should be displayed in all views of the model instantly, not only after a new entry has been confirmed.This finding is related to the one on visualizing changes.
4 subjects furthermore considered the input validation insufficient (Error Proneness).
Cross-cutting aspects
We further identified issues that are relevant for all of our tasks. Subjects had difficulties with the naming of some context menu entries, especially where they considered several menu entries as potentially fitting for their current task (Role Expressiveness): for instance, Flatten subapp removes a hierarchy level and replaces a subapp with its contents, while Toggle SubApp Representation expands a subapp (2 subjects). The icon size was reported to be too small (2 subjects, Visibility). Four subjects had difficulties understanding the icons and 2 could not differentiate them (Consistency).
Extended reassessment study results and discussion
We adapted 4diac IDE based on feedback from the industry partner user study.Most adaptations required a trade-off between several cognitive dimensions.Further studies, also with industrial experts, are needed to evaluate the effect of these trade-offs on the usability and usefulness of the modeling tool.We report the results of a further cognitive assessment and an extended reassessment user study below.
Cognitive assessment and tool adaptations
In a new cognitive assessment, we evaluated all changes with respect to our maintenance tasks. We analyzed which cognitive dimensions were improved (+), not affected (OK), or impaired (−). Some aspects required further investigation in the usability study (ST).
Orienting in an unknown application
The industry partner study revealed significant Diffuseness (+) for this task, as subjects had difficulties exploring the context of their current editing location.A breadcrumb widget (cf.Fig. 6) now displays the path to the currently open element that is displayed in the graphical editor (Visibility, +, Closeness of Mapping, OK).Each element is displayed with the same label and icon as in the tree view (Consistency, OK).
The breadcrumb widget provides an additional mechanism for navigating through the automation software, as each element of the path can be selected to jump to a higher level in the hierarchy.This constitutes a Hidden Dependency, ST, as selections in the breadcrumb widget replace the contents of the graphical editor.Additionally, a tree navigation can be opened starting from any hierarchy level.Users may have difficulties identifying that the arrows are buttons rather than decorators (Abstraction, ST).Subjects from our industry partner reported problems with the high number of open tabs (Diffuseness, +).Unfortunately, this tool adaptation required a trade-off, as navigating in a single editor reduces the Juxtaposability (-).Comparing two parts of an application is still feasible within a single tab, but additional parts of the same application cannot be viewed simultaneously (Visibility, -).
A new search dialogue was implemented to find elements more easily in large-scale graphical models. We used the infrastructure provided by the Eclipse platform to implement the search capabilities of our modeling tool. The software model is traversed based on the provided search request: a search string is matched against instance names, type names, comments, and interface pin names. Identifying the appropriate search request for a task may constitute a Hard Mental Operation (ST). All changes are shown in Fig. 6.
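To make the behaviour of such a search concrete, the sketch below shows one way a model tree could be traversed and matched against a query string. It is an illustrative example only; the element and attribute names are hypothetical and do not reflect the internal model API of 4diac IDE.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """Simplified stand-in for a model element (FB instance, subapp, ...)."""
    name: str
    type_name: str = ""
    comment: str = ""
    pins: list[str] = field(default_factory=list)
    children: list["Element"] = field(default_factory=list)

def search(root: Element, query: str) -> list[Element]:
    """Collect all elements whose name, type, comment, or pin names match the query."""
    q = query.lower()
    hits: list[Element] = []
    stack = [root]
    while stack:
        el = stack.pop()
        searchable = [el.name, el.type_name, el.comment, *el.pins]
        if any(q in text.lower() for text in searchable):
            hits.append(el)
        stack.extend(el.children)
    return hits

# Example: find everything related to a motor, regardless of hierarchy level
app = Element("CappingStation", children=[
    Element("MotorM1", type_name="MotorCtrl", comment="drives the conveyor"),
    Element("Blocker", type_name="Cylinder", pins=["motor_ready", "extend"]),
])
print([e.name for e in search(app, "motor")])  # -> ['Blocker', 'MotorM1']
```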
Creating or removing hierarchies
Subjects from our industry partner remarked that automatically generated pin names may violate naming conventions.We therefore generate names based on the initial pin names, rather than adding a prefix (Viscosity, +).Furthermore, cut and paste now restores connections even across a hierarchy level.This allows using known shortcuts also for complex refactoring operations rather than dedicated context menu entries.As cut and paste has already retained connections within the same network, the extension improves Consistency (+).
Working with types
Replacing the type of a motor involved a high Premature Commitment because invalid connections were dismissed. Following the recommendations of the industry partner study, error markers were introduced in 4diac IDE [33], which are displayed as red boxes (Abstraction, ST). The markers allow working with incomplete or inconsistent models and thus increase Provisionality (+). When replacing the motor subapp, manual work is typically required to restore connections after the pin names have changed (Viscosity). Rather than dismissing connections to a removed pin, an error marker is now displayed at the interface. Invalid connections with a type mismatch are also represented via error markers. In the user study, we need to investigate whether correcting errors poses a Hard Mental Operation for developers (ST). Error markers are removed automatically once the inconsistency has been resolved, to reduce Viscosity.
Editing
We improved the synchronization between the property sheets and the graphical editor to immediately visualize the results of an editing operation.Furthermore, the tool now validates all user inputs and informs about any invalid inputs to reduce Error Proneness (+).The tool tip also includes guidance on correcting the input, for instance, by listing the correct format.Unfortunately, there is a trade-off: Enforcing correct inputs negatively impacts Provisionality (-).
Cross-cutting aspects
We also renamed context menu entries to increase their Role Expressiveness.
Adjustments to the study setup
The main goal of the extended reassessment study is to validate the new adaptations of 4diac IDE and also to get feedback from a wider range of participants. Therefore, we performed another iteration of the study method described in Sect. 5. The study material was only slightly adjusted. In particular, we added an interview question for the new error visualization. A new introductory video was created to show the updated version of 4diac IDE and was used to provide basic knowledge about IEC 61499 and 4diac IDE to the subjects before the study. The study system (capping station) was used again. We furthermore informed all subjects that a new search feature was available.
Fig. 7 Learnability of the views in 4diac IDE, rated between very easy and very difficult
Fig. 8 Number of errors that occurred in 4diac IDE, rated between no errors and too many errors
Fig. 9 Subjective satisfaction of using 4diac IDE, rated between very pleasant and very unpleasant
Results
We grouped our findings for each task based on the cognitive dimensions.The quantitative results from the questionnaire are presented cumulatively for both user studies.Subjects rated the learnability (Fig. 7), the number of errors (Fig. 8), and their subjective satisfaction (Fig. 9).They also reported their perceived efficiency of the tool 4diac IDE (Fig. 10).
The questionnaire allowed subjects to provide a subjective categorization of the perceived tool usability and utility. How many errors are perceived as "too many," or what counts as "easy" or "difficult," may therefore vary among subjects. However, in this kind of qualitative study such subjective feedback is very important. It indicates where users perceived issues or felt uncomfortable. These are the places where tool developers need to reconsider their implementation and derive measures.
The questionnaire was, furthermore, filled in by each subject individually.Subjects could choose not to provide an answer for a certain view (reported as "n/a").Subjects were sometimes unaware that they had used a view (e.g., Outline) or did not gather enough experience to evaluate this view w.r.t. a certain dimension.
The results of the questionnaire (cf. Figs. 7, 8, 9, and 10) show that the adaptations derived from the industry partner study (cf. Sect. 8) led to a significant improvement.
Orienting in an unknown application
Subjects frequently needed to identify the context when navigating through the application to find parts controlling a motor.All subjects used the breadcrumb widget (cf.Fig. 6) to identify the path to the currently open element at some point during the study.We, furthermore, observed that one subject, who had difficulties navigating through the application, used the breadcrumb widget very little.Another subject reported specifically that the breadcrumb widget helped orienting in the hierarchical application.Selecting an element in the breadcrumb opens it in the graphical editor, thus allowing the use of breadcrumb for navigating.Users can navigate either to a higher level (by clicking on the respective path element), or to a contained element (by opening a tree view).Six subjects preferred navigating to a contained element by double-click onto the subapp, but all subjects used the breadcrumb to navigate to the next higher level.Two subjects had difficulties identifying the currently edited path element in the breadcrumb.As selecting this element does not make sense, the button could be visualized differently (Abstraction), but currently is not.Four subjects encountered a Hard Mental Operation, when searching for the previous editing position.Although a back button is available in the toolbar, none of the subjects found this button without help from the moderator.It may help to add navigation buttons directly to the breadcrumb widget.We conclude that the breadcrumb helps visualizing Hidden Dependencies between the currently viewed software part and its context.
Where are we?Show the user the location of the currently edited element in the context of the full application (easily accessible, not hidden in a tree).This complements the finding on navigating hierarchies from the industry partner study.
Juxtaposability was affected when we introduced the breadcrumb widget, but two subjects stated that they preferred the new navigation within a single editor (less tabs reduce Diffuseness).None of the subjects in the extended reassessment study considered viewing more than two parts of an application a requirement for the IDE.Hence, the tradeoffs imposed by implementing the breadcrumb widget did not negatively affect the usability and usefulness of our tool.
The introduced search feature received positive feedback from all participants.The search dialogue allows customizing a search request.Most subjects did, however, not refine their search but kept the standard options.As a result, two subjects requested a feature to sort the results.Another subject remarked that it is difficult to differentiate various kinds of elements that a search request reveals.This may indicate that filter capabilities for the search results are required (Diffuseness).Two subjects, furthermore, requested that the path to a certain element is shown in the table listing the search results.This allows distinguishing search results with similar instance names (Visibility).Two subjects asked for a dedicated feature to find all instances of a type.While this is already possible by customizing the search dialogue, a subject remarked that this common search request should be pre-configured and easily accessible (Hard Mental Operation).In summary, providing a search feature has drastically changed the subjects' approach of finding a specific application part.We, however, conclude that the usability of search dialogues is essential to cope with large-scale software systems and needs to be further improved.
Searching models. Graphical modeling tools need sophisticated search features for orienting in larger models. Offer pre-configured search options for the most common tasks and provide sorting/filtering capabilities for efficiently analyzing the search results. The need for search features was already identified in the industry partner study; in the extended reassessment study, we tested the respective tool adaptation.
Creating or removing hierarchies
Four subjects used expanded subapps for exploring the application (Visibility), but disliked that the automated layout algorithm had to be triggered after each expansion. Two of these subjects discontinued using the feature due to this high Viscosity. We also discussed the advantages and disadvantages of subapps in the interview (Abstraction). Subapps can help in handling large-scale applications, as the hierarchy reduces the size of individual diagrams (mentioned by 3 subjects). Four subjects remarked that the hierarchy reduces the complexity of the model.
Fig. 10 Efficiency of using 4diac IDE
Working with types
In the extended reassessment study, all subjects could save their subapp as a type without further difficulties. Importing the file containing the provided typed subapp was not straightforward for all subjects. Although drag and drop from the file system explorer is possible, 2 subjects used a dedicated import dialogue (Hard Mental Operation). In the interview, one subject pointed out that typed subapps are an effective means of reducing code duplication in the model. S/he also remarked that the types facilitate understanding the application.
Editing
All subjects encountered an inconsistency in the model, which 4diac IDE visualizes with an error marker via the marker infrastructure of the Eclipse platform. When the source and/or destination of a connection is missing, a virtual pin is created instead. All subjects recognized the error based on the red color of the pin (Abstraction). Two of the seven subjects successfully reconnected the broken connection without any help from the moderator. One subject was surprised that the error marker disappeared automatically and reported that s/he wished to keep it for further reference (Premature Commitment). Another subject expressed doubts whether certain inconsistencies need to be fixed in order to run the model. The problems view lists all inconsistencies that are currently present in the projects. For FB types that are not instantiated in the project, the listed errors do not affect the execution of the IEC 61499 system (Diffuseness). For five subjects it was unclear that the error marker is not part of the FB interface, although it is represented as an interface pin (Role Expressiveness). Four subjects therefore compared the subapps to identify the differences between them. Only one of the subjects confidently identified the cause of the error (the name of the interface pin had changed). Two subjects asked for tooltip hints on how to resolve the error (Visibility).
Visualizing errors. Represent and handle errors in the model to allow for step-by-step improvement of incorrect models. Differentiate the severity of an error to allow prioritizing substantial flaws. The need for visualizing changes was already identified in the industry partner study; in the extended reassessment study, we tested the respective tool adaptation.
All subjects used the context menu for editing their application. One subject criticized the order of items in the context menu. 4diac IDE offers many edits in the context menu to provide meaningful operations directly in the graphical editor. One subject had difficulties because of the large number of items in the context menu (Diffuseness). However, as the Properties view is not always visible, two other subjects required assistance when editing elements in it. Editing the interface of an untyped subapp is not possible directly in the graphical editor, which was criticized by one of these subjects (Consistency).
Finally, subjects were confused when they accidentally navigated through the application with double-click.During editing, they did not expect to enter a different hierarchy level when double-clicking a pin, but rather the possibility to edit this pin (Visibility).
Cross-cutting aspects
4diac IDE comprises various views and editors.As a result, the display resolution has a significant effect on the Visibility of the application in the graphical editor.For a subject with a high screen resolution, the icons and the text were small and thus difficult to read.For a subject with a very low screen resolution, the views filled a large part of the screen space.As a result, the graphical editor only shows a few blocks at a time, hindering an efficient overview of the diagram.
Also in the extended reassessment study, various issues regarding Consistency within the tool could be observed.One of the novice subjects was confused that red error markers represent broken pins, while correct event connections are also colored red.
Table 3 provides an overview of the assessed activities, 4diac IDE's tool support, relevant cognitive dimensions, and the assessment results of both studies. It also indicates whether the user study eventually revealed a dimension to be well supported (+), well supported but with potential for improvement (+/−), or not well supported, meaning the tool needs to be improved (−). We rated three dimensions as better supported after the extended reassessment study. However, one dimension was also affected negatively. The measures taken after the industry partner study included a new breadcrumb editor, which negatively affected Juxtaposability (cf. Sect. 9.3.1). This trade-off was necessary, and the study subjects of the extended reassessment study rated its impact as minor.
Lessons learned
We summarize eight lessons that we learned from our study and that we consider relevant for developers of any graphical editor for visual modeling or programming languages.These lessons generalize and discuss findings from our assessments and both user studies, including the findings that were described as summary boxes.
Beauty is in the eye of the beholder: users need different options for arranging graphical diagrams. Like whitespace in textual languages, developers use the two-dimensional arrangement of blocks in visual languages to convey information (they use it as a secondary notation) [44]. The memorability of software parts is also important for quickly navigating through the model. As a result, several users requested layout algorithms that do not alter block positions that the users had manually defined before. In the extended reassessment study, two users, however, criticized that the diagram is not scaled automatically to the new dimensions. Obviously, these users would have preferred automatic layout. For visual programming IDEs, the requirements for the layout and arrangement of elements will vary depending on the user and the software and should therefore be customizable in the tool. For instance, some users may need to specify long parameter values and require more space between blocks, whereas others may prefer a compact representation to get a better overview of their diagram. When actions change the space requirements of the graphical diagram (e.g., when expanding submodules), the tool should automatically adjust the layout, but only if users have opted for automatic layout.
Lost&Found: efficient search features are essential. Users benefit from advanced search capabilities when orienting in unknown diagrams. Search boxes should always be provided as an alternative to trees, drop-down menus, and lists to develop truly scalable IDEs. For instance, most subjects requested features for searching instances by their name. Furthermore, searches can reveal relations that are otherwise not directly accessible: in our context, users requested an overview of the used instances of a type, like the call hierarchy for functions that is common in textual editors. Dedicated dialogues for common kinds of search requests reduce the configuration effort and were therefore requested in the extended reassessment study. Furthermore, search requests for generic terms in large-scale software can help to get an overview of an unknown application. Detailed information about the found items should be provided in the search result table to reduce the effort of navigating to each element. Sorting and filtering functions can help "searching within the search results."
One at a time: offer one view per task. Separation of concerns is essential to manage complex software [45]. Subjects considered the System Explorer difficult to use, mainly because it is used as both a file explorer and for type management, while additionally showing the application structure. While developers are easily tempted to extend existing views with additional capabilities, it is often better to focus and ensure one view supports a single task only. Configurable perspectives as offered by the Eclipse platform can be used to arrange and customize multiple views and editors to align them with the complex development tasks that are specific to the respective engineering roles.
Living with inconsistencies: handle incorrect models gracefully. Input validation should ensure correct and consistent models. However, a too strong focus on correctness may enforce a very strict order of user interactions, especially early in the development process [46]. Where errors cannot be avoided, users should be informed unobtrusively, but near the editing position. It should be ensured that the error visualization does not obscure the model itself. Additionally, a list of errors can help getting an overview of the errors that are currently present in the model. Consider that users may be required to interrupt their work and continue later. For this use case, incorrect models also have to be visualized with best effort. This includes clearly presenting the cause of the error. Meaningful error messages help users to restore consistency in the model. Dedicated features should furthermore support users in returning step by step to a correct and consistent model. Ideally, the tool recommends automated fixes to the developer or even applies non-critical fixes automatically.
Show me the way back: visualizing and navigating hierarchy. IEC 61499 allows forming deep hierarchies by grouping elements into subapps. In our extended reassessment study, we offered a breadcrumb component to visualize the current position in the hierarchy of the currently edited modeling element. Additionally, the widget shows the full path to this element and allows navigating to any hierarchy level of the path, a capability used by all our subjects of the extended reassessment study. Navigating along the hierarchy requires constant feedback about the current editing location. Ensure that the active element is clearly marked in the visualization of the path. Buttons for navigating back to the previous location further improve the explorability of a model.
Get a new perspective: multistage usability studies are worth the effort. The cognitive dimensions approach and the user study allowed us to improve the usability of our tool. The applied multistage process allowed us to receive feedback from several perspectives. Issues were reported in diverse stages. We identified various usability issues during the cognitive assessment, which allowed us to receive detailed feedback during the user study. Some aspects were observed during the practical tasks, discussed in the interview, and reported later again by the subject in the questionnaire. Other aspects only appeared in the questionnaire, which the subjects could complete at their own pace, and which therefore complemented our observations well. Not all issues would have been revealed if the study had only comprised a single stage. The extended reassessment study helped us evaluate prior tool modifications, which often required a trade-off. Estimating the effects of such trade-offs on user satisfaction can be difficult.
You cannot make everyone happy: universal versus tailored tools. If our study has shown one thing, it is that a visual programming IDE such as Eclipse 4diac needs to be tailored to its target users to support them in their daily work (utility) and to make the IDE pleasant to use. However, especially in industrial automation, such tools have to adhere to standards [6,7]. Tool developers have only limited freedom if they want to (fully) support the standard. Furthermore, the language and/or the standard that is supported by the tool evolves frequently. Tool developers need to consider this evolution, especially when deviating from the standard in any way. Balancing user needs against standards is a big challenge, especially, but not only, in visual programming IDEs. Multistage usability studies combining walkthroughs and actual user studies can help (see the lesson above), but the tailoring will remain challenging.
Expectations versus reality: finding out what users want and need. Tailoring a visual programming IDE requires quite some effort, not only to actually adapt the tool, but also to find out what users want and what they actually need. Unfortunately, what they want and what they need often do not match in practice. A mix of methods is required to analyze differences between wants and needs, e.g., tool walkthroughs like we did using the cognitive dimensions framework as well as user studies, in which you observe and let users think aloud, combined with interviews and questionnaires. The diversity of users does not make this process easier. One needs to find subjects that represent the different types of existing users.
Discussion and threats to validity
We briefly discuss whether and how our lessons learned complement or contradict existing work before discussing the threats to validity of our research.
Our first lesson (manual layout) confirms existing work [44]. Regarding Lesson 2 (efficient search), we think that it has not yet been investigated in detail, at least to the best of our knowledge, which kinds of different search means are required for visual modeling. Lesson 3 (one view per task) confirms the well-known importance of separation of concerns [45], but in our context additionally shows the need for configuring perspectives for different engineering roles. Lesson 4 (living with inconsistencies) is in line with seminal work [46,47] but not applied in state-of-the-art tools of the domain. Lesson 6 (multistage usability studies are worth the effort) confirms earlier findings [48] that usability studies should be preceded by (cognitive) walkthroughs and corresponding tool adaptations to yield more useful results with industrial participants. Walkthroughs increase the awareness of tool developers about important capabilities for improving the usability for end users. Regarding the last two lessons on tailoring tools and analyzing user needs, this has been discussed for a long time, e.g., already in HCI work from the early 1990s [49], but also more recently in different fields [50,51].
Overall, we conclude that our results and lessons learned do not contradict but complement existing results and findings by adding experiences made in the domain of industrial automation to the body of knowledge of developing visual programming tools.
A threat to construct validity is the potential bias caused by the system created for the study.Although the control application is based on prior publications [36,52], the final system was created specifically for the study by one of the authors.However, our study does not focus on model details, but utilizes the model to evaluate the usefulness of the tool.We selected the model of the capping station as it was sufficiently large and it was expected to be intuitively understood by the domain experts, yet sufficiently different from their daily business to evaluate the tool rather than the model.
There are also threats to internal validity meaning that the results might have been influenced by our treatment.In the industry partner study, we had no direct influence on the selection of subjects.Instead, our industry partner nominated them based on our requirements (i.e., previous experience in automation engineering).We could therefore not ensure that the subjects represent a variety of departments.The number of subjects (ten) also may seem relatively small.However, they cover a range of different roles and a very wide range of work experience in the domain.Their average experience in developing control software is extensive (14 years).All subjects in the industry partner study were male.In the extended reassessment study, we selected the (seven) subjects ourselves, based on our own network.In this study, one female subject participated, who was an experienced user of 4diac IDE.Recruiting female automation engineers is challenging and we did not expect a significant influence on the results based on the gender of the subjects.We did not repeat the study with the same subjects, as we wanted to test the explorability of the tool in the context of our study.Again, seven might appear to be a low number.However, studies have shown that relatively few subjects can reveal a high percentage of the total usability issues [53].Both studies combined, 17 experienced automation engineers from eight different companies provided detailed feedback on the usefulness of 4diac IDE.Several subjects in our reassessment study had previous experience with 4diac IDE, which is a threat to the validity of our results.However, when comparing their answers with answers given by less experienced subjects in the industry partner study, we could not find any obvious differences.The fact that we only had subjects with experience in automation engineering could also be considered a threat to validity.While we might repeat our study with people from other domains in the future, we will also need to teach them some basic experience in automation engineering before conducting the actual study.
Regarding conclusion validity, there is a threat that the results are not based on statistical relationships or measurements but on qualitative data [15]. Given that the main aim of the study was to investigate the behavior and opinions of users of a tool, qualitative research methods are nevertheless well suited. The analysis of the collected data still depends on our interpretation. The work was mainly performed by a single researcher, but all results were carefully checked by two senior researchers.
With respect to external validity, we selected an example system that is representative regarding the size and complexity for the control software domain.Although the derived implications depend on our experiences of using and implementing 4diac IDE and especially its GUI, the capabilities are common in other IDEs for visual modeling languages, as discussed in Sect.3.3.The mapping to the CD framework also relates our results to HCI knowledge.
Our results are specific for our setting and our example system but can still be generalized to some degree, as our tool is similar to those in the same domain.It is furthermore based on open source infrastructure, which is frequently used by other modeling tools.Identified issues are thus potentially applicable to other Eclipse-based tools too.
Conclusions
Industrial experts in control software need to handle large-scale applications for production lines and have to consider the long life cycle of the mechanical components, which also motivates frequent control software maintenance. In this paper, we assessed the usefulness of 4diac IDE, i.e., its usability and utility. We evaluated this visual programming IDE for large-scale automation software with regard to common control software maintenance tasks. We focused on difficulties specific to large applications, such as navigating across hierarchy levels and structuring mechanisms, but also covered basic editing of application models. We first performed an initial assessment of the tasks in a walkthrough of the IDE following an approach based on the cognitive dimensions of notations framework and fixed discovered usability flaws. In a user study, ten industrial experts from one company performed these tasks. After further improving the tool based on their feedback, in a second user study, seven industrial experts from seven different companies provided feedback based on the original tasks, but using the updated version of 4diac IDE. This allowed evaluating the tool with a wider audience of experts from several companies. Our results and lessons learned from the study are relevant for developers of visual modeling and programming tools. Identified capabilities are often not sufficiently supported in such tools. Furthermore, our results with an Eclipse-based tool are potentially applicable to other modeling tools using the same technology. As our improvements are available as part of the latest open-source release of 4diac IDE, they can serve, together with this paper, as a good practice example for other project teams. As expected, the understandability and maintainability of models depend on the provided IDE features. For example, we learned that the layout quality affects how users understand visual models, but manually adjusting the graph layout is tedious. Users therefore benefit from an automated layout. However, they also need capabilities for customizing the layout algorithms to match their mental model. Orienting in a large application could be further facilitated with search features and reference lists. Our results and lessons learned complement existing results and findings by adding experiences made in the domain of industrial automation to the body of knowledge of developing visual modeling tools. In general, we conclude that advanced tools with advanced editing support can simplify working with large models and thus increase the benefits of the applied modeling language. Visual programming IDEs, however, should offer a clear benefit compared to drawing models with pen and paper. In future work, we will further investigate dimensions that we identified as insufficiently addressed by the tool. Hence, we will study both adaptations to the language IEC 61499 and improvements for IDEs.
Fig. 1 Core models defined in IEC 61499. The software is described in the block-based application model (top), where each block type (Function Block, FB) offers and encapsulates a certain functionality. For the execution, the application can be distributed across devices from the system model (bottom)
Fig. 3 FB Network (top) containing two submodules (subapps), MotorM1499 and Blocker. Each submodule itself contains a network of FBs (bottom). The adapters (green connections) group several event in- and outputs
Fig. 4 Overall process applied for conducting usefulness studies. The iterative process allows refining tool capabilities
Fig. 5 Compilation of the main views and editors provided by 4diac IDE 1.14.0RC 1 for developing IEC 61499-based automation solutions: (1) expanded subapplication with hover feedback, (2) selection
Fig. 6 New extensions in 4diac IDE 2.1.0RC 2 for effectively maintaining large-scale automation software: (1) breadcrumb widget for navigating and for displaying the path to the currently opened element,
Table 1
Main results of cognitive assessment (C.A.) of 4diac IDE
Table 2
Questionnaire: utility of 4diac IDE
Table 3
Main results of the industry partner study and the extended reassessment study of 4diac IDE (summary). The results are rated as + (well supported), +/− (partly supported), or − (needs to be improved)
| 21,169.4 | 2021-10-01T00:00:00.000 | [ "Engineering", "Computer Science" ] |
Analysis of food system drivers of deforestation highlights foreign direct investments and urbanization as threats to tropical forests
Approximately 90% of global forest cover changes between 2000 and 2018 were attributable to agricultural expansion, making food production the leading direct driver of deforestation. While previous studies have focused on the interaction between human and environmental systems, limited research has explored deforestation from a food system perspective. This study analyzes the drivers of deforestation in 40 tropical and subtropical countries (2004–2021) through the lenses of consumption/demand, production/supply and trade/distribution using Extreme Gradient Boosting (XGBoost) models. Our models explained a substantial portion of deforestation variability globally (R2 = 0.74) and in Asia (R2 = 0.81) and Latin America (R2 = 0.73). The results indicate that trade- and demand-side dynamics, specifically foreign direct investments and urban population growth, play key roles in influencing deforestation trends at these scales, suggesting that food system-based interventions could be effective in mitigating deforestation. Conversely, the model for Africa showed weaker explanatory power (R2 = 0.30), suggesting that factors beyond the food system may play a larger role in this region. Our findings highlight the importance of targeting trade- and demand-side dynamics to reduce deforestation and how interventions within the food system could synergistically contribute to achieving sustainable development goals, such as climate action, life on land and zero hunger.
The three concentric circles form a simplified representation of the food system. The outermost circle, "Food System Activities," encapsulates food system activities and actors related to the production, processing, distribution, preparation, consumption and disposal of food 7. The middle circle, "Food Environment," encompasses the "physical, economic, political and socio-cultural context in which consumers engage with the food system to acquire, prepare and consume food" 7. The innermost circle, "Consumer Behavior," represents "the choices made by consumers, at household or individual levels, on what food to acquire, store, prepare and eat, and on the allocation of food within the household (including gender repartition, feeding of children)" 7. This nested structure implies that food system activities and elements frame the conditions of the food environment, which, in turn, shapes consumer choices. The different types of food system outcomes are shown on the right. The diagram was adapted from Béné et al. 24.
Figure S2. Percentage of gross tree cover loss (TCL) from Terra-i data between 2005 and 2021, relative to FAO-reported forest area in 2004, for countries included in the analysis (Source: https://data.worldbank.org/indicator/AG.LND.FRST.ZS?view=chart).
Figure S3. Spearman correlations in the time series database for production/supply, trade/distribution and consumption/demand variables at the global level. Blue cells represent a positive correlation, while red cells represent a negative one. The larger the correlation, the darker the color. White cells mean non-significant correlations. The first row depicts the tested correlations for the response variable (tree cover loss). 'Foreign invest' corresponds to cross-sectoral foreign direct investments (FDI).
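A small example of how such a correlation matrix can be computed and masked by significance is sketched below (using pandas and SciPy). The variable names are placeholders, and the 0.05 significance threshold is an assumption, as the supplementary material does not state the exact procedure.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

# Placeholder panel: rows are country-year observations, columns are drivers
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 4)),
                  columns=["tree_cover_loss", "foreign_invest",
                           "urban_pop_growth", "crop_production"])

rho, pval = spearmanr(df)                      # pairwise Spearman rho and p-values
rho = pd.DataFrame(rho, index=df.columns, columns=df.columns)
pval = pd.DataFrame(pval, index=df.columns, columns=df.columns)

masked = rho.where(pval < 0.05)                # blank out non-significant cells
print(masked.round(2))                         # first row: correlations with tree cover loss
```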
Figure S4. Spearman correlations in the time series database for production/supply, trade/distribution and consumption/demand variables at the Africa level. Blue cells represent a positive correlation, while red cells represent a negative one. The larger the correlation, the darker the color. White cells mean non-significant correlations. The first row depicts the tested correlations for the response variable (tree cover loss). 'Foreign invest' corresponds to cross-sectoral foreign direct investments (FDI).
Figure S5. Spearman correlations in the time series database for production/supply, trade/distribution and consumption/demand variables at the Asia & Oceania level. Blue cells represent a positive correlation, while red cells represent a negative one. The larger the correlation, the darker the color. White cells mean non-significant correlations. The first row depicts the tested correlations for the response variable (tree cover loss). 'Foreign invest' corresponds to cross-sectoral foreign direct investments (FDI).
Figure S6. Spearman correlations in the time series database for production/supply, trade/distribution and consumption/demand variables at the Latin America and the Caribbean level. Blue cells represent a positive correlation, while red cells represent a negative one. The larger the correlation, the darker the color. White cells mean non-significant correlations. The first row depicts the tested correlations for the response variable (tree cover loss). 'Foreign invest' corresponds to cross-sectoral foreign direct investments (FDI).
Figure S7. Mean and standard deviation of driver variables in Africa. Includes data for 17 countries.
Figure S8. Mean and standard deviation of driver variables in Asia and Oceania. Includes data for 9 countries in Asia and 2 in Oceania.
Figure S9. Mean and standard deviation of driver variables in Latin America and the Caribbean. Includes data for 12 countries.
Table S1. Countries included in the analysis.
Table S2. Final hyperparameter settings of all XGBoost models executed with the random search procedure. Model indicates the geographic level (Asia = Asia & Oceania; LAC = Latin America and the Caribbean). Each run corresponds to the best-performing model selected from 100 models initialized with random parameter settings (with five runs per geographic level, which were averaged to yield the final results). R² is the adjustment/performance measure. RMSE is the root mean squared error, a performance measure and selection criterion. The hyperparameters include eta (learning rate), max_depth (maximum tree depth), gamma (minimum loss reduction), minimum child weight, colsample_bytree (subsample ratio of columns), subsample (subsample ratio of training instances) and nrounds (number of boosting iterations).
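To illustrate the procedure described in Table S2, the sketch below draws random hyperparameter settings for an XGBoost regressor, scores each candidate by RMSE, and keeps the best of 100 models. It is a minimal reconstruction under stated assumptions: the search ranges, the hold-out split, the placeholder data, and the use of the Python API are ours, not the authors'.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(42)
X, y = rng.normal(size=(500, 10)), rng.normal(size=500)      # placeholder driver data
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]  # simple hold-out split
dtrain, dtest = xgb.DMatrix(X_tr, label=y_tr), xgb.DMatrix(X_te)

def sample_params():
    """Draw one random hyperparameter setting (the ranges are assumptions)."""
    return {"eta": rng.uniform(0.01, 0.3),
            "max_depth": int(rng.integers(2, 10)),
            "gamma": rng.uniform(0.0, 5.0),
            "min_child_weight": rng.uniform(1.0, 10.0),
            "colsample_bytree": rng.uniform(0.5, 1.0),
            "subsample": rng.uniform(0.5, 1.0),
            "objective": "reg:squarederror"}

best = None
for _ in range(100):                                 # 100 randomly initialized models per run
    params, nrounds = sample_params(), int(rng.integers(50, 500))
    booster = xgb.train(params, dtrain, num_boost_round=nrounds)
    rmse = float(np.sqrt(np.mean((booster.predict(dtest) - y_te) ** 2)))
    if best is None or rmse < best[0]:               # RMSE is the selection criterion
        best = (rmse, params, nrounds)

print(f"best RMSE = {best[0]:.3f} with nrounds = {best[2]}")
```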
"Environmental Science",
"Economics"
] |
Living kidney transplantation without perioperative anticoagulation therapy for a patient with heparin‐induced thrombocytopenia
Introduction Heparin‐induced thrombocytopenia is an antibody‐mediated acquired prothrombotic state induced by heparin exposure. The risk of thromboembolic diseases in kidney transplantation with heparin‐induced thrombocytopenia without perioperative anticoagulation has not been determined. Case presentation A 64‐year‐old male hemodialysis patient with heparin‐induced thrombocytopenia was referred to our hospital for living kidney transplantation. Anti‐heparin‐induced thrombocytopenia antibody was positive at the time of referral; however, it turned negative 4 months after heparin cessation during hemodialysis sessions. Living kidney transplantation by donation from his wife was performed using the standard technical procedure. Both heparinization and application of medical equipment containing heparin were avoided; however, no anticoagulant was administered intra‐ and postoperatively. The graft kidney functioned immediately, and no thromboembolic event related to heparin‐induced thrombocytopenia occurred. Conclusion Kidney transplantation without perioperative anticoagulation therapy after disappearance of anti‐heparin‐induced thrombocytopenia antibody is a well‐tolerated treatment option for patients with end‐stage kidney disease.
Introduction
HIT is an Ab-mediated acquired prothrombotic state induced by heparin exposure. HIT has been reported in approximately 3% of patients exposed to heparin. 1 While some cases of HIT in KTx have been reported, the risk of thromboembolic diseases in KTx with HIT has not been determined. [2][3][4][5][6] There are a few case reports of successful KTx in patients with a history of HIT; however, these patients received anticoagulants, although anticoagulation is not required during typical KTx. [4][5][6] We report a case of successful living KTx without perioperative anticoagulation in a patient with end-stage renal disease and a history of HIT.
Case presentation
A 63-year-old male patient with end-stage renal disease was referred to our hospital for living KTx by donation from his wife. He started HD 2 months before being referred to our hospital. Initially, unfractionated heparin was infused as anticoagulant during HD; however, clotting in the dialysis membrane had frequently occurred, and thrombocytopenia had been gradually exacerbated (Fig. 1). Serum anti-HIT Ab level was examined for the suspicion of HIT, and seropositivity of anti-HIT Ab led to the diagnosis of HIT type II. Unfractionated heparin was discontinued and changed to argatroban as the anticoagulant during HD. Thrombocytopenia gradually improved, and events of clotting in the HD membrane also decreased after changing the anticoagulant. Negative conversion of serum anti-HIT Ab was confirmed 4 months after unfractionated heparin cessation.
Preservation of the seronegative status and the absence of thrombotic complications on enhanced CT were confirmed, and living KTx between spouses was then performed. Maintenance HD therapy and three sessions of DFPP, performed because of flow-cytometry B-cell crossmatch positivity due to the presence of a donor-specific anti-HLA Ab (mean fluorescence intensity, 1156), were conducted before KTx using argatroban. Both heparinization and the use of medical equipment containing heparin were avoided during the operation. No anticoagulant was administered even during vessel suturing. The graft kidney functioned immediately, and maintenance HD was withdrawn (Fig. 2). No anticoagulant was administered postoperatively. Immunosuppression consisting of steroid, MMF, and Tac, and induction therapy consisting of basiliximab and a single dose of 200 mg rituximab, were adopted for the patient. No thromboembolic adverse event has occurred, and graft function is well maintained 1 year after transplantation.
Discussion
Almost all HD patients are exposed to heparin, which is used as an anticoagulant during each treatment session, and chronic intermittent heparin exposure is associated with developing anti-heparin Ab, which is observed in approximately 10% of these patients. 7 Although there is a high prevalence of anti-heparin Ab in HD patients, thromboembolic complications with thrombocytopenia do not always develop even in this status. 7 The pretest probability of HIT uses the four T's scoring system: degree of thrombocytopenia, timing of thrombocytopenia with respect to heparin exposure, occurrence of thrombotic complications, and absence of alternative explanations for thrombocytopenia. 5,8 Our patient did not present with systemic thromboembolic complications but had clotting in the dialysis membrane and more than 50% platelet count fall after heparin exposure without other detectable causes of thrombocytopenia. Both the presence of anti-heparin Ab with these typical clinical symptoms and recovery of thrombocytopenia after replacement of unfractionated heparin with argatroban support the diagnosis of HIT type II in this case.
We found 10 cases of HIT in KTx 3-6,9-14 (Table 1); together with our patient, this yields 11 cases. Seven of the 11 patients were diagnosed preoperatively on the basis of thrombotic complications, while four patients were diagnosed during or after transplantation. All four patients who had not been diagnosed before transplantation, as well as one patient who underwent a successful retransplantation after an initial HIT with graft loss, developed thrombotic complications requiring anticoagulation therapy after KTx. Two of these five patients lost graft function due to thrombosis. Six patients diagnosed as having HIT before transplantation did not develop thrombotic complications, except for one patient with antiphospholipid Ab syndrome. 11 The anti-HIT Ab titer became negative before KTx in all patients except one who received combined heart and kidney transplantation. These reports indicate that a history of HIT is not a contraindication for KTx and that the period after negative seroconversion of anti-HIT Ab may be the preferable time for transplantation. However, all these patients, except our patient, received anticoagulants during the peritransplantation period to minimize the risk of thromboembolic complications due to HIT despite the negative seroconversion of anti-HIT Ab. Furthermore, we could not find a case of other organ transplantation performed without anticoagulants during the peritransplantation period.
In patients strongly suspected of having HIT, heparin should be immediately replaced with an appropriate anticoagulant such as argatroban. [15][16][17] As long as appropriate treatment for HIT is given, anti-HIT Ab is transient, with a median time to disappearance of 50 to 80 days. 18 Our patient showed negative conversion of anti-HIT Ab without life-threatening thromboembolic complications 4 months after heparin cessation and administration of argatroban during HD sessions, and then underwent KTx. While we avoided both heparinization and the application of medical equipment containing heparin during the peritransplant term to minimize the risk of thromboembolic complications due to HIT, we did not administer any anticoagulant before transplantation other than argatroban during plasmapheresis. There were several reasons for this treatment policy. First, anticoagulation during the vessel clamping period in KTx is not usually necessary. Second, no clinical episode of thromboembolic complications, excluding clot formation in the dialysis membrane, had been observed, and the absence of thromboembolic complications was confirmed by systemic enhanced CT examination. A past history of VTE is considered to be the most important risk factor determining VTE recurrence. 19 KTx without anticoagulants may be considered for recipients who are diagnosed with HIT without systemic thromboembolism. Lastly, a non-heparin anticoagulant was considered to potentially facilitate perioperative bleeding. Argatroban is the only drug approved by Japanese health insurance as an anticoagulant for patients with HIT. No specific antidote is available for argatroban, and thus continuous administration of argatroban during the peritransplant term may make it difficult to control sudden massive bleeding. On the other hand, this is only a single case report. Further investigation, including prospective large-sized clinical studies, is required to establish anticoagulant management in KTx for patients with HIT. We successfully performed living KTx without perioperative anticoagulation in a patient with HIT. KTx without perioperative anticoagulation after disappearance of anti-HIT Ab could be a well-tolerated treatment option for patients with end-stage kidney disease.
Ethics approval and consent to participate
According to the Ethical Guidelines for Medical and Health Research involving Human Subjects in Japan, ethical approval is not required for case reports.
Consent for publication
Written informed consent was obtained from the patient for the publication of this case report and any accompanying test results.
Data Augmentation for Bayesian Deep Learning
Deep Learning (DL) methods have emerged as one of the most powerful tools for functional approximation and prediction. While the representation properties of DL have been well studied, uncertainty quantification remains challenging and largely unexplored. Data augmentation techniques are a natural approach to provide uncertainty quantification and to incorporate stochastic Monte Carlo search into stochastic gradient descent (SGD) methods. The purpose of our paper is to show that training DL architectures with data augmentation leads to efficiency gains. We use the theory of scale mixtures of normals to derive data augmentation strategies for deep learning. This allows variants of the expectation-maximization and MCMC algorithms to be brought to bear on these high dimensional nonlinear deep learning models. To demonstrate our methodology, we develop data augmentation algorithms for a variety of commonly used activation functions: logit, ReLU, leaky ReLU and SVM. Our methodology is compared to traditional stochastic gradient descent with back-propagation. Our optimization procedure leads to a version of iteratively re-weighted least squares and can be implemented at scale with accelerated linear algebra methods providing substantial improvement in speed. We illustrate our methodology on a number of standard datasets. Finally, we conclude with directions for future research.
Introduction
Deep neural networks (DNNs) have become a central tool for Artificial Intelligence (AI) applications such as image processing (ImageNet, Krizhevsky et al. (2012)), object recognition (ResNet, He et al. (2016)) and game intelligence (AlphaGoZero, Silver et al. (2016)). The approximability (Bauer and Kohler, 2019) and rate of convergence of deep learning, either in the frequentist fashion (Schmidt-Hieber, 2020) or from a Bayesian predictive point of view (Polson and Rockova, 2018; Wang and Rockova, 2020), have been well explored and understood. Fan et al. (2021) provide a selective overview of deep learning. However, training deep learners is challenging due to the high-dimensional search space and the non-convex objective function. Deep neural networks have also suffered from issues such as local traps, miscalibration and overfitting. Various efforts have been made to improve generalization performance, and many of their roots lie in Bayesian modeling. For example, Dropout (Wager et al., 2013) is commonly used and can be viewed as a deterministic ridge ℓ2 regularization. Sparsity induced via spike-and-slab priors on the weights (Polson and Rockova, 2018) helps DNNs adapt to smoothness and avoid overfitting. Rezende et al. (2014) propose stochastic back-propagation through the use of latent Gaussian variables.
In this paper, following the spirit of hierarchical Bayesian modeling, we develop data augmentation strategies for deep learning with a complete-data likelihood function equivalent to weighted least squares regression. By using the theory of mean-variance mixtures of Gaussians, our latent variable representation brings all of the conditionally linear model theory to deep learning. For example, it allows for the straightforward specification of uncertainty at each layer of deep learning and for a wide range of regularization penalties. Our method applies to commonly used activation functions such as ReLU, leaky ReLU and logit (see also Gan et al. (2015)), and provides a general framework for training and inference in DNNs. It inherits the advantages and disadvantages of data augmentation schemes. Approximation methods such as Expectation-Maximization (EM) and Minorize-Maximization (MM) are stable, as they steadily increase the objective, but they can be slow in the neighborhood of the maximum even when acceleration methods such as Nesterov acceleration are available, and their performance depends heavily on the properties of the objective function. Stochastic exploratory methods such as MCMC have the main advantage of addressing uncertainty quantification (UQ) and are stable in the sense that they require no tuning. Hyper-parameter estimation is immediately available using traditional Bayesian methods. DA augments the objective function with extra hidden units, which allow for efficient step-size selection in the gradient descent search. In some applications, data augmentation methods can be formulated in terms of complete-data sufficient statistics, a considerable advantage when dealing with large datasets where most of the computational expense comes from repeatedly iterating over the data. By combining MCMC methods with the J-copies trick (Jacquier et al., 2007), we can move faster towards the posterior mode and avoid local maxima. Traditional methods for training deep learning models, such as stochastic gradient descent (SGD), have none of the above advantages. We also note that we exploit the advantages of SGD and accelerated linear algebra methods when we implement our weighted least squares regression step.
Data augmentation strategies are commonplace in statistical algorithms, and accelerated convergence (Nesterov, 1983; Green, 1984) is available. Our goal is to show similar efficiency improvements for deep learning. Our work builds on Deng et al. (2019), who provide adaptive empirical Bayes methods. In particular, we show how to implement standard activation functions, including ReLU (Polson and Rockova, 2018), logistic (Zhou et al., 2012; Hernández-Lobato and Adams, 2015) and SVM (Mallick et al., 2005) activation functions, and provide specific data augmentation strategies and algorithms. The core subroutine of the resulting algorithms solves a least squares problem. Scalable linear algebra libraries such as Compute Unified Device Architecture (CUDA) and accelerated linear algebra (XLA) are available for implementation. To illustrate our approach, we experiment empirically with two benchmark datasets using Pólya-Gamma data augmentation for logit activation functions. For the deep architecture embedded in our approach, we adopt deep ReLU networks. Deep networks are able to achieve the same level of approximation accuracy with exponentially fewer parameters for compositional functions. Poggio et al. (2017) further show how deep networks can avoid the curse of dimensionality. The ReLU function is favored due to its ability to avoid vanishing gradients and its expressibility and inherent sparsity. Approximation properties of deep ReLU networks have been developed in Montufar et al. (2014), Telgarsky (2017), and Liang and Srikant (2017). Yarotsky (2017) and Schmidt-Hieber (2020) show that deep ReLU networks can yield a rate-optimal approximation of smooth functions of an arbitrary order. Polson and Rockova (2018) provide posterior rates of convergence for sparse deep learning.
There is another active area of research that revives traditional statistical models with the computational power of DL (Bhadra et al., 2021). Examples include Gaussian Process models (Higdon et al., 2008; Gramacy and Lee, 2008), Generalized Linear Models (GLM) and Generalized Linear Mixed Models (GLMM) (Tran et al., 2020) and Partial Least Squares (PLS). Our method benefits from the computational efficiency and flexibility of expression of the deep neural network. In addition, our work builds on the sampling optimization literature (Pincus, 1968, 1970), which now uses MCMC methods. Other examples include Ma et al. (2019), who show that sampling can be faster than optimization, and Neelakantan et al. (2017), who show that gradient noise can improve learning for very deep networks. Gan et al. (2015) implement data augmentation for learning deep sigmoid belief networks. Neal (2011) and Chen et al. (2014) provide Hamiltonian Monte Carlo (HMC) algorithms for MCMC. Duan et al. (2018) propose a family of calibrated data-augmentation algorithms to increase the effective sample size.
The rest of our paper is outlined as follows. Section 2 provides the general setting of deep neural networks and shows how DA can be integrated into deep learning using the duality between Bayesian simulation and optimization. Section 3 describes our data augmentation (DA) schemes and two approaches to implement them. Section 4 provides applications to Gaussian regression, support vector machines and logistic regression using Pólya-Gamma augmentation. Section 5 provides experiments of DA on both regression and classification problems. Section 6 concludes with directions for future research.
Bayesian Deep Learning
In deep learning we wish to recover a multivariate predictive map f_θ(·), written y = f_θ(x), where y = (y_1, . . . , y_n)', y_i ∈ R denotes a univariate output and x = (x_1, . . . , x_n)', x_i ∈ R^p a high-dimensional set of inputs. Using training data of input-output pairs {y_i, x_i}_{i=1}^n, the goal is to provide a predictive rule, one that generalizes well out-of-sample, for a new input variable, where θ̂ is estimated from the training data, typically using SGD. The interest in deep learners lies in their ability to perform better than additive rules for such interpolation or prediction problems. Other statistical alternatives include Gaussian processes, but they often have difficulty handling higher dimensions.
Deep learners use compositions (Kolmogorov, 1957; Vitushkin, 1964) of ridge functions rather than the additive functions that are commonplace in statistical applications. With L ∈ N we denote the number of hidden layers and with p_l ∈ N the number of neurons at the l-th layer. Setting p_{L+1} = p and p_0 = p_1 = 1, we denote by p = (p_0, p_1, . . . , p_{L+1}) ∈ N^{L+2} the vector of neuron counts for the entire network. Composing L layers, a deep predictor then takes the form given in (2.1), where b_l ∈ R^{p_l} is a shift vector, W_l ∈ R^{p_{l-1} × p_l} is a weight matrix that links neurons between the (l-1)-th and l-th layers, and θ = {(W_0, b_0), . . . , (W_L, b_L)} collects the stacked parameters. We can rewrite the compositions in (2.1) with a set of latent variables Z = (Z_1, Z_2, . . . , Z_L)' as in (2.2), where Z_l ∈ R^{n × p_l} is the matrix of hidden nodes in the l-th layer. We only consider the case p_1 = 1 and Z_1 ∈ R^n in our work. We provide a discussion of extensions to cases p_1 > 1 for some of our applications in Section 4.
Bayesian Simulation and Regularization Duality
The problem of deep learning regularization (Polson and Sokolov, 2017) is to find a set of parameters θ which minimizes a combination of a negative log-likelihood ℓ(y, f_θ(x)) and a penalty function φ(θ), defined by

θ̂ = arg min_{θ ∈ R^{#θ}} { ℓ(y, f_θ(x)) + λ φ(θ) },

where λ controls regularization and #θ denotes the number of parameters in θ.
When the function f_θ(x) is a deep learner defined as in (2.1), we can specify a different amount of penalty λ_l and form of regularization function φ_l(·) for each layer. The objective function can then be written as

θ̂ = arg min_θ { ℓ(y, f_θ(x)) + Σ_{l=0}^{L} λ_l φ_l(W_l, b_l) }.   (2.4)

Commonly used regularization techniques for deep learners include L_2 (weight decay), spike-and-slab regularization (Polson and Rockova, 2018) and dropout (Wager et al., 2013), which can also be viewed as a variant of L_2-regularization.
As such, the optimization problem (2.4) of training a deep learner f_θ(·) involves a highly nonlinear objective function. Stochastic gradient descent (SGD) is a popular tool based on back-propagation (a.k.a. the chain rule), but it often suffers from local traps and overfitting due to the non-convex nature of the problem. We propose data augmentation techniques which can be seamlessly applied in this context and provide efficiency gains. This is achieved via the hierarchical duality between optimization with regularization and finding the maximum a posteriori (MAP) estimate (Polson and Scott, 2011), as described in the following lemma.
Lemma 2.1. The regularization problem is equivalent to finding the Bayesian MAP estimator defined by

θ̂_MAP = arg max_θ p(θ | y),

which corresponds to the mode of a posterior distribution characterized as

p(θ | y) ∝ exp{ −ℓ(y, f_θ(x)) − λ φ(θ) }.

Here p(θ) ∝ exp{ −λ φ(θ) } can be interpreted as a prior probability distribution, and the log-prior as the regularization penalty.
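To make the duality concrete, the following sketch (our illustration, not the authors' code) fits a ridge-penalized least-squares problem two ways: by numerically minimizing the loss-plus-penalty and by computing the closed-form mode of the posterior implied by a Gaussian likelihood and a N(0, λ^{-1} I) prior; the two estimators coincide.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=50)
lam = 2.0  # penalty weight, equivalently the Gaussian prior precision

def penalized_loss(theta):
    # negative log-likelihood + lambda * phi(theta), with phi = 0.5 * ||theta||^2
    return 0.5 * np.sum((y - X @ theta) ** 2) + 0.5 * lam * np.sum(theta ** 2)

# Optimization view: minimize loss + penalty
theta_opt = minimize(penalized_loss, np.zeros(3)).x

# Bayesian view: mode of p(theta | y) with y | theta ~ N(X theta, I)
# and theta ~ N(0, lam^{-1} I); the MAP estimator has a closed form.
theta_map = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

print(np.allclose(theta_opt, theta_map, atol=1e-4))  # True: same estimator
```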
A Stochastic Top Layer
By exploiting the duality from Lemma 2.1, we wish to use a Bayesian framework to add stochastic layers, so as to fully account for the uncertainty in estimating the predictive rule f_θ(·). Thus, we convert the sequence of composite functions in the deep learner specified in (2.2) to the stochastic version given in (2.5). Now the hidden variables Z = (Z_1, . . . , Z_L)' can be viewed as data augmentation variables, which also allows fast, scalable algorithms to be used for inference and prediction.
For ease of computation, we only replace the top layer of the DNN with a stochastic layer. We denote the network structure below the top layer by B = {(W_1, b_1), . . . , (W_L, b_L)}, so that the network can be rewritten in terms of a top layer f_0(Z_1 W_0 + b_0) and a function f_B(x) giving the architecture below the top layer. Considering the objective function in (2.4), we implement the solution with a two-step iterative search. At iteration t, we have (1) a DA-update for the top layer W_0, b_0 as the MAP estimator of the conditional distribution in (2.6), and (2) an SGD-update for the lower layers B given the sampled latent variable Z_1. The main contribution of our work comes from two aspects: (1) we update the top layer weights {W_0, b_0} conditional on B as in (2.6), which is equivalent to conditioning on Z_1, with data augmentation techniques as shown later in Section 3; (2) the latent variable Z_1 is sampled from a normal distribution rather than optimized by gradient descent methods. Z_1 serves as a bridge that connects a weighted L_2-norm model f_0 and a deep learner f_B. Commonly used activation functions {f_l}_{l=1}^L are linear affine functions, rectified linear units (ReLU), sigmoid and hyperbolic tangent (tanh). We illustrate our methods with a deep ReLU network, i.e., {f_l}_{l=1}^L are ReLU functions, due to its expressibility and inherent sparsity. In the next section, we introduce our data augmentation strategies and show how the stochastic layers can be achieved via data augmentation. A schematic sketch of the two-step scheme is given below.
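The following is a minimal, runnable sketch of the two-step search described above, with a single linear map standing in for the lower network f_B (the paper uses deep ReLU networks), Z_1 drawn around f_B(x) with an illustrative noise scale, a plain least-squares step playing the role of the DA-update for (W_0, b_0), and a gradient step playing the role of the SGD-update; it is a schematic of the scheme, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=(n, 4))
y = np.sin(x[:, 0]) + 0.1 * rng.normal(size=n)

# Stand-in for the lower network f_B: a single linear map (the paper uses a deep ReLU net)
B = rng.normal(scale=0.1, size=4)
W0, b0 = 1.0, 0.0
tau_z, eta = 0.1, 1e-3  # illustrative latent noise scale and learning rate

for t in range(200):
    # 1. Sample the latent top-layer input Z_1 around the lower network's output
    z1 = x @ B + tau_z * rng.normal(size=n)

    # 2. DA-update: refit (W_0, b_0) by least squares of y on Z_1
    A = np.column_stack([z1, np.ones(n)])
    W0, b0 = np.linalg.lstsq(A, y, rcond=None)[0]

    # 3. SGD-update for the lower layers B, holding (W_0, b_0) fixed
    resid = y - (W0 * (x @ B) + b0)           # top-layer residual
    grad_B = -2 * W0 * (x.T @ resid) / n      # gradient of the squared error w.r.t. B
    B -= eta * grad_B

print("fit MSE:", np.mean((y - (W0 * (x @ B) + b0)) ** 2))
```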
Data Augmentation for Deep Learning
Data augmentation introduces a vector of auxiliary variables, denoted by ω = (ω_1, . . . , ω_n)' with ω_i ∈ R, such that the posterior can be written as

p(θ | y) = ∫ p(θ, ω | y) dω,

where the augmented auxiliary distribution p(θ, ω | y) factorizes nicely into complete conditionals p(θ | ω, y) and p(ω | θ, y). A crucial ingredient is that p(θ | ω, y) is easily managed, typically via conditional Gaussians.
Data augmentation tricks allow us to express the likelihood as an expectation of a weighted L_2-norm. Specifically, we write

p(y | f_θ(x)) = ∫ exp{ −Q(y | f_θ(x), ω) } p(ω) dω,   (3.1)

where p(ω) is the prior on the auxiliary variables ω = (ω_1, . . . , ω_n)' and the function Q(y | f_θ(x), ω) is designed to be a quadratic form given the data augmentation variables. Table 1 shows that standard activation functions such as ReLU, logit, lasso and check can be expressed in the form of (3.1). Commonly used activation functions for deep learning, with appropriate stochastic assumptions for ω (for notational simplicity, we give the standard form for the single-observation case), can be expressed in this way. Here GIG denotes the Generalized Inverse Gaussian distribution, PG the Pólya-Gamma distribution, and E the exponential distribution.
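As a small numerical check of the scale-mixture idea underlying Table 1 (our example, using the lasso row), a Laplace variable can be generated as a normal whose variance is drawn from an exponential mixing distribution, the classical Andrews-Mallows / Bayesian-lasso representation:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 200_000

# Scale mixture: omega ~ Exp(rate = 1/2) (mean 2), x | omega ~ N(0, omega)
omega = rng.exponential(scale=2.0, size=m)
x = rng.normal(scale=np.sqrt(omega))

# Compare empirical tail probabilities to the standard Laplace, p(x) = 0.5 * exp(-|x|)
for t in (0.5, 1.0, 2.0):
    emp = np.mean(np.abs(x) > t)
    exact = np.exp(-t)          # P(|Laplace(1)| > t)
    print(t, round(emp, 3), round(exact, 3))
```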
Using the data augmentation strategies, the objectives are represented as mixtures of Gaussians. DA can perform such an optimization with only the use of a sequence of iteratively re-weighted L_2-norms. This allows us to use XLA techniques to accelerate training.
Remark 3.1. The log-posterior is optimized given the training data. Deep learning possesses the key property that ∇_θ log p(y | θ, x) is computationally inexpensive to evaluate using tensor methods, even for very complicated architectures, and allows fast implementation on large datasets. One caveat is that the posterior is highly multi-modal, and good hyperparameter tuning can be expensive. This is clearly a fruitful area of research for state-of-the-art stochastic MCMC algorithms to provide more efficient algorithms. For shallow architectures, the alternating direction method of multipliers (ADMM) is an efficient solution to the optimization problem.
Similarly, we can represent the regularization penalty exp(−λ φ(θ)) in data augmentation form. Hence, using the duality in Lemma 2.1, we can replace the optimization problem in (2.4) with the equivalent MAP problem in the augmented space (θ, ω).
There are two approaches to Monte Carlo optimization which could handle our data augmentation (Geyer, 1996): missing-data methods such as Expectation-Maximization (EM) algorithms, or stochastic search methods such as Markov Chain Monte Carlo (MCMC). The first approach is based on a probabilistic approximation of the objective function (3.1) and is less concerned with exploring Θ. The second type is more exploratory: it aims to optimize the objective function by visiting the entire range of Θ and is less tied to the properties of the function.
For EM algorithms, we consider constructing a surrogate optimization problem which has the same solution as (3.1) (Lange et al., 2000). Specifically, we define a new objective function H(θ), a concave function to be maximized. A natural choice of surrogate function G(θ | θ^(t)) can be constructed using Jensen's inequality; maximizing G(θ | θ^(t)) with respect to θ drives H(θ) uphill. The ascent property of the EM algorithm relies on the nonnegativity of the Kullback-Leibler divergence between two conditional probability densities (Hunter and Lange, 2004; Lange, 2013a). The EM algorithm enjoys numerical stability, as it steadily increases the likelihood without wildly overshooting or undershooting. It simplifies the optimization problem by (1) avoiding large matrix inversion, (2) linearizing the objective function, and (3) separating the variables of the optimization problem (Lange, 2013b). In Section 4.3 we show how Pólya-Gamma augmentation leads to an EM algorithm for logistic regression.
The exploratory alternative for solving (3.1) consists of stochastic search methods such as MCMC. The data augmentation strategies enable us to sample from the joint posterior, where the prior is related to the regularization penalty via p(θ) ∝ exp{ −Σ_{l=0}^{L} λ_l φ_l(W_l, b_l) }. Hence, we can provide an MCMC algorithm in the augmented space (θ, ω) and simulate from the joint posterior distribution p(θ, ω | y). A sequence can be simulated using the MCMC Gibbs conditionals

θ^(t+1) ∼ p(θ | ω^(t), y),   ω^(t+1) ∼ p(ω | θ^(t+1), y).

Then we recover stochastic draws θ^(t) ∼ p(θ | y) from the marginal posterior. These draws can be used in prediction to account for predictive uncertainty. Since Q(y | f_θ(x), ω) is conditionally quadratic, the update step for θ | ω, y can be achieved using SGD or a weighted L_2-norm; the weights ω are adaptive and provide an automatic choice of the learning rate, thus avoiding backtracking, which can be computationally expensive. Moreover, the performance of the MCMC search is less tied to the statistical properties (i.e., convexity or concavity) of the objective function. We provide examples of how Gaussian regression and SVMs can be implemented in Section 4.1 and Section 4.2.
MCMC with J-copies
MCMC methods offer a full description of the objective function (3.1) over the entire space Θ. Inspired by the simulated annealing algorithm (Metropolis et al., 1953), we introduce a scaling factor J to allow for faster moves toward the maximum on the surface of (3.1). It also helps avoid the trapping attraction of local maxima. In addition, the corresponding posterior is connected to the Boltzmann distribution, whose density is prescribed by the energy potential f(θ) and the temperature parameter J as

π_J(θ) ∝ exp{ J f(θ) }.   (3.3)

To simulate the posterior mode without evaluating the likelihood directly (Jacquier et al., 2007), we sample J independent copies of the hidden variable Z_1. Denoting the copies by Z_1^1, . . . , Z_1^J, we sample them simultaneously and independently from the posterior distribution N(µ_z, σ_z^2), where µ_z, σ_z are determined by {x, y, θ}, and we stack the J copies as in (3.4) to amplify the information in y, which is especially useful in finite-sample problems. Figure 1 illustrates our network architecture. The joint posterior given the data (y, x) is such that the marginal posterior concentrates on the density proportional to p(x, y | θ)^J p(θ), providing a simulation-based solution to finding the MAP estimator (Pincus, 1968, 1970). An alternative way to simulate from the posterior mode is Hamiltonian Monte Carlo (Neal, 2011), a modification of the Metropolis-Hastings (MH) sampler that adds a momentum variable ν to the Boltzmann distribution in (3.3) and generates draws from the resulting joint distribution, where M is a mass matrix. Chen et al. (2014) adopt this approach in a deep learning setting.
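A toy conjugate-normal example (our illustration) shows why raising the likelihood to the power J, which is what stacking J copies accomplishes, concentrates the posterior around its mode: the tempered posterior's standard deviation shrinks roughly like 1/sqrt(J) while its mean moves toward the maximum likelihood estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(loc=1.5, scale=1.0, size=30)   # data with unknown mean theta

def tempered_posterior(y, J, prior_var=10.0):
    # Posterior proportional to p(y | theta)^J * p(theta), with a N(0, prior_var) prior
    n = len(y)
    prec = J * n + 1.0 / prior_var            # likelihood precision scaled by J
    mean = J * n * y.mean() / prec
    return mean, np.sqrt(1.0 / prec)

for J in (1, 2, 5, 10):
    mean, sd = tempered_posterior(y, J)
    print(J, round(mean, 3), round(sd, 4))    # sd shrinks as J grows; mean -> sample mean
```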
Connection to Diffusion Theory
An alternative to the MCMC algorithm can be derived from diffusion theory (Phillips and Smith, 1996). For example, we can approximate the random walk Metropolis-Hastings algorithm with the Langevin diffusion L_t defined by the stochastic differential equation

dL_t = (1/2) ∇ log f(L_t) dt + dB_t,

where B_t is standard Brownian motion. More specifically, letting d := |θ|, we write the random-walk-like transition as

θ^(t+1) = θ^(t) + (σ²/2) ∇ log f(θ^(t)) + σ ε_t,

where ε_t ∼ N_d(0, I_d) and σ² corresponds to the discretization size.
This can also be derived by taking a second-order approximation of log f, namely

log f(θ^(t+1)) ≈ log f(θ^(t)) + ∇ log f(θ^(t))' (θ^(t+1) − θ^(t)) − (1/2) (θ^(t+1) − θ^(t))' H(θ^(t)) (θ^(t+1) − θ^(t)),

where H(θ^(t)) = −∇² log f(θ^(t)) is the Hessian matrix. Exponentiating both sides, the random-walk-type approximation to f(θ^(t+1)) is proportional to a Gaussian density centered at θ̃^(t),
where θ̃^(t) = θ^(t) + H^{-1}(θ^(t)) ∇ log f(θ^(t)). If we simplify this approximation by replacing H(θ^(t)) with σ^{-2} I_p, the Taylor approximation leads to the updating step

θ^(t+1) = θ^(t) + σ² ∇ log f(θ^(t)) + σ ε_t.

Roberts and Rosenthal (1998) give further discussion on the choice of σ that would yield an acceptance rate of 0.574 to achieve the optimal convergence rate. Mandt et al. (2017) show that SGD can be interpreted as a multivariate Ornstein-Uhlenbeck process, where η is the constant learning rate, A is the symmetric Hessian matrix at the optimum and C_S is the covariance of the mini-batch (of size S) gradient noise, which is assumed to be approximately constant near the local optimum of the loss. They also provide results on the discrete-time dynamics of other Stochastic Gradient MCMC algorithms, such as Stochastic Gradient Langevin Dynamics (SGLD) by Welling and Teh (2011) and Stochastic Gradient Fisher Scoring by Ahn et al. (2012).
Combining their results with the Langevin dynamics of MCMC algorithms, we can write a corresponding approximation of our DA-DL updating scheme. Similar adaptive dynamics are also observed in other methods. Geman and Hwang (1986) show the convergence of the annealing process using Langevin equations. Slice sampling (Neal, 2003) adaptively chooses the step size based on the local properties of the density function; by constructing local quadratic approximations, it can adapt to the dependencies between variables. Murray et al. (2010) further propose elliptical slice sampling, which operates on an ellipse of states.
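For intuition, a minimal unadjusted Langevin sampler of the form above (our sketch, with f taken to be a standard normal density) shows the gradient term pulling the chain toward high-density regions while the injected noise keeps it exploring; in practice a Metropolis correction or a careful choice of σ is needed to control the discretization bias.

```python
import numpy as np

rng = np.random.default_rng(4)

def grad_log_f(theta):
    # Target f = N(0, 1), so grad log f(theta) = -theta
    return -theta

sigma = 0.3                      # discretization step
theta = 5.0                      # deliberately poor starting point
samples = []
for t in range(20_000):
    theta = theta + 0.5 * sigma**2 * grad_log_f(theta) + sigma * rng.normal()
    samples.append(theta)

samples = np.array(samples[5_000:])            # drop burn-in
print(samples.mean(), samples.var())           # roughly 0 and 1
```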
Applications
To illustrate our methodology, we provide three examples: (1) a standard Gaussian regression model with squared loss; (2) a binary classification model under the support vector machine framework; (3) a logistic regression model paired with a Pólya mixing distribution. For the Gaussian regression and SVM models, we implement the J-copies stacking strategy to provide full posterior modes.
Before diving into the examples, we introduce the notation used throughout this section. We continue to denote the output by y = (y_1, . . . , y_n)', y_i ∈ R, the input by x = (x_1, . . . , x_n)', x_i ∈ R^p, the latent variable of the top layer by Z_1 = (z_{1,1}, . . . , z_{1,n})', z_{1,i} ∈ R, and its stacked version as in (3.4). We introduce stochastic noises ε_0 = (ε_{0,1}, . . . , ε_{0,n})' in the top layer and ε_z = (ε_{z,1}, . . . , ε_{z,n})' in the second layer, where ε_{0,i} are iid N(0, τ_0²) and ε_{z,i} are iid N(0, τ_z²). The scale parameters τ_0 and τ_z are pre-specified and determine the level of randomness or uncertainty for the DA-update and the SGD-update, respectively. We use η to denote the learning rate used in the SGD updates and T the number of training epochs. We use ‖·‖ to denote the ℓ_2-norm, with ‖y‖² = Σ_{i=1}^n y_i², and the matrix-weighted norm ‖y‖_Σ = y'Σy. Our models differ from standard deep learning models and some newly proposed Bayesian approaches in the adoption of the stochastic noises ε_0 and ε_z, which distinguishes our model from other deterministic neural networks. By letting ε_z follow a spiky distribution that puts most of its mass around zero, we can push the estimate toward the posterior mode rather than the posterior mean. The randomness allows us to adopt a stacked system and make the best use of the data, especially when the dataset is small.
Gaussian Regression
We consider the regression model y_i = W_0 z_{1,i} + b_0 + ε_{0,i}. The posterior updates for Ŵ_0 and b̂_0 are given in (4.1) and (4.2), where ȳ = (1/n) Σ_{i=1}^n y_i and C_z is a normalizing constant. The latent variable Z_1 is drawn from a normal distribution Z_1 ∼ N(µ_Z, σ_Z²), with mean and variance determined by the data and the current parameters. The J copies of Z_1 are simulated and stacked as Z̃_1 = (Z_1^1, . . . , Z_1^J)'. The updating scheme for this Gaussian regression is summarized in Algorithm 1.
The model can also be generalized to multivariate y. Let y_i be a q-dimensional vector with components y_{ik}, k = 1, . . . , q. The model is then written componentwise, where W_0 = (W_{01}, . . . , W_{0q})' is now a q-dimensional vector with each W_{0k} computed similarly to (4.1), and b_0 = (b_{01}, . . . , b_{0q})' is also q-dimensional with each b_{0k} calculated as in (4.2). The posterior update for Z_1, drawn jointly from the deep learner f_B and the sampling layer f_0, becomes a multivariate normal distribution with mean and variance defined analogously.
Algorithm 1: Data Augmentation with J-copies for Gaussian Regression (DA-GR).
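Since the closed-form expressions (4.1)-(4.2) are not reproduced here, the following is only a rough sketch of the J-copies step for the top-layer Gaussian regression: Z_1 is sampled around a stand-in for the lower network output, the copies are stacked with the repeated response, and (W_0, b_0) is refit by least squares on the stacked data; the noise scale and the lower network are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, J = 100, 5
x = rng.normal(size=(n, 3))
y = 2.0 * np.tanh(x[:, 0]) + 0.1 * rng.normal(size=n)

fB_out = np.tanh(x[:, 0])          # stand-in for the lower network output f_B(x)
tau_z = 0.2                        # illustrative latent noise scale

# Sample J independent copies of Z_1 around f_B(x) and stack them with the repeated y
Z_copies = fB_out[None, :] + tau_z * rng.normal(size=(J, n))   # shape (J, n)
z_stacked = Z_copies.reshape(-1)                               # length J * n
y_stacked = np.tile(y, J)

# Least-squares DA-update for the top layer (W_0, b_0) on the stacked data
A = np.column_stack([z_stacked, np.ones(J * n)])
W0, b0 = np.linalg.lstsq(A, y_stacked, rcond=None)[0]
print(W0, b0)   # roughly 2 and 0 for this synthetic example
```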
Support Vector Machines (SVMs)
Support vector machines require data augmentation for rectified linear unit (ReLU) activation functions. Polson and Scott (2011) and Mallick et al. (2005) write the support vector machine model as a scale mixture over an augmentation variable λ = (λ_1, . . . , λ_n)', where p(λ) follows a flat uniform prior. The augmentation variables can be regarded as slacks admitting fuzzy boundaries between classes.
By incorporating the augmentation variable λ, the ReLU deep learning model can be written in augmented form. From a probabilistic perspective, the likelihood function for the output y is conditionally Gaussian given λ. Derived from this augmented likelihood function, the conditional updates are weighted least-squares expressions involving Λ = diag(λ_1, . . . , λ_n), the diagonal matrix of the augmentation variables.
In order to generate the latent variables, we use conditional Gibbs sampling, with the means and variances given by the corresponding full conditionals, where IG denotes the Inverse Gaussian distribution and 1 = (1, . . . , 1)' is an n-dimensional vector of ones.
The J-copies strategy can also be adopted here: Z_1^j and λ^j need to be sampled independently for j = 1, . . . , J. Algorithm 2 summarizes the updating scheme with J-copies for SVMs.
Algorithm 2: Data Augmentation with J-copies for SVMs (DA-SVM), in which the latent variables are drawn jointly from the deep learner f_B and the sampling layer W_0.
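A sketch of the DA-SVM inner loop, following the Polson and Scott (2011) scale-mixture representation of the hinge loss in which λ_i^{-1} given β is inverse Gaussian with mean |1 − y_i z_i'β|^{-1}; here raw features stand in for the top-layer input Z_1, the ridge prior precision is an illustrative choice, and β is set to its conditional mode (a weighted least-squares step) rather than sampled.

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 300, 3
Z = rng.normal(size=(n, p))
beta_true = np.array([1.5, -1.0, 0.5])
y = np.sign(Z @ beta_true + 0.3 * rng.normal(size=n))   # labels in {-1, +1}

beta = np.zeros(p)
prior_prec = 1.0                                          # illustrative ridge prior on beta

for t in range(100):
    margin = 1.0 - y * (Z @ beta)
    # Latent scales: 1/lambda_i ~ InverseGaussian(|1 - y_i z_i' beta|^{-1}, 1)
    inv_lam = rng.wald(mean=1.0 / np.maximum(np.abs(margin), 1e-6), scale=1.0)
    lam = 1.0 / inv_lam
    # Conditional beta-update: weighted least squares of (1 + lambda_i) on y_i z_i
    w = 1.0 / lam
    Xs = y[:, None] * Z
    A = Xs.T @ (w[:, None] * Xs) + prior_prec * np.eye(p)
    b = Xs.T @ (w * (1.0 + lam))
    beta = np.linalg.solve(A, b)

print(np.mean(np.sign(Z @ beta) == y))   # in-sample accuracy of the fitted SVM
```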
Logistic Regression
The aim of this example is to show how an EM algorithm can be implemented via a weighted L_2-norm in deep learning. Adopting the logistic regression model, we focus on the penalization of W_0, with parameter optimization given by a penalized log-likelihood criterion. The outcomes y_i are coded as ±1, and τ is assumed fixed.
For the likelihood function ℓ and regularization penalty φ, we assume the forms in (4.8), where µ_W, κ_W are pre-specified terms controlling the prior of the penalty term and λ is endowed with a Pólya distribution prior P(λ). Letting ω_i^{-1} have a Pólya distribution with α = 1, κ = 1/2, the following three updates generate a sequence of estimates that converges to a stationary point of the posterior, where Ω = diag(ω_1, . . . , ω_n) and Λ = diag(λ_1, . . . , λ_p) are diagonal matrices, x* can be written as x* = diag(y) Z_1, and φ'(·) denotes the derivative of the standard normal density function.
In the non-penalized case, with λ_i = 0 for every i, the updates simplify to weighted least squares. We focus on the non-penalized binary classification case, and Algorithm 3 summarizes our approach. Further generalizations are available. For example, a ridge-regression penalty, along with the generalized double-Pareto prior (Armagan et al., 2013), can be implemented by adding a sample-wise L_2-regularizer. A multinomial generalization of this model is also available in the literature.
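A sketch of the weighted least-squares EM update for a logistic top layer, using the standard Pólya-Gamma conditional expectation E[ω | ψ] = tanh(ψ/2)/(2ψ); the features stand in for the top-layer input Z_1, and the small ridge term is an illustrative stabilizer rather than the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 400, 3
Z = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5])
prob = 1.0 / (1.0 + np.exp(-(Z @ beta_true)))
y01 = rng.binomial(1, prob)                    # labels in {0, 1}
kappa = y01 - 0.5                              # Polya-Gamma "centred" response

def pg_mean(psi):
    # E[omega | psi] = tanh(psi/2) / (2 psi), with the limit 1/4 at psi = 0
    out = np.full_like(psi, 0.25)
    nz = np.abs(psi) > 1e-8
    out[nz] = np.tanh(psi[nz] / 2.0) / (2.0 * psi[nz])
    return out

beta = np.zeros(p)
ridge = 1e-2                                    # illustrative small penalty on beta
for t in range(100):
    psi = Z @ beta
    omega = pg_mean(psi)                        # E-step: expected PG weights
    A = Z.T @ (omega[:, None] * Z) + ridge * np.eye(p)
    beta = np.linalg.solve(A, Z.T @ kappa)      # M-step: weighted least squares

print(beta)                                      # roughly recovers beta_true
```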
Experiments
We illustrate the performance of our methods on both synthetic and real datasets, compared to deep ReLU networks without the data augmentation layer. We refer to the latter as DL in our results. We denote the data-augmented Gaussian regression in Algorithm 1 as DA-GR, the SVM implementation in Algorithm 2 as DA-SVM, and the logistic regression in Algorithm 3 as DA-logit. For an appropriate comparison, we adopt the same network structures, such as the number of layers, the number of hidden nodes, and regularizations such as dropout rates, for DL and our methods.
Algorithm 3 (Data Augmentation for Logistic Regression, DA-logit): retrieve the input and output of the top layer Z_1; calculate the sample-wise weights; update the entire deep learner f_θ with {y, x}.
The differences between our methods and DL are that (1) the top layer weights W_0, b_0 of DL are updated via SGD optimization, while the weights W_0, b_0 of our methods are updated via MCMC or EM; (2) for binary classification, DA-logit and DL adopt a sigmoid activation function in the top layer to produce a binary output, while DA-SVM uses a linear function in the top layer and the augmented sampling layer transforms the continuous value into a binary output. For all experiments, the datasets are randomly partitioned into 70% training and 30% testing. For the optimization we use a modification of the SGD algorithm, the Adaptive moment estimation (Adam, Kingma and Ba (2015)) algorithm. The Adam algorithm combines the estimate of the stochastic gradient with the earlier estimate of the gradient, and scales this using an estimate of the second moment of the unit-level gradient. We have also explored the RMSprop (Tieleman and Hinton, 2012) optimizer and observe similar decreases in regression or classification errors.
To illustrate how the choice of J affects the speed of convergence, we include implementations of DA-GR and DA-SVM with J = 2, 5, 10. We have explored different sampling noise variances τ_0, τ_Z, but the choices, in general, do not affect the results significantly.
Friedman Data
The benchmark setup (Friedman, 1991) uses a regression of the form y_i = 10 sin(π x_{i1} x_{i2}) + 20 (x_{i3} − 0.5)² + 10 x_{i4} + 5 x_{i5} + ε_i, where x_i = (x_{i1}, . . . , x_{ip}) and only the first 5 covariates are predictive of y_i. We run the experiments with n = 100, 1 000 and p = 10, 50, 100, 1 000 to explore the performance in both low-dimensional and high-dimensional scenarios. We implement both one-layer (L = 1) and two-layer (L = 2) ReLU networks with 64 hidden units in each layer. For the DA-GR model, we let τ_0 = 0.1, τ_z = 1. The experiments are repeated 50 times with different random seeds. Figure 2 reports the three quartiles of the out-of-sample mean squared errors (MSEs). The top row shows the performance of the one-layer networks and the bottom row that of the two-layer networks. The two-layer networks perform better and converge faster. For DA-GR, when J = 5 or J = 10, it converges significantly faster and the prediction errors are also smaller. When J = 2, the performance of DA-GR is relatively similar to the deep learning model with only SGD updates. This is due to the fact that DA-GR with J-copies learns the posterior mode, which is equivalent to the minimization point of the objective function, and it concentrates on the mode faster when J becomes larger.
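For reference, scikit-learn ships a generator for exactly this benchmark; the noise level below is our assumption, since it is not stated in the text we have, and the 70/30 split mirrors the experimental setup.

```python
from sklearn.datasets import make_friedman1
from sklearn.model_selection import train_test_split

# n and p as in one of the paper's settings; the noise level is our assumption
X, y = make_friedman1(n_samples=1000, n_features=10, noise=1.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
print(X_train.shape, X_test.shape)
```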
The computation costs of DA are higher, as shown in Figure 3. This is not entirely unexpected, since we introduce sampling steps. When J increases, the computation costs also increase slightly. Given the improvement in convergence speed and prediction errors, our data augmentation strategies are still worthwhile even with some extra computation costs. In addition, for each epoch, we can draw the sample-wise posteriors in parallel, so the gap in computation time can be further reduced.
Boston Housing Data
Another classical regression benchmark is the Boston Housing dataset; see, for example, Hernández-Lobato and Adams (2015). The data contain n = 506 observations with 13 features. To show the robustness of DA, we repeat the experiment 20 times with different training subsets. We adopt ReLU networks with one hidden layer of 64 units and set the dropout rate to 0.5. For the DA-GR model, we let τ_0 = 0.1, τ_Z = 1. Figure 4 shows the prediction errors of all methods. DA-GR with J = 10 performs significantly better than the others, in terms of both prediction errors and convergence rates. Meanwhile, DA-GR with J = 2 behaves similarly to SGD at the beginning, but it converges significantly faster than SGD after a few epochs. This again shows that, with the J-copies strategy, our method helps the optimization converge faster, and injecting the noise helps the model generalize well out-of-sample.
Wine Quality Data Set
The Wine Quality dataset contains 4 898 observations with 11 features. The output wine rating is an integer variable ranging from 0 to 10 (the observed range in the data is from 3 to 9). The frequency of each rating is reported in Table 2; the most frequent ratings are 5 and 6. Since we focus on binary classification problems, we consider two types of classification, both of which have relatively balanced categories: (1) wine with a rating of 5 versus 6 (Test 1); (2) wine with a rating of ≤ 5 versus > 5 (Test 2). We use the same network architectures adopted in Friedman's example with τ_0 = τ_z = 0.1. Figure 5 provides results for the two types of binary classification. In both cases, DA-SVM performs better than DA-logit and DL. The advantage of a large J is still significant and helps convergence, especially in the early phase. DA-logit outperforms DL in Test 1 when the network is shallow (L = 1), while in other cases it performs similarly to DL.
Airbnb Data Set
The Airbnb Kaggle competition provides a more challenging application with 213 451 observations in total, classified by destination into 12 classes: the 10 most popular countries, other, and no destination found (NDF), where other corresponds to any country not among the top 10 and NDF corresponds to cases in which no booking was made. Table 3 reports the percentage of each class. We follow the preprocessing steps in Polson and Sokolov (2017). The list of variables contains information from the session records (number of sessions, summary statistics of action types, device types and session duration) and user tables such as gender, language, affiliate provider, etc. All categorical variables are converted to binary dummies, which leads to 661 features in total. For the neural network architecture, we use a two-layer ReLU network with 64 hidden units in each layer and set the dropout rate to 0.3. For the SVM model, we let τ_0 = τ_z = 0. Our goal is to test the binary classification models on this dataset. We consider two types of binary responses, both of which have relatively balanced numbers of observations in each category. We compare the misclassification rates of DA-SVM in Algorithm 2 with J = 2, 5, 10, DA-logit in Algorithm 3, and the ReLU networks without the data augmentation layer, after training for 1 to 20 epochs. Figure 6 shows the binary classifications for Spain versus UK and UK versus Italy. In both cases, the out-of-sample misclassification rates are not small and the fluctuations over epochs are large, suggesting that a better model structure may be needed. However, we still observe that DA-SVM with J = 5 or J = 10 has smaller classification errors over epochs and that the out-of-sample errors decrease faster during the early phase of training.
Summary of Experiment Results
From the above examples, we observe that DA-logit, which is implemented under the EM principle, does not show an obvious advantage over the vanilla neural network. It shows some improvement in convergence speed when the network is shallow in the Wine Quality case, as in Figure 5. This could be partially due to the fact that we did not apply regularization on the DA layer in our logit implementation. More importantly, the performance of the EM algorithm is contingent on the statistical properties of the objective function. Although the surrogate function is constructed via only the top layer, whose quadratic form ensures concavity, the properties of the objective function as a whole become complicated when the deep network architecture is more complex. Since our method also inherits the downsides of EM and MM algorithms, convergence to the global maximum is not guaranteed in the absence of concavity. However, this observation opens the possibility of future research in which EM algorithms are combined with shape-constrained neural networks (Gupta et al., 2020).
By contrast, the MCMC methods with the J-copies strategy significantly improve the prediction errors and convergence speed of the neural networks for both regression and classification problems, and the advantages become more pronounced when J is larger. This suggests that stochastic exploratory methods are preferable when the statistical properties of the objective function are unknown or too complex, and that the J-copies scheme largely alleviates the problem of being trapped in local modes.
One concern with MCMC methods is the extra computational cost induced by the sampling steps. In our current version, where p_1 = 1, the sample-wise sampling steps can be computed in parallel. If one wishes to introduce a higher-dimensional latent variable Z_1 such that p_1 > 1, the computational costs will increase, as this may involve sampling from multivariate distributions. In that case, fast sampling implementations such as Bhattacharya et al. (2016) are recommended to speed up the process.
Discussion
Various regularization methods have been deployed in neural networks to prevent overfitting, such as early stopping, weight decay, dropout, and gradient noise (Neelakantan et al., 2017). Bayesian strategies tackle the regularization problem by proposing probability structures on the weights. We show that data augmentation strategies are available for many standard activation functions (ReLU, SVM, logit) used in deep learning.
Using MCMC provides a natural stochastic search mechanism that avoids procedures such as back-tracking and provides a full description of the objective function over the entire range of Θ. Training deep neural networks thus benefits from additional hidden stochastic augmentation units (a.k.a. data augmentation). Uncertainty can be injected into the network through probabilistic distributions on only one or two layers, permitting more variability in the network. When more data are observed, the level of uncertainty decreases as more information is learned and the network becomes more deterministic. We also exploit the duality between maximum a posteriori estimation and optimization. We provide a J-copies stacking scheme to speed up convergence to the posterior mode and avoid the trapping attraction of local modes. Concerning efficiency, DA provides a natural framework for converting the objective function into weighted least squares and is straightforward to integrate with the current deep learning training process.
Our three motivational examples illustrated the advantages of data augmentation. Our work has the potential to be generalized to many other data augmentation schemes and different regularization priors. Probabilistic structures on more units and layers are also possible to allow for more uncertainty.
Our DA-DL methods enjoy the best of both worlds. On one hand, with the data augmentation on top, the model is robust to random weight initialization. Although we still need to specify the learning rates for the deep architecture, the top layer can learn adaptively and the entire network becomes less sensitive to the choice of learning rate. On the other hand, the fast SGD updates for the deep architecture largely alleviate the computational concerns compared to a fully Bayesian hierarchical model.
There are many directions for future research, including adding more sampling layers so that the model can accommodate more randomness and flexibility, and using the weighted Bayesian bootstrap (Newton et al., 2021) to approximate the unweighted posteriors by assigning random weights to each observation and penalty. Uncertainty quantification for prediction is also possible. Although we focus on the training aspect of deep learning, one can collect posterior draws θ^(t) from the MCMC procedure once the training process converges. Using (3.2), we can construct predictive intervals and conduct inference.
"Computer Science",
"Mathematics"
] |
The effect of child benefit on female labor supply
In 2016, the Polish government introduced a large child benefit, called “Family 500+”, with the aim to increase fertility and reduce child poverty. It is universal for the second and every further child and means-tested for the first child. We study the impact of the new benefit on female labor supply, using Labor Force Survey data. Based on a difference-in-differences methodology, we find that the labor market participation rates of women with children decreased after the introduction of the benefit compared to that of childless women. The labor force participation rate of mothers showed a drop of 2–3 percentage points by mid-2017 as a result of the “Family 500+” program. The effect was higher among women with lower levels of education and among women living in small towns. Current version: August 24, 2020
Introduction
In 2016, the Polish government introduced a large new child benefit, called "Family 500+", with the aim to increase fertility from a low level and reduce child poverty. Up until 2019, the monthly benefit -amounting to 500 PLN, a third of a net minimum wage -was universal for the second and every further child and means-tested for the first child. a This program more than doubled fiscal support for families, making Poland one of the top spenders in the European Union concerning cash transfers for families (3% of gross domestic product [GDP] in 2016). Other means-tested family benefits and tax breaks continue to exist, and the "Family 500+" transfer does not affect the eligibility for these or any other benefits, as it is not considered income for the purposes of establishing benefit eligibility.
This paper looks at the impact of the new benefit on female labor supply. The transfer increased out-of-work income significantly, especially for parents with several eligible children, reducing incentives to enter the labor market through an income effect. This held particularly for lower-earning families. Furthermore, in the first 3 years of its operation, the benefit for the first child was fully withdrawn once family income rose above the eligibility ceiling. This could create an inactivity trap for singles or second earners from low-earning families, as they would need to earn quite a high wage to make up for this loss.
From a theoretical perspective, in a simple static labor supply framework, child benefits may reduce labor supply through an income effect, as they shift the consumption-leisure budget constraint (Blundell, 1995;Moffitt, 2002;Cahuc et al., 2014). In a search model framework, the "Family 500+" child benefit is likely to increase the reservation wage and thus discourage labor market participation among individuals close to the income threshold below which the benefit for the first child was paid. Women, as primary caregivers, were likely to be particularly responsive to such incentives, which was confirmed by empirical evidence for other countries (Jaumotte, 2003;Milligan and Stabile, 2009;Haan and Wrohlich, 2011). Schirle (2015) analyzed the introduction of the Universal Child Care Benefit (UCCB) in Canada in 2006 and the impact it had on the labor market. Using Canadian Labor Force Survey data for 2003-2009, she found large and significant negative income effects of the UCCB on labor supply of mothers and fathers. The effects were stronger for less-educated parents, though affecting better educated women as well. Among mothers, labor supply was decreased at both the extensive and the intensive margins. González (2013) used a regression discontinuity framework to analyze the fertility and labor supply effects of a large universal one-time benefit introduced in 2007 in Spain. She found a negative labor force participation effect a year after birth, which however disappeared by the time the child was 2 years old. The negative effects of child benefits on female labor supply tend to be greater for women with lower potential incomes and lower levels of education (Eissa and Liebman, 1996;Immervoll et al., 2007). Moreover, marital status is likely to play a role in the impact of child benefits on female labor supply, with married women reacting more strongly to changes in income and wages. Koebel and Schirle (2016) followed up on Schirle's (2015) study of the Canadian UCCB, finding that the benefit decreased labor supply among married women but increased labor force participation of divorced/separated women, with no impact on mothers who had never been married or those in common-law relationships. Finally, the labor supply response to child benefits differs across countries, reflecting not only the institutional differences in the design of tax-benefit systems but also the level of economic development. In particular, Scharle (2007) finds the negative effect of cash benefits on female labor force participation to be higher in Central and Eastern European countries, which may be a reflection of lower income levels in these countries.
We contribute to the current knowledge on the labor market effects of family benefits in three ways. First, we study the labor market effects of such transfers in the context of a catching-up economy with hitherto relatively low social and family transfers. Second, the benefit is large relative to average incomes compared to child benefits in other countries. It amounts to around 16% of the average wage (in net terms), whereas, for instance, the childcare benefit introduced in Canada in 2006 amounted to around 3% of the average monthly wage in Canada at that time. Thus, we expect a higher likelihood of observing a significant impact. Third, the reform can be treated as a natural experiment. The 500+ benefit was introduced quickly after it was first announced as an element of the electoral campaign by a new government, so women are very unlikely to have anticipated its introduction by adjusting their labor supply or their decision to have children beforehand.
At the time of the implementation of the "Family 500+" program, Poland was characterized by a very good labor market situation on the one hand, and low female labor market participation rates on the other. The latter is related to strong family values shaped by deep-rooted Catholicism and a relatively weak, although improving, institutional childcare infrastructure, in particular in rural areas.
Given this unique institutional framework, this study can add important insights into the nature of labor supply effects of child benefits. Our hypothesis is that the new child benefit may have reinforced a longer-standing downward trend in labor force participation among lower-skilled women in Poland, while slowing the increase among higher-skilled women.
The fact that the benefit for the first child was withdrawn once per capita family income rose beyond the eligibility ceiling limited the incentives for single mothers or second earners with children to work. An unemployed single mother of two taking up a job paying the average wage would retain less than 20% of her earnings as a result of taxes and benefit withdrawal.
Taking into account childcare costs, which can be very high in the private sector (often the only available option), she would actually lose money.
We use Polish Labor Force Survey data for an ex-post evaluation of the reform. Before the program's implementation, Myck (2016) used a discrete-choice labor-supply model and Polish Household Budget Survey data to simulate the effects of the "Family 500+" benefit on labor supply. He found that the benefit could reduce labor supply in the long term by about 240,000 individuals. Based on a difference-in-differences methodology, we find that the labor market participation rates of women with children decreased significantly after the introduction of the benefit, compared to childless women, who were not eligible for the benefit. Results imply that the labor force participation rate of mothers would have been 2-3 percentage points higher in the absence of the reform. The effect set in earlier for partnered women and, within this group, it was the highest among those with lower levels of educational attainment and thus, generally, with lower incomes.
Methodology and Data
We test the hypothesis that the implementation of the "Family 500+" program led to a fall in labor force participation among mothers. To this end, we use a difference-in-differences approach (Angrist and Pischke, 2014;Lechner, 2011). To identify the effect of the introduction of the "Family 500+" benefit, we compare changes in participation rates of (1) women who were eligible for the transfer, as they had children -our treated group, and (2) women who had no children and as such were not eligible -the control group.
In the case of women with one child, many were not eligible for the benefit, because their income was too high. Yet, single women could, in principle, become eligible by withdrawing from the labor market or reducing their hours worked so that their income dropped below the eligibility ceiling, as could some partnered women -provided their partner's income was low enough. It seems sensible to consider these women as treated, since the child benefit was potentially available to them and might thus influence their behavior. This is less clear for women whose partner's income was so high that they could not become eligible for the benefit even by withdrawing from the labor market. Assigning them to the treated group should bias the estimated impact on participation downward, as they cannot be reasonably expected to react to the benefit. This is why we also test some alternative specifications, which are discussed later.
We use Polish Labor Force Survey data for the years 2010-2017 (and from 2007 in the placebo test). We restrict the sample to women aged 20-49 years. The analyses are run separately for single and partnered women to account for differences in their labor force participation decisions, which are likely to be influenced by the presence of a partner. Partnered women are defined as women living with a spouse or cohabiting partner in the same household.
We compare the labor force participation rates before and after the second half of 2016, as municipal offices started transferring the "Family 500+" benefits as of the end of June 2016. We study the labor market reaction in the first year after the introduction of the benefit, i.e., until mid-2017. It is safe to assume that it was not anticipated and women did not react before they actually received the money -the benefit was announced in February 2016 and formally introduced in April 2016 when the first forms were made available to fill in (the municipal offices then had 3 months to disburse the benefit).
We estimate the following equation:

A_it = a + b X_it + g T_i + d Post_t + q (T_i × Post_t) + Y_t + e_it,   (1)

where A_it is a dummy variable indicating whether individual i is active in the labor market in period t; a is a constant; X_it is a vector containing a set of individual-specific characteristics detailed in Table 1. Unfortunately, income and wage variables cannot be included as controls, as these data are unavailable (income) or too patchy (wages) in the Polish Labor Force Survey.
T i is a treatment group variable, specifying whether the woman has children (treated group) or not (control group); Post t is a dummy variable for the period following the second quarter of 2016 when the child benefit was introduced, or the posttreatment period; e it is an error term; and a, b, g, d, and q are parameters to be estimated. We also introduce time fixed effects to account for changes in labor market policies and the economic situation in general (Y t is a set of half-year dummies).
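A sketch of how this specification could be estimated as a linear probability model with heteroskedasticity-robust standard errors (as described below), using statsmodels on a synthetic stand-in for the Polish LFS sample; all variable names and the data-generating numbers are hypothetical, and the Post main effect is absorbed by the half-year fixed effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the Polish LFS sample (hypothetical variable names):
# active (0/1), treated (0/1: has one or two children), post (0/1: after 2016H1),
# halfyear (time period) and an individual control (age).
rng = np.random.default_rng(8)
n = 5000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "halfyear": rng.integers(0, 16, n),       # 2010H1 ... 2017H2 coded 0..15
    "age": rng.integers(20, 50, n),
})
df["post"] = (df["halfyear"] >= 13).astype(int)          # periods from 2016H2 onward
p = 0.7 - 0.02 * df["treated"] - 0.025 * df["treated"] * df["post"]
df["active"] = rng.binomial(1, p)

# Linear probability model; the Post main effect is absorbed by the half-year dummies
model = smf.ols("active ~ treated + treated:post + C(halfyear) + age", data=df)
res = model.fit(cov_type="HC1")                          # robust standard errors
print(res.params["treated:post"])                        # q: the DiD treatment effect
```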
We use a linear probability model to estimate Equation (1). We run a probit model as a robustness check, and the results are very similar (available upon request). To overcome error-term heteroskedasticity, we compute robust standard errors. Table 1 compares the descriptive statistics for the treated and control groups in 2016, distinguishing between single and partnered women. Among partnered women, those without children were much more likely to be employed than women with children. The opposite held among single women, where those with children were more likely to be employed. Not surprisingly, childless women were much younger, in particular among singles. Childless single women were also better educated and more likely to be still in education than single mothers. Among partnered women, there was a higher share of rural inhabitants in the treated group. These compositional differences may lead to different trends in labor participation between the two groups. To eliminate their impact on the labor force participation of women with and without children, we introduce the socioeconomic variables displayed in Table 1 as control variables in the estimated models.
Table 1: Descriptive statistics for women aged 20-49 years in 2016 (treated group: women with one or two children; control group: childless women); columns report Control (%) and Treated (%) for single and partnered women.
Testing the common trends hypothesis
A key assumption of the difference-in-differences methodology is that, before the treatment, changes in the level of the outcome variable were the same in the treatment and control groups.
We start with a visual inspection of historical trends in our outcome variable, labor force participation (see, e.g., Gebel and Voßemer, 2014; Centeno et al., 2009). Figure 1 shows that changes in participation rates for women with one or two children and those without children were indeed quite similar prior to the introduction of the child benefit in 2016, though not completely parallel. These trends, however, reflect both (1) changing probabilities of participating in the labor market among women with and without children and (2) a changing composition of these two groups (e.g., rising shares of tertiary-educated women), which also affects labor force participation rates. The prereform trend in the labor force participation rate of women with three or more children was quite different; therefore, we consider that childless women are not sufficiently similar to them for a valid comparison and drop women with three or more children from our analysis.
To further ensure that comparing the treated and control groups permits identification of the effect of the child benefit, we test the common trends hypothesis more formally, using two approaches. Firstly, we include in the model the interactions of the treatment group variable not only with the posttreatment period (treatment effect) but also with all time dummy variables (placebo effects), to test whether the difference between the treatment and control groups changed at any point in time. Insignificant interaction terms would indicate that the difference between the two groups has remained stable and that the common trends hypothesis is valid.
Secondly, we vary the "Post" variable in Equation 1 so that it covers different periods, including 2009-2016, 2008-2015, and 2007-2014 (i.e., the main specification moved backward by 1, 2, or 3 years). If the coefficient of the interaction term of the treatment group dummy with a subperiod dummy were significant, this would indicate that the difference between the treatment and control groups had changed over time. In that case, the common trends hypothesis would not be valid.
Results for the main specification are presented in Table 2.
Results and Discussion
Table 3 reports the estimates of our main parameters of interest: g, the group effect, and q, the treatment effect. Estimates of the other coefficients are presented in Table A1 in the Appendix.
The effect of child benefits on labor force participation
The estimates imply that, after adjusting for differences in the composition of the two groups, the labor force participation rate of childless women with a partner was almost 6 percentage points higher than for partnered women with one or two children over the estimation period.
Following the introduction of the child benefits, this difference increased by 2.1 percentage points. The implication is that labor force participation among partnered mothers might have been 2.1 percentage points higher in the absence of the child benefits. The treatment effect for single women is of the same order.
To test whether the effect of the child benefit on female labor force participation changed over time, we also estimated Equation 1, allowing for a different treatment effect in 2016 and 2017. Results presented in Table 4 show that the negative effect of the benefit on labor force participation actually strengthened in 2017 for both partnered and single women. For single women, it was insignificant in the first posttreatment period and a little higher than for partnered women in the second period. The coefficients for partnered women and single women have significance levels of 0.000 and 0.008, respectively.
Note: The coefficients of all covariates are provided in Table A1 in the Appendix. Robust standard errors were computed. Significance levels: ***0.01, **0.05, and *0.1. Source: Own calculations based on Polish Labor Force Survey data.
One may expect the treatment effect for partnered mothers to be higher because their labor force participation is likely to be more elastic. Thus, the similar size of the treatment effect for the two groups over the entire period analyzed may be surprising. Yet, the dynamics of the effect show that single women indeed reacted more slowly to the introduction of the "Family 500+" benefit.
Overall, in absolute terms, the estimates suggest that up to 100,000 women did not participate in the labor market in the first half of 2017 due to the "Family 500+" benefit. This corresponds to 1.3% of all women participating in the labor market in Poland and 1.9% of active women aged 20-49 years.
Testing for heterogeneous effects
We also test whether the impact of the "Family 500+" benefit on the labor force participation rate of women with children was heterogeneous across different groups of women. To verify this, the group and postperiod dummies and their interaction are additionally interacted with the socioeconomic variables described in Table 1, using the equation below, with the notation as in Equation 1. For parsimony, we test heterogeneity with a simple postperiod dummy and run separate regressions for each socioeconomic variable. X^c_it is a subvector of X_it containing the variable of interest. Moreover, s, m, and r are newly added vectors of parameters to be estimated; in particular, m is a vector of parameters capturing different treatment effects by socioeconomic group.
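A plausible form of this extended specification, written out from the description of the added interaction terms (the original equation may group the terms slightly differently), is:

```latex
y_{it} = \alpha + g\,T_i + \delta\,\mathrm{Post}_t + q\,(T_i \times \mathrm{Post}_t)
       + \beta' X_{it}
       + s'\,(T_i \times X^{c}_{it})
       + r'\,(\mathrm{Post}_t \times X^{c}_{it})
       + m'\,(T_i \times \mathrm{Post}_t \times X^{c}_{it})
       + \varepsilon_{it},
```

where T_i is the treatment group dummy and Post_t the posttreatment period dummy.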
The heterogeneous treatment effects for partnered women are displayed in Table 5. For single women, the treatment effects do not differ significantly by socioeconomic group in most of the cases. The full set of results is presented in Table A2 in the Appendix.
The estimates confirm that the effect of child benefits is strongest for women with lower levels of education. This lends support to the idea that women with low earnings potential are most likely to react to an increase in transfers, in particular when they can rely on the income of a partner. Women living in midsized towns seem to be most strongly affected, possibly because their labor market situation is more difficult and their earnings lower, which in turn makes the new benefit more generous in relative terms. The youngest age group seems to react most strongly to the introduction of child benefits (which may also reflect potentially lower earnings of labor market entrants), while the treatment effect for partnered women older than 30 years of age is insignificant.
Whether women have one or two children does not seem to matter among partnered mothers, although it differentiates the effect significantly among single mothers (Table A2 in the Appendix). The treatment effect among single mothers of two children was 4.8 percentage points, which is 4.0 percentage points larger in absolute terms than the effect among single mothers of one child. Such a relatively large reaction of single mothers of two children is likely related to the income eligibility ceiling for the first child and the fact that it is "easier" to fall below it for single earners.
In terms of the age of the youngest child, mothers whose youngest child was <1 year or between 4 and 6 years of age reacted less strongly than others. The treatment effect for mothers of children <1 year was even positive; this has to be interpreted with caution, as women on maternity leave are counted as employed. The smaller coefficients for mothers of children aged between 4 and 6 years may be puzzling. One possible explanation is that the income effect was counterbalanced for those mothers, which may be related to weak childcare infrastructure and the high costs of private kindergartens: the 500+ benefit may have made it possible for some mothers of preschool-age children to return to work and afford childcare.
Robustness tests
To test the validity of our results, we run a series of robustness checks. First, we consider only women with two children (who are always eligible for the 500+ benefit) as the treated group, comparing them to childless women and leaving out women with one child. Second, we use a dynamic perspective and refer to panel data on flows between activity and inactivity. Third, we modify the assignment of women with one child to the treatment or control group using information on the take-up of social assistance benefits. Fourth, we reinforce our difference-in-differences framework with a matching procedure. In the final, fifth robustness test, we look at employment rather than activity as the outcome variable. All five robustness checks (R1-R5 below) confirm a negative impact of the treatment on female labor market outcomes.
Table 5 (excerpt). Heterogeneous treatment effects for partnered women:
Model with interactions for number of children (base: two)
Treatment effect for mothers of two children: -0.024***
Difference in treatment effect for mothers of one child: 0.006
Model with interactions for age of the youngest child (base: 7-12 years)
Treatment effect for mothers of children aged 7-12 years: -0.040***
Difference in treatment effect for mothers of children aged 0-1 years: 0.071***
Difference in treatment effect for mothers of children aged 2-3 years: -0.000
Difference in treatment effect for mothers of children aged 4-6 years: 0.021**
Difference in treatment effect for mothers of children aged 13-17 years: 0.012
Source: Own calculations based on Polish Labor Force Survey data.
R1: The effect only for women with two children
As a first robustness check, we compare changes in participation rates among women with two children (the treated group) to changes among childless women, leaving out women with one child, whose assignment to the proper group is more challenging. Table 6 summarizes the results, which are statistically significant and even larger for single women than in the baseline.
R2: Flow analysis
We make use of the panel dimension of our data (available only as 1-year transitions) and investigate the impact of the "Family 500+" benefit on labor market withdrawal, that is, the flow from activity to inactivity, rather than on the level of activity, thus varying the outcome variable. In particular, we compare the yearly flows from activity to inactivity. Table 7 summarizes the results, which point to statistically significantly higher labor market withdrawal rates for women with children, in particular single mothers.
R3: Modifying the control/treatment group assignment for mothers of one child
To test how the assignment of women with one child to the treatment and control groups affects our results, we redefine these groups in the following way. We define the treatment group as women with two children and those women with one child who are eligible for the "Family 500+" transfer. Because there is no variable that would allow us to directly identify recipients of the "Family 500+" benefit in the data for 2016, we derived eligibility from other information, namely whether a woman declares receiving a social benefit in the form of family benefits or social assistance, as this implies eligibility for the 500+ benefit as well. The control group includes mothers with one child who do not report receipt of any social assistance benefits; most of them will not be eligible for the 500+ transfer. This approach allows us to gauge differences in labor market behavior across eligible and ineligible mothers, rather than comparing mothers with childless women, and provides an additional way to test the robustness of our results. However, because the eligibility ceiling for social assistance is lower than that for the "Family 500+" benefit, mothers with household income that falls between those two ceilings will be wrongly assigned to the control group. That said, the two income ceilings are close in the 2016 and 2017 data and, therefore, the corresponding bias should be limited. According to our estimates based on 2016 Household Budget Survey data, wrong assignment should concern around 12% of households with one child. Furthermore, we can only use the social assistance information for this assignment. Results are reported in Table 8.
Table 6. The effect of child benefits on labor force participation of mothers with two children, separately for partnered and single women.
R4: Difference-in-differences framework with a matching procedure
We use the previous difference-in-differences and flow analysis framework, but this time, to increase the comparability of individuals across the treated and control groups and to lower the potential selection bias, we employ a kernel propensity score matching technique (Blundell and Dias, 2009). For each individual, we estimate the probability that she would be in the treated group based on the socioeconomic characteristics described in Table 9.
This probability is referred to as the propensity score. For each treated subject, we derive a weighted average of the outcomes of all individuals in the control group, with weights based on the distance of their propensity scores from that of the treated individual. The highest weight is given to controls with propensity scores closest to that of the treated unit. Once we weight the covariates based on the propensity score matching technique, the differences in means between the treated and the control groups become statistically insignificant for all variables, substantially reducing the selection bias.
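A minimal sketch of the kernel-weighting step is shown below. The paper does not report the kernel function, bandwidth or covariates used, so the Gaussian kernel, the bandwidth of 0.06 and the column names (including the hypothetical "withdrew" flow indicator) are illustrative assumptions; the paper additionally combines these weights with the difference-in-differences framework, which the sketch does not reproduce.

```python
import numpy as np
import statsmodels.formula.api as smf

# Propensity score: probability of being in the treated group given observed covariates
ps_fit = smf.logit(
    "treated ~ C(education) + C(age_group) + C(place_of_residence) + married",
    data=df,
).fit(disp=0)
df["pscore"] = ps_fit.predict(df)

treated = df[df["treated"] == 1]
controls = df[df["treated"] == 0]

def kernel_counterfactual(p, outcomes, pscores, bandwidth=0.06):
    """Weighted average of control outcomes, weights decaying with propensity-score distance."""
    w = np.exp(-0.5 * ((pscores - p) / bandwidth) ** 2)
    return np.average(outcomes, weights=w)

# Matched (counterfactual) withdrawal rate for each treated woman, then the raw ATT
matched = treated["pscore"].apply(
    kernel_counterfactual, args=(controls["withdrew"].values, controls["pscore"].values)
)
att = (treated["withdrew"] - matched).mean()
print(att)
```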
The estimated treatment effects are displayed in Table 10. These effects are positive and statistically significant. The results suggest that, after the "Family 500+" program was introduced, the gap in the quarterly withdrawal rate between the treated and the control groups was 2.2 percentage points higher than it was a year earlier for partnered women, and 1.4 percentage points higher for single women. This is a large effect, considering that the average withdrawal rates vary between 1% and 4%. In the second half of 2016, the average quarterly withdrawal rate for the treated group was, on average, 3.9%. Our results imply that it would have been less than half of this figure had the "Family 500+" benefit not been introduced. In absolute terms, this suggests that, on average, 50,000-54,000 women withdrew from the labor market in the second half of 2016 due to the "Family 500+" benefit. This is compatible with the estimates obtained in the first part of our analysis.
Table 8. The effect of child benefits on labor market withdrawal rates, separately for partnered and single women (women aged 20-49 years with one or two children)
Socioeconomic variables: Partnered women / Single women
Treatment effect (q): 0.016** / 0.07
Observations: 10,310 / 6,322
R-squared: 0.02 / 0.045
Note: Robust standard errors were computed. Significance levels: ***0.01, **0.05, and *0.1. Compared to the main specification, we use a more precise assignment of women with one child to the control/treatment groups.
Source: Own calculations based on Polish Labor Force Survey data.
Table 9. Balancing t-test of differences in means of covariates between the control and treated groups, 2015.
Source: Own calculations based on Polish Labor Force Survey data.
R5: The effect on employment rate
As a last robustness check, we use our baseline model but look at employment versus nonemployment (unemployment or inactivity) as the outcome variable, rather than activity versus inactivity. We might expect that most of the negative impact of the "Family 500+" benefit concerned unemployed women, who stopped searching for a job, while the effect on employed women would be weaker. This turns out not to be the case: the effect on employment (versus nonemployment) is even a bit stronger than the results for inactivity (Table 11 summarizes the results).
Conclusions
The results presented in this paper suggest that the introduction of the child benefit in 2016 in Poland had a significantly negative impact on the labor force participation and employment of eligible mothers. This finding is robust to changes in the precise outcome variable we look at (labor force participation, employment, or labor market withdrawal), to different definitions of the treated and control groups in our difference-in-differences methodology, and to different estimation approaches. The effects are sizeable, implying that labor force participation and employment of eligible mothers would have been noticeably higher had the benefit not been introduced.
Table 10. The impact of child benefits on labor market withdrawal rates - results from a difference-in-differences estimation with kernel propensity score matching
Socioeconomic variables: Partnered women / Single women
Treatment effect (q): 0.022*** / 0.014***
Observations: 10,310 / 6,311
Note: The coefficients for partnered women and single women have significance levels of 0.002 and 0.001, respectively. Source: Own calculations based on Polish Labor Force Survey data.
Several advanced countries are looking for ways to improve low fertility rates and tackle persistent poverty among families with children, and many of them are turning to a redesign of family support and childcare benefits. We hope the present study may be informative for the choice of policy design. Our finding of a sizeable negative effect of the Polish child benefit on female labor force participation, despite a booming labor market and rising wages, suggests that short- and long-term labor market effects need to be considered in the cost-benefit analysis of such public policies.
In terms of questions for further research, it will be interesting to study, at a later point in time, the extent to which the new child benefit may lengthen mothers' career interruptions and the ensuing impact on their earnings prospects when they return to the labor market. Furthermore, whether the new benefit positively influences fertility in Poland, as intended, would be an interesting question for future research, as many countries struggle to counteract demographic change and raise low birth rates.
The size of the effect of the "Family 500+" benefit on labor supply may be influenced by the existing tax disincentives for second earners, insufficient childcare coverage, gender pay gaps, and gendered norms. Studying how these features shape the impact of child benefits on labor supply would shed light on policies that can help alleviate any unwanted side effects of such transfers. Finally, the child benefits might also influence men's labor supply and informality, which would be interesting fields for future study.
Endnotes
a The benefit became universal as of July 2019; therefore, our study focuses on the period directly prior to the 2016 policy implementation.
b We have also verified whether there have been any important, abrupt changes in coverage of childcare facilities and educational enrollment around the time the benefit was introduced, as this could be a potential confounding factor undermining our empirical strategy. Poland has experienced a steady growth in the coverage of crèches and kindergartens since the early 2010s, with no break around the reform.
There were no changes in school enrollment either.
Availability of data and material
This study is based on data from Statistics Poland, namely, microdata from Labor Force Survey 2010-2017, obtained upon agreement, which prohibits sharing the data. Statistics Poland has no responsibility for the results and the conclusions, which are those of the authors. The usual disclaimers apply. All errors are ours.
Table A1. The effect of child benefits on labor force participation of mothers, for women aged 20-49 years with one or two children: full set of estimated coefficients
Socioeconomic variables: Partnered women / Single women
Model with interactions for educational level (base: tertiary)
Treatment effect for tertiary education: -0.008
Difference in treatment effect for secondary education: -0.013
Difference in treatment effect for basic vocational or lower education: -0.013
Model with interactions for place of residence (base: city with >100,000 inhabitants)
Treatment effect for cities with >100,000 inhabitants: -0.008
Difference in treatment effect for cities with 20,000-100,000 inhabitants: -0.001
Difference in treatment effect for cities with <20,000 inhabitants: -0.024
Difference in treatment effect for rural areas: -0.016
Model with interactions for age (base: 30-39 years)
Treatment effect for age 30-39 years: -0.011
Difference in treatment effect for age 20-29 years: -0.006
Difference in treatment effect for age 40-49 years: -0.008
Model with interactions for number of children (base: two)
Treatment effect for mothers of two children: -0.048***
Difference in treatment effect for mothers of one child: 0.040**
Model with interactions for age of the youngest child (base: 7-12 years)
Treatment effect for mothers of children aged 7-12 years: -0.036***
Difference in treatment effect for mothers of children aged 0-1 years: 0.067**
Difference in treatment effect for mothers of children aged 2-3 years: -0.005
Difference in treatment effect for mothers of children aged 4-6 years: 0.023
Difference in treatment effect for mothers of children aged 13-17 years: 0.025
Note: Robust standard errors were computed. Significance levels: ***0.01, **0.05, and *0.1.
Source: Own calculations based on Polish Labor Force Survey data.
"Economics"
] |
Enzymatic Synthesis of Human Milk Fat Substitute - A Review on Technological Approaches
SUMMARY
Human milk fat substitute (HMFS) is a structured lipid designed to resemble human milk fat. It contains 60-70 % palmitic acid at the sn-2 position and unsaturated fatty acids at the sn-1,3 positions of its triacylglycerol structures. HMFS is synthesized by the enzymatic interesterification of vegetable oils, animal fats or a blend of oils. The efficiency of HMFS synthesis can be enhanced through the selection of appropriate substrates, enzymes and reaction methods. This review focuses on the synthesis of HMFS by lipase-catalyzed interesterification and provides a detailed overview of the biocatalysts, substrates, synthesis methods, factors influencing the synthesis and purification processes for HMFS production. Major challenges and future research directions in the synthesis of HMFS are also discussed. This review can serve as a source of information for developing future strategies for producing HMFS.
The composition and distribution of fatty acids in HMF are used as a basis to develop alternative fats as ingredients for infant formulas. As sources of nutrients, infant formulas are an alternative to human milk when a nursing mother does not produce enough breast milk (13,14). Fats commonly used in infant formulas are vegetable oils or animal fats, especially bovine milk fat (3). However, the composition and distribution of fatty acids in vegetable oils and mammalian milk fats differ from those of HMF (4). In vegetable oils, palmitic acid is mainly (>80 %) esterified at the sn-1,3 positions (9). Meanwhile, animal fats such as cow's milk fat have a palmitic acid content similar to that of HMF, but the percentage of palmitic acid esterified at the sn-2 position is only about 40 % (4,14). Therefore, vegetable oils, animal fats or blends of oils are modified to mimic the composition and distribution of fatty acids found in HMF (15,16). This modified fat is the so-called human milk fat substitute (HMFS) (14,17).
Synthesis of HMFS is conducted by the enzymatic interesterification of oils and fats. Enzymatic interesterification operates at relatively low temperatures and is considered a cost-effective and environmentally friendly method (19). The interesterification utilizes lipase as a biocatalyst, whose specificity and selectivity allow the desired lipids to be produced with a relatively low amount of by-products (20). Thus, the changes in the structure of TAGs can be specifically directed to the sn-1,3 positions, the sn-2 position or an unspecified position (21-25).
The development of structured lipids using enzymatic process technology faces several challenges, especially in achieving higher catalytic efficiency and enzyme stability, which are important for overall productivity (26). HMFS containing more than 70 % palmitic acid at the sn-2 position can be produced by acidolysis in a solvent system between tripalmitin and a mixture of hazelnut oil fatty acids and stearic acid using Lipozyme RM IM (15), or between tripalmitin and fatty acids from hazelnut oil and γ-linolenic acid (GLA) using Lipozyme RM IM and Lipozyme TL IM (16). He et al. (6) reported that acidolysis of TAGs from Nannochloropsis oculata and fatty acids from Isochrysis galbana using Novozyme 435, Lipozyme TL IM, Lipozyme RM IM and recombinant Candida antarctica lipase B (recombinant CAL-B) in a solvent-free system produced HMFS containing 59.38-68.13 % palmitic acid at the sn-2 position.
Reported studies on HMFS production have explored the use of new oils and fats, more cost-effective catalysts, synthesis methods, reactor configurations and purification processes. Wei et al. (4) reviewed the achievements and trends in HMFS development, focusing on the nutritional bases, preparation methods and applications of HMFS. Complementing that comprehensive review, the present work focuses on the use of lipase as a biocatalyst and on the factors that affect lipase-catalyzed synthesis of HMFS. It starts with the biocatalysts used for HMFS production, followed by substrates, methods and reactor configurations, factors influencing the synthesis, and purification of HMFS, with the specific objective of increasing the efficiency of HMFS synthesis. Recent developments in HMFS production, including challenges and opportunities for future research, are also presented.
LIPASE FOR HMFS SYNTHESIS
Lipase (triacylglycerol acyl-hydrolase, EC 3.1.1.3) is commonly used for oil or fat hydrolysis. In non-aqueous media, lipase can also catalyze esterification, acidolysis, alcoholysis and interesterification (20,27-29). Lipase-catalyzed interesterification involves reversible, simultaneous hydrolysis and esterification reactions (30). A small amount of water is important in non-aqueous enzymatic catalysis for maintaining the active conformation of the enzyme through non-covalent interactions (31). Excess water has to be removed to shift the reaction from hydrolysis towards esterification, thus enhancing the reaction yield. When hydrolysis prevails over esterification, by-products such as glycerol, free fatty acids (FFA), monoacylglycerols (MAG) and diacylglycerols (DAG) are obtained, which eventually hampers the separation process.
As part of non-aqueous reaction, the esterification of HMFS can be carried out by lipase as the biocatalyst. The sources of lipase are mostly microorganisms. The commercial lipases available on the market and mostly studied in recent years for the production of HMFS are derived from Rhizomucor miehei, Thermomyces lanuginosa, Candida antarctica, Candida parapsilosis, recombinant lipase B from Candida antarctica, Candida lipolytica, Candida sp. 99-125, Rhizopus oryzae, Alcaligenes sp. and Mucor miehei (9,24).
Lipases with regiospecificity and regioselectivity are of particular interest, as the reaction yield can be tuned through these properties. Additionally, the use of an immobilized lipase whose biocatalytic activity is maintained at an industrial scale is required for multiple reuses, ensuring the economic viability of the process (32,33) and thus lowering production costs (34,35). An immobilized lipase sometimes has a higher stability than the freely suspended (native) enzyme (36).
Selectivity and/or specificity of lipases as biocatalysts for HMFS synthesis
Compared to chemical catalysts, lipases have functional properties: (i) substrate specificity, i.e. the ability to hydrolyse preferentially a type of acylglycerol, (ii) fatty acid specificity or typoselectivity, i.e. the ability to target a certain fatty acid or group of fatty acids, (iii) positional specificity or regioselectivity, i.e. the ability to distinguish the two external positions of the TAG glycerol backbone, and (iv) stereospecificity, i.e. the ability to distinguish between sn-1 and sn-3 positions of TAG molecule (27). The incorporation of fatty acids into a TAG structure is influenced by many factors, including the geometry of the binding sites of the lipases, free energy changes between the substrate and products, variation of pH values, effect of the chain length of fatty acids on the solubility of water and the physical state (24).
Novozyme 435 is mostly used for HMFS synthesis in the interesterification of oils and fats that improves palmitic acid content at the sn-2 position with donors such as palmitic acid, ethyl palmitate or palm oil fractions. Generally, palm oil fractions have high palmitic acid content distributed at the sn-1,3 positions (64). The incorporation of fatty acids by acidolysis or transesterification using Novozyme 435 is affected by substrates. Novozyme 435 is a highly versatile catalyst that catalyzes a wide variety of different substrates due to its high enantioselectivity (60). Robles et al. (65) used Novozyme 435 for acidolysis of tuna fish oil and palmitic acid, and the produced TAG contained amount of substance fraction x(palmitic acid)=57 % and 17 % DHA at sn-2 position. Turan et al. (66) also used Novozyme 435 in acidolysis and transesterification reactions between hazelnut oil and palmitic acid or ethyl palmitate in a solvent-free system. The optimum conditions were hazelnut/ethyl palmitate at a molar ratio 1:6, temperature 65 °C and reaction time 17 h. Hereby, HMFS with x(palmitic acid)=48.6 % and 35.5 % palmitic acid at sn-2 position was obtained. Novozyme 435 is used in acidolysis of palm oil and a mixture of DHA and ARA to produce HMFS with 17.20 % DHA+ARA incorporated at sn-2 position (67). Acidolysis of palm olein and a mixture of DHA, GLA and palmitic acid using Novozyme 435 produced HMFS with 35.11 % palmitic acid at the sn-2 position (68). Novozyme 435 is also used in transesterification of a mixture of palm stearin, palm kernel oil, soybean oil, olive oil and tuna fish oil to produce HMFS with fatty acid composition resembling HMF (69).
Reusability of lipase
Enzymes are immobilized to prevent denaturation and leakage, so that the number of reaction batches or the duration of synthesis can be increased. Enzymes can be immobilized through adsorption, entrapment, covalent coupling or cross-linking (36). The enzyme immobilization yields (i.e. loading and recovered activity) strongly depend on the properties of the solid support, such as the surface area, the number of accessible binding sites, porosity and pore size (33). In addition, the hydrophilicity of the enzyme support is a factor that affects the reaction performance, and the hydrophilicity of the support could be a beneficial side effect of the immobilization (70).
The reusability of an immobilized lipase is a very important issue when evaluating its operational stability (6,60); it is a major factor in determining the suitability of its use in different industries (71). Table 1 (6,32,33,37,39,40,60,61,71) shows the reusability of lipases for HMFS synthesis. It depends on the immobilization technique, the inherent thermal properties of the enzyme, the reaction temperature and the operational time. A gradual decrease of enzyme activity may be observed after several reaction batches. This is due to denaturation (72) and/or loss of the immobilized lipase during the reaction (71). In addition, the loss of enzyme activity may be due to progressive dehydration occurring during the reaction (33). Multiple uses of immobilized lipases can be expected when the support is constructed so that it protects the enzyme from mechanical inactivation and simultaneously prevents lipase leakage (73).
Zheng et al. (71) reported that Candida lipolytica lipase immobilized on magnetic multi-walled carbon nanotubes (CLL@mMWCNTs) had better activity and stability than Lipozyme RM IM and Lipozyme TL IM in the interesterification between tripalmitin and oleic acid. The reusability of CLL@mMWCNTs was also higher than that of Lipozyme RM IM, as shown by a 1.5-fold higher OPO content than with Lipozyme RM IM after 20 reuse cycles (each cycle lasting 2 h). Immobilization of C. lipolytica lipase on mMWCNTs via hydrophobic and cation-exchange interactions prevented the extensive conformational changes typical of thermal denaturation (71). Tecelão et al. (33) reported that the performance of Rhizopus oryzae lipase immobilized on Accurel® MP 1000 or Lewatit® VP OC 1600 was about 4-fold better than on Eupergit® C with regard to oleic acid incorporation into tripalmitin. Rhizopus oryzae lipase is immobilized on Accurel® MP 1000 and Lewatit® VP OC 1600 by physical adsorption; after the immobilization, glutaraldehyde is added to promote stable cross-links between the lipase and the matrix, as well as intermolecular bonds between the enzyme molecules. The immobilization of R. oryzae lipase on Eupergit® C can also be performed through direct enzyme binding to the support via oxirane groups. However, enzymes immobilized on Eupergit® C (33) through their different functional groups (amino, sulfhydryl, hydroxyl or phenolic) can have the substrate access to the enzyme active site blocked, or can even undergo denaturation (33). In conclusion, as reported by Idris and Bukhari (74), the materials and techniques used for immobilization affect the conformational structure of enzymes and thereby their catalytic properties.
The type of substrate is one of the important factors in the synthesis of HMFS. The composition of the raw material of the substrate that undergoes the interesterification process in the synthesis of HMFS has a significant influence on the final product. In the synthesis of HMFS with high content of palmitic acid at sn-2 position, it is better to use a substrate containing high content of palmitic acid at that position.
Fractionated palm stearin has also been used as a substrate (37). The substrate melting point influences the enzymatic interesterification towards the optimal target product. Lee et al. (81) reported that transesterification between lard (27.1 % palmitic acid) and olive oil (73.3 % oleic acid) or camellia oil (81.6 % oleic acid) at 40 °C for 12 h using 8.33 % Lipozyme IM-20 in isooctane yielded HMFS with 12.9 or 15.4 % OPO, respectively. Transesterification of palm oil (44.3 % palmitic acid) with olive oil or camellia oil resulted in HMFS containing 21.8 or 25.2 % OPO. Despite the high palmitic acid content of lard at the sn-2 position, its interesterification produced a lower OPO content than that of palm oil. This is related to the low reaction temperature used (40 °C), which is below the melting point of lard (48 °C), so the solubility of lard in isooctane at 40 °C is low (81).
The one-step enzymatic process has been used in many studies due to its simplicity, but its drawbacks are: (i) difficulties in converting intermediate DAGs into desired HMFS resembling HMF, and (ii) complexity of purification due to the presence of by-products (24). To overcome these drawbacks, a multi-step enzymatic process such as alcoholysis followed by esterification has been proposed (9,24). The synthesis of HMFS via multi-step enzymatic process results in a higher OPO purity (74-95 %) than in one-step enzymatic process (about 43 %). However, this approach also has bottlenecks, especially the reaction complexity and high solvent consumption (45).
The two-step process for HMFS synthesis can be an alcoholysis route followed by an esterification reaction (42,93,94) (Table 5). The two-step process has been proposed for HMFS synthesis to overcome the drawbacks of acidolysis and transesterification, and it exploits the sn-1,3 regioselectivity of lipases (9,24). Two-step synthesis consists of the alcoholysis of TAG using an sn-1,3-specific lipase to produce sn-2 MAG rich in palmitic acid, followed by the esterification of this sn-2 MAG with FFAs (4,95) or esterified fatty acids (93). Generally, the final product of interesterification between sn-2 MAG rich in palmitic acid and oleic acid contains 92-94 % palmitic acid at the sn-2 position and 83-89 % oleic acid at the sn-1,3 positions, while the OPO yield reaches 70-72 % (9). Alcoholysis followed by esterification avoids acyl migration and gives a pure structured TAG (HMFS) (5,9,24). However, this process is not commonly used in industrial production due to the complexity of the steps, which increases the overall cost (9). Two-step synthesis of HMFS can also be carried out through two-step acidolysis (61,65) (Table 4). Esteban et al. (61) conducted acidolysis of palm stearin and palmitic acid at r=1:3 and 37 °C in a solvent system using Novozyme 435, which produced TAGs with a high palmitic acid content at the sn-2 position (74.5 %). After the first acidolysis, the obtained TAGs were used as intermediates for the second acidolysis with oleic acid at r=1:6 using R. oryzae, Mucor miehei, RM IM, TL IM and Alcaligenes sp. lipases. The final product contained 67.8 % palmitic acid at the sn-2 position and 67.2 % oleic acid at the sn-1,3 positions (61). In addition, Pina-Rodriguez and Akoh (96) carried out a two-step interesterification (transesterification followed by acidolysis) for the synthesis of a DHA-containing structured lipid from amaranth oil. First, a customized amaranth oil was produced by transesterification of amaranth oil and ethyl palmitate using Novozyme 435. The second step was acidolysis of the obtained oil with DHA using Lipozyme RM IM. The final product contained 28 % palmitic acid in total, with 33 % palmitic acid at the sn-2 position.
The interesterification for HMFS synthesis can be carried out in batch and continuous reactors (5). The batch reactor is easy to operate and suitable for small scale production. However, at the industrial scale, for an economical production process, continuous operation is preferred rather than batchwise operation (53,97). In continuous reactor system, such as continuous stirred tank reactor (CSTR), plug flow reactor (PFR) or packed bed reactor (PBR), substrate is continuously introduced into the reactor and the product is subsequently withdrawn (98). PBR is more suitable for industrial-scale production than CSTR (24).
The advantages of a PBR over a batch reactor for the production of structured lipids are the following (53): (i) the slow substrate flow through the enzyme column avoids damage to the enzyme structure and increases enzyme stability, (ii) production can be carried out continuously, and (iii) the occurrence of acyl migration due to excessive use of the enzyme is reduced. To some extent, continuous operation at a high volumetric flow rate is more advantageous than operation at a low volumetric flow rate: at a high flow rate, the possibility of acyl migration is reduced, thus increasing productivity (23). The acyl migration in a PBR is lower than in a stirred batch reactor (24). Nielsen et al. (97) reported that the reaction equilibrium in the acidolysis of lard and soybean oil fatty acids in a PBR was reached within a residence time of <1.5 h. Zou et al. (53) reported that Lipozyme RM IM could be used for 10 days in a PBR without a significant loss of activity in the interesterification between palm stearin and a mixture of stearic acid, myristic acid and fatty acids from rapeseed oil, sunflower oil and palm kernel oil. Wang et al. (40) also reported that the number of reuses of lipase in a packed bed reactor increased 2.25-fold compared to that in a batch reactor.
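As an illustration of how residence time relates to flow rate and throughput in a PBR, the following sketch computes the residence time from the bed void volume and the feed flow rate. All numbers are hypothetical and are not taken from the cited studies; they are chosen only so that the result is of the same order as the residence times reported above.

```python
# Hypothetical packed bed: 2.0 L bed volume, void fraction 0.35
bed_volume_l = 2.0
void_fraction = 0.35
flow_rate_l_per_h = 0.26                                   # volumetric feed flow rate

void_volume_l = bed_volume_l * void_fraction
residence_time_h = void_volume_l / flow_rate_l_per_h       # ~2.7 h for these assumed values
daily_throughput_l = flow_rate_l_per_h * 24                 # continuous operation

print(f"residence time: {residence_time_h:.1f} h, throughput: {daily_throughput_l:.1f} L/day")
```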
Table 3. Process conditions for the production of human milk fat substitute through acidolysis
FACTORS INFLUENCING HMFS SYNTHESIS
Some aspects considered for HMFS synthesis are biocatalyst concentration, reaction type, substrate composition and mode of operation (5). Table 3, Table 4 and Table 5 show selected works on HMFS synthesis using various substrates, enzymes and other relevant parameters for optimizing the process in order to obtain products that resemble HMF.
Effect of lipase concentration
Lipase concentration affects the rate of the interesterification reaction. The initial reaction rate increases with increasing lipase concentration due to the higher number of active-site pockets available for catalysis (71,93,94). Lipase concentration also affects the amount of DAG and the rate of acyl migration (51). A higher lipase concentration enhances the incorporation of the acyl donors (acyl migration) in acidolysis (39). Some of the published reports shown in Table 3, Table 4 and Table 5 are not comparable because the relevant reaction conditions (i.e. the enzyme activity and the amount of substrate) are not provided. It is worth mentioning that the lipase concentration must be optimized. To some extent, a progressive increase of lipase concentration promotes the synthesis of OPO by shortening the reaction time and weakening acyl migration (99). However, an excessive enzyme amount will favour the hydrolytic reaction over esterification.
Zou et al. (51) reported that, after a reaction time of 2 h in the acidolysis between basa catfish oil and fatty acids from sesame oil using 2 % Lipozyme RM IM, the content of sn-2 palmitate was 56 %.
Effect of moisture content
Enzyme inactivation due to dehydration sometimes causes poor interesterification (33). Lipase has high activity in nearly anhydrous to micro-aqueous systems and is typically activated at the oil-water interface (71). Hydrolysis is usually considered the rate-limiting reaction, in which water acts as a reactant. To some extent, increasing the moisture content increases the initial activity of the lipase. However, excessive water entails the formation of by-products (39). A small amount of water is important for the lipase to maintain its activity (i.e. to lubricate the enzyme conformation). Therefore, the amount of water must be controlled, especially during acidolysis (51).
Zheng et al. (71) reported that the OPO content reached a maximum conversion (43.9 %) at 2 % moisture content during the interesterification of tripalmitin and oleic acid; the conversion decreased as the moisture content increased further. In other studies, the addition of 1 % moisture in the acidolysis of lard and oleic acid increased the OPO yield from 52.8 to 55.3 %, whereas at 5 % moisture content the OPO content gradually decreased (32). Zou et al. (51) reported an optimum moisture content of about 0.24 % in the acidolysis between palm stearin and FFAs for HMFS synthesis. Thus, the range of water content in HMFS synthesis by enzymatic interesterification is 0.2-2 %.
Effect of solvent
Generally, lipase-catalyzed interesterification for HMFS synthesis can be performed in either solvent system (i.e. organic solvents) or solvent-free system. The solvent increases the solubility of high-melting-point reactants. Thus, the reaction can be operated at a lower temperature, which is beneficial for the enzyme stability. However, excessive solvent amount dilutes the reaction fluid and reduces the random access of substrate to the lipase active sites (94). Several factors must be considered when selecting a proper solvent for a particular enzymatic reaction including: (i) compatibility of the solvent with the reaction, (ii) solvent properties (density, viscosity, surface tension, toxicity, flammability), and (iii) cost. Lipase tends to be more active in n-hexane than in other solvents such as isooctane, acetone, petroleum ether, toluene, or ethyl acetate. n-Hexane plays a key role in increasing the solubility of non-polar substrates and shifting the reaction towards esterification rather than hydrolysis (24).
Palmitic acid-enriched TAG has a high melting point so it requires a higher temperature in the solvent-free reaction system in order to keep the substrate liquid during the reaction (61). Palm stearin and palmitic acid have high melting points so they are difficult to react without a solvent as they require a minimum temperature of 65 °C (37). Esteban et al. (61) reported that the incorporation of oleic acid at the sn-1,3 position was slightly lower in the solvent-free system (46.2 %) than in the solvent system (50.4 %) in the interesterification between palmitic acid-enriched TAG from palm stearin and oleic acid. It was caused by a lower reaction rate due to a lower mass transfer rate when no solvent is available. In addition, Cao et al. (100) reported that in acidolysis, the rate of acyl migration and the concentration of intermediate or side products (e.g. DAG and MAG) decreased significantly in the anhydrous reaction system.
Effect of substrate ratio
The interesterification reaction rate in HMFS synthesis depends on the substrate ratio (TAG to acyl donor) after the reaction equilibrium has been achieved (39,71). Increasing the amount of substance ratio of TAG to fatty acids drives the reaction towards equilibrium (32,39,101) and produces the desired incorporation of fatty acids into the TAG (6). An excess of TAG substrate reduces the availability of lipase active sites, while an excessive FFA amount acidifies the environment, increases the viscosity of the system, inhibits biocatalyst activity and reduces the mass transfer rate (71). A high TAG-to-fatty-acid ratio may increase the frequency of collisions between the enzyme and the substrates (102). The increase in palmitic acid content at the sn-2 position is greater when the amount of substance ratio of TAG to fatty acid is enhanced in the interesterification between palmitic acid-enriched TAG from palm stearin and oleic acid (61). The substrate ratio also affects the fatty acids at the sn-1,3 positions: increasing the substrate ratio decreased the saturated fatty acid content at the sn-1,3 positions in the acidolysis between a mixture of palm stearin and ARA oil with oleic acid (47). Bryś et al. (88) reported the transesterification between lard and milk thistle oil at mass ratios of 6:4 and 8:2 at 60 °C using 8 % Lipozyme RM IM. After 4 h at the substrate ratio of 8:2, HMFS with 21 % palmitic acid and about 75 % palmitic acid at the sn-2 position was obtained, while at the ratio of 6:4 the HMFS contained less than 70 % palmitic acid at the sn-2 position. In addition, Tecelão et al. (86) reported that the incorporation of oleic acid increased drastically (from 32 to 51 %) when the substrate ratio of tripalmitin to ethyl oleate was raised from 1:2 to 1:8. Zou et al. (52) reported that the optimum substrate ratio for the acidolysis between palm stearin and a mixture of stearic acid, myristic acid and FFAs from rapeseed oil, sunflower oil and palm kernel oil was 1:14.6, yielding HMFS with 29.7 % palmitic acid and 62.8 % palmitic acid at the sn-2 position. Generally, the amount of substrate ratio (i.e. tripalmitin, palm stearin, lard or catfish oil) to fatty acids in the interesterification for HMFS synthesis ranges from 1:2 to 1:14.
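To make the reported amount-of-substance (molar) ratios more concrete, the following sketch converts a molar TAG-to-fatty-acid ratio into a mass ratio, using tripalmitin and oleic acid as an example. The molar masses are standard values for these compounds; the 1:6 ratio is taken from the studies cited above, while the 100 g batch size is an arbitrary illustration.

```python
# Approximate molar masses (g/mol)
M_TRIPALMITIN = 807.3   # C51H98O6
M_OLEIC_ACID = 282.5    # C18H34O2

def acyl_donor_mass(tag_mass_g, molar_ratio, m_tag=M_TRIPALMITIN, m_fa=M_OLEIC_ACID):
    """Mass of free fatty acid required for a given TAG mass at a TAG:FA molar ratio of 1:molar_ratio."""
    return tag_mass_g / m_tag * molar_ratio * m_fa

# For 100 g tripalmitin at a 1:6 molar ratio, roughly 210 g oleic acid is required.
print(round(acyl_donor_mass(100.0, 6), 1))
```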
Effect of reaction temperature
The reaction temperature influences the subtle variations in the architecture/conformation of lipase and leads to thermal inactivation of lipase and reduction in the affinity between the substrate and the biocatalyst (103). A higher temperature enhances the mass transfer and, to some extent, increases the activity of lipase as well (94). In endothermic reactions, higher temperatures provide better results due to the shift in thermodynamic balance. At high temperatures, the operation of the process is also easy as the solubility of the reactants increases and the viscosity of the solution decreases (39). Moderately high temperatures can provide sufficient energy to overcome the reaction barrier, while too high temperatures can cause lipase thermal deactivation (104). Therefore, the reaction temperature should be considered as low as possible so that the reaction efficiency and product quality are ensured (51). The optimal temperature will vary with different lipase sources (6,105). The reaction temperature is positively correlated with acyl migration. It also has an effect on the acyl incorporation (106) in which high temperatures may facilitate acyl migration (39).
The OPO content reached a maximum (46.5 %) at a reaction temperature of 50 °C for the interesterification between tripalmitin and oleic acid using CLL@mMWCNTs. However, the OPO content decreased with the increase in reaction temperatures, especially above 50 °C (71). He et al. (6) reported that the highest amount of ω-3 PUFAs (13.92-17.12 %) in HMFS was obtained by interesterification between TAG from Nannochloropsis oculata and fatty acids from Isochrysis galbana using Novozyme 435, recombinant CAL-B lipase, Lipozyme TL IM and Lipozyme RM IM at reaction temperatures of 60, 50, 60 and 50 °C, respectively. Generally, the range of reaction temperatures for HMFS synthesis via enzymatic interesterification is 40-60 °C.
Effect of reaction time
The reaction yield in the synthesis of structured lipids is positively affected by an increase in reaction time (57,66). The reaction time in interesterification is governed by the reactor configuration (i.e. batch or continuous reactor) (40). Wang et al. (40) reported that the reaction time for HMFS synthesis via interesterification between tripalmitin and PUFAs from microalgal oil was shorter in a PBR (2.5 h) than in a batch reactor (7 h). Generally, the reaction time in a batch reactor is the factor that most strongly affects the increase in acyl migration and eventually results in the production of partial acylglycerols such as DAG and MAG; the acyl migration increases linearly with reaction time (59). In addition, the reaction temperature also affects the reaction time. Yang et al. (39) reported that, in the interesterification between lard and fatty acids from soybean oil, the reaction time needed to reach the incorporation of 20 % linoleic acid and 3 % linolenic acid decreased with increasing reaction temperature, from 5 h at 50 °C down to 2.4 h at 90 °C.
Bryś et al. (89) reported that transesterification between lard and milk thistle oil at a mass ratio 8:2 using 8 % Lipozyme RM IM at 70 °C yielded HMFS with above 70 % palmitic acid at the sn-2 position after 2 and 6 h, but only 53.4 % after 4 h. In addition, Bryś et al. (90) also reported transesterification between lard and rapeseed oil at a mass ratio 8:2 using 8 % Lipozyme RM IM at 70 °C for 4 h. The produced HMFS had 24.2 % palmitic acid and 41.6 % palmitic acid at the sn-2 position. On the other hand, after 8 and 24 h of reaction, HMFS had 34.9 and 26.4 % palmitic acid at the sn-2 position, respectively. The OPO content in the product of interesterification between tripalmitin-rich palm stearin and ethyl oleate in a batch process using Lipozyme TL IM decreased from 29.3 to 18.5 % as the reaction time increased from 3 to 12 h, respectively (59). In addition, Zou et al. (53) reported the interesterification between palm stearin and a mixture of stearic, myristic and fatty acids from rapeseed, sunflower and palm kernel oil, respectively, in PBR with the following reaction conditions: residence time 2.7 h, temperature 58 °C and amount of substrate ratio 1:9.5. Under these conditions, the contents of palmitic acid in TAGs and at sn-2 position were 28.8 and 53.2 %, respectively. Generally, the range of reaction time for HMFS synthesis via enzymatic interesterification in a batch process is 2-24 h, while in a continuous process it is 1-3 h.
PURIFICATION OF HMFS
The synthesis of structured lipids by the enzymatic interesterification produces TAGs, partial glycerides (DAG and MAG) and FFAs. The acidolysis between TAG and fatty acids gives products with a high FFA content. Products of acidolysis between palm stearin and palmitic acid at an amount of substance ratio 1:3 contain 50 % FFAs (37). The transesterification between TAG molecules gives products with low content of FFAs (0.5-7 %) (56,58,69). Thus, each type of enzymatic interesterification or utilization of different substrates can result in different complexity in the purification of HMFS. This complexity, as indicated earlier, depends on the number of by-products contained in the reaction mixture. Purification after HMFS synthesis is intended to increase TAG fraction by removing FFAs and partial glycerides. The removal of FFAs can be carried out by neutralization (57,61,62,65,82), liquid-liquid extraction (55,83) and evaporation using molecular distillation (45,48,50,51,53,84,85). Molecular distillation is also applied to remove both FFAs and partial glycerides simultaneously (48).
Neutralization is carried out through the saponification of FFAs using an alkaline solution such as KOH. The acylglycerol fraction is then extracted using hexane (57,61,62,65,82). Ilyasoglu (57) reported that the neutralization of the transesterification product of tripalmitin and a mixture of olive oil and flaxseed oil (1:1) (r=1:2.67) using 0.8 M KOH increased the TAG content up to 78 %. Robles et al. (65) also reported the neutralization of the acidolysis product of palm stearin rich in palmitic acid at the sn-2 position and oleic acid (r=1:6) using 0.5 M KOH at 37 °C; the TAG yield was up to 80 %. Esteban et al. (61) confirmed the neutralization of the acidolysis product of palm stearin rich in palmitic acid at the sn-2 position and oleic acid using 0.5 M KOH in the presence or absence of hexane. With solvent (at room temperature) and without solvent (at 50 °C), the neutralization can increase the TAG purity to 99 % with a yield of 96 %. Yuan et al. (55) reported the removal of FFAs from the interesterification product using liquid-liquid extraction with 85 % ethanol at a volume ratio of 1:1.
Separation using molecular distillation is based on the difference in vaporization temperatures of FFAs, partial glycerides and TAGs. Using molecular distillation, Qin et al. (45) purified the acidolysis product of 34L-leaf lard and camellia fatty acids (r=1:4). At an evaporation temperature of 180 °C and a pressure of 6.7-7.5 Pa, the TAGs were rich in OPO, with a purity of 91.39 % and a yield of 40.75 %. Zou et al. (50) also reported the purification of the product of acidolysis between the solid fraction of basa catfish oil and high oleic sunflower oil fatty acids (r=1:6). At an evaporation temperature of 185 °C and a pressure of 2 Pa, a TAG fraction with a yield of 95.7 % was obtained. A stepwise evaporation using molecular distillation is also possible for the purification of the interesterification product. Sørensen et al. (84) produced a TAG fraction of 31.3 % from the acidolysis between butterfat and a mixture of fatty acids from rapeseed oil and soybean oil (r=1:2); the conditions were a pressure of 0.1 Pa and evaporation temperatures in stages 1 and 2 of 90 and 185 °C, respectively. The ranges of evaporation temperatures and pressures of molecular distillation used to remove FFAs during HMFS purification are 180-185 °C and 0.1-7.5 Pa. In addition, the separation of TAGs from partial glycerides is carried out at an evaporation temperature of 230 °C and a pressure of 10 7 Pa (48).
In the two-step acidolysis (i.e. a multi-stage process), purification starts after the first acidolysis, to remove FFAs from the reaction mixture; after the second acidolysis, FFAs and DAGs are again removed from the product mixture. A single-step enzymatic process can also produce nearly pure HMFS. However, it is challenging to convert all of the intermediate DAGs formed during the reaction, and multiple purification steps are required to remove the by-products (24). The concentration of target TAGs containing palmitic acid at the sn-2 position in the final product can be increased by separating the other TAGs through fractionated crystallization (58,81,84). Sørensen et al. (84) reported that HMFS with 56 % palmitic acid at the sn-2 position was produced by fractionation of the acidolysis product of butterfat and a mixture of fatty acids from rapeseed and soybean oil. Also, the acidolysis product of the solid fractions from the fractionation of butterfat and a mixture of fatty acids from rapeseed and soybean oil produced HMFS with 47 % palmitic acid at the sn-2 position.
CURRENT DEVELOPMENT OF HMFS PRODUCTION
In the last two decades, HMFS has been developed from a wide variety of substrates and enzymes and under various reaction conditions. In general, the most studied type of HMFS is sn-2 palmitate (OPO) because this TAG is the major component of HMF. Thus, the main consideration in HMFS production is to have palmitic acid at the sn-2 position (107). OPO-enriched HMFS is produced from interesterification between palmitic acid-containing source (i.e. lard, tripalmitin, palm oil and its derivatives: palm stearin or palm olein, catfish oil, palmitic acid or ethyl palmitate) and oleic acid-containing sources (i.e. olive oil, high oleic sunflower oil, oleic acid or ethyl oleate).
A better understanding of the composition and structure of HMF leads to better HMFS investigations (9). Recently, Wang et al. (75) synthesized both OPL and OPO from palm stearin fractions. OPL synthesis has not received much attention. The OPO to OPL ratios in HMF range from 0.5 to 2.0 (108,109). Apart from sn-2 palmitate, HMF also contains PUFAs and MCFAs, which play an important role during the early human development (4,110).
HMFS enriched with long-chain polyunsaturated fatty acids can be synthesized from fish oil, algal oil, fungal oil, microbial oil, silkworm pupae oil, hazelnut oil, soybean oil, sunflower oil, ALA, GLA, DHA and ARA. Ghosh et al. (56) synthesized HMFS from palm stearin fractions and fish oil (r=2:1). A single-step enzymatic transesterification can produce HMFS similar to HMF when a suitable substrate ratio is used. For example, Zou et al. (91,92) reported a mixture of lard, sunflower oil, canola oil, palm kernel oil, palm oil, algal oil and microbial oil at a mass ratio of 1.00:0.10:0.50:0.13:0.12:0.02:0.02 for HMFS synthesis. This substrate mixture was transesterified at a temperature of 60 °C, a moisture content of 3.5 % (on the lipase mass basis), a reaction time of 3 h and 11 % Lipozyme RM IM (on the total substrate mass basis). The produced HMFS had a palmitic acid content of 20.1 %, with 38.2 % palmitic acid at the sn-2 position. The resulting HMFS had a high degree of similarity with HMF in the composition of total and sn-2 fatty acids, PUFA and TAG, with values of 92.5, 90.3, 61.5 and 71.9, respectively (91). Zou et al. (92) also used the substrate at the same mixture ratio, transesterified using Lipozyme RM IM in a PBR at 50 °C and a residence time of 1.5 h. The obtained HMFS had 39.2 % palmitic acid at the sn-2 position, 0.5 % ARA and 0.3 % DHA. Based on the TAG content and purity, the degree of similarity of this HMFS to HMF was 72.3.
At present, commercial HMFS for inclusion in infant formulas is successfully produced from various sources of oils and fats (4,5,9). The sn-2 palmitate is one of the structured TAGs that is generally supplemented into infant formulas (5,113).
OUTLOOK: CHALLENGES AND OPPORTUNITIES IN HMFS SYNTHESIS
Structured lipids are designed through the modification of oils and fats to obtain the desired nutritional or physicochemical properties for the food industry (9,114,115). HMFS is one of the infant formula ingredients that is continuously being developed to support infant growth according to the needs of each stage of the baby's age (i.e. infant and advanced formulas) and the baby's condition (normal, or premature and low-birth-mass babies).
The challenge for developing HMFS is the relatively high production cost. To enhance productivity (and thus reduce the overall production cost), the synthesis of HMFS is carried out through a careful selection of the substrate, enzyme, reactor configuration and reaction conditions. Generally, the optimum reaction conditions for HMFS synthesis are substrate (TAG to FFA) ratios of 1:2 to 1:14, temperatures of 40-60 °C, enzyme loads of 8-10 % and reaction times of 2-24 h in a batch process or 1-3 h in a continuous process. Large-scale production of HMFS through a one-stage process using tripalmitin is not attractive because of its high cost and the difficulty of obtaining products resembling HMF (24). On the other hand, a multistep reaction can produce a higher yield of HMFS with properties resembling HMF. However, the increase in reaction system complexity will also tend to increase downstream processing costs. It is worth mentioning that the production of HMFS in a solvent-free system is preferred in terms of food safety and costs (5).
One of the potential sources of substrates for HMFS synthesis is palm stearin because of its high palmitic acid content and relatively low price. However, the content of palmitic acid-rich TAGs at the sn-2 position of palm stearin needs to be increased through chemical interesterification (52,53), enzymatic interesterification (37,38) or fractionation (47,56,59,116), which is due to the nature of palm stearin that is abundant in oleic acid at the sn-2 position. The acidolysis between palm stearin and oleic acid using an sn-1,3-specific lipase will result in triolein, which is not preferred (75). The HMFS synthetic route using palm stearin has to be started with enhancement of palmitic acid-rich TAGs at the sn-2 position (116). Then, the fatty acids at the sn-1,3 positions from the palmitic acid-rich TAGs are replaced with acyl donors through acidolysis or transesterification. The common acyl donors are single fatty acids (oleic acid, ALA, GLA, EPA, DHA and ARA), FFA mixtures of vegetable oils (such as olive, camelina, rapeseed, sunflower or hazelnut oil), sources of ω-3 PUFAs (such as fish or microalgal oil) (5,9,24), or sources of MCFA (such as coconut or palm kernel oil). HMFS that is similar to HMF and has C8:0, C10:0, C12:0, C16:0, C18:1, C18:2, EPA, DHA, GLA and ARA can potentially be commercialized in the future.
In HMFS synthesis, the high ratio of acyl donors is not attractive due to the difficulties in the separation process (such as deacidification) (51). This entails high costs of post-process separation (32). The possibility of producing HMFS with a low ratio of acyl donors is very interesting. However, the main limitation in the reaction process is low mass transfer, thus, a lower reaction rate. To overcome this problem, an enzyme that has a higher specificity and stability is needed. Faustino et al. (77) reported that tripalmitin consumption of 62.7 % was achieved at r=1:1.2 at 65 °C using R. oryzae lipase immobilized on Lewatit VPOC 1600 during acidolysis between tripalmitin and FFAs from camelina oil. The isolation and genetic engineering of new lipases with better stability during operation at high temperatures are also of interest for future research (9,19). The mutagenesis techniques are also promising for creating novel lipases such as an sn-2-specific lipase (22), which would facilitate the production of OPO. In addition, the use of continuous systems other than PBR, such as enzymatic membrane reactor, is also interesting to be developed (117). In enzymatic membrane reactor system, a continuous reaction can be facilitated by having immobilized enzyme retained inside the reactor (117).
CONCLUSIONS
Human milk fat substitute (HMFS) is synthesized by the enzymatic interesterification of vegetable oils, animal fats or blends of oils. The main characteristic of HMFS is having triacylglycerols (TAGs) with palmitic acid located at the sn-2 position and unsaturated fatty acids at the sn-1,3 positions. Selection of substrates, enzymes, batch or continuous reactor configuration and reaction conditions needs to be considered to increase the overall production of HMFS. Lipozyme RM IM, Lipozyme TL IM and Novozyme 435 are widely used for the synthesis of HMFS. Lipozyme RM IM and Lipozyme TL IM are used as biocatalysts due to their regiospecificity towards the sn-1,3 positions. Generally, Lipozyme RM IM is used in acidolysis, whereas Lipozyme TL IM is used in transesterification. Novozyme 435 is used due to its regiospecificity towards the sn-2 position, which is beneficial for incorporating palmitic acid at the sn-2 position of the oils and fats, both in acidolysis and transesterification. Generally, the optimum reaction conditions for HMFS synthesis are substrate (TAG to fatty acid) ratios between 1:2 and 1:14, temperatures of 40-60 °C, enzyme loads of 8-10 %, moisture contents of 0.2-2 % and reaction times of 2-24 h in a batch process or 1-3 h in a continuous process. The separation of the interesterification product from FFAs in HMFS synthesis is carried out by neutralization using 0.5 M KOH (1.5 times the quantity of KOH required to neutralize the FFAs) or by molecular distillation at evaporation temperatures of 180-185 °C and pressures of 0.1-7.5 Pa.
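As a side note, the KOH dosing mentioned for the neutralization step is simple stoichiometric arithmetic. The sketch below illustrates it under assumptions that are not given in the source: a 1:1 KOH:FFA stoichiometry and a known molar amount of FFA in the crude product; the function name and example numbers are hypothetical.

```python
# Minimal sketch of the KOH dosing arithmetic behind the neutralization step.
# Assumptions (not from the source): 1:1 KOH:FFA stoichiometry and a known FFA amount;
# the 0.5 M concentration and 1.5x excess follow the conditions quoted above.

def koh_volume_litres(ffa_mol: float, koh_molarity: float = 0.5, excess: float = 1.5) -> float:
    """Volume of KOH solution needed to neutralize ffa_mol of free fatty acids."""
    koh_mol_needed = ffa_mol * excess      # 1.5 times the stoichiometric amount
    return koh_mol_needed / koh_molarity   # V = n / c

# Example: 0.02 mol FFA (roughly 5-6 g of oleic acid) in the crude product
print(f"{koh_volume_litres(0.02):.3f} L of 0.5 M KOH")   # -> 0.060 L
```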
"Biology"
] |
Depletable Peroxidase-like Activity of Fe3O4 Nanozymes Accompanied with Phase Transformation Triggered by Separate Migration of Electron and Iron Ion
Despite being the pioneering nanozymes, Fe3O4 nanoparticles still lack an explicit peroxidase (POD)-like catalytic mechanism. Although many studies have proposed surface Fe2+-induced Fenton-like reactions to account for their POD-like activity, few focus on the internal atomic changes and their contribution to the catalytic reaction. Here we report that Fe2+ within Fe3O4 transfers electrons to the surface via the Fe2+-O-Fe3+ chain, regenerating the surface Fe2+ and enabling a sustained POD-like catalytic reaction. This process occurs with the outward migration of excess oxidized Fe3+ from the lattice, which is a rate-limiting step. After prolonged catalysis, Fe3O4 nanozymes undergo a phase transformation to γ-Fe2O3 with a depletable POD-like activity. This self-depleting characteristic of nanozymes, with internal atoms involved in electron transfer and ion migration, is well validated on lithium iron phosphate nanoparticles. We reveal a key yet previously ignored issue concerning the necessity of considering both surface and internal atoms when designing, modulating, and applying nanozymes.
Introduction
Given the intricate structure-activity relationships and the restricted characterization techniques, however, fewer breakthroughs have been made in understanding the explicit mechanism of most nanozymes. To date, it is generally accepted that highly reactive hydroxyl radicals (•OH) generated by Fenton-like reactions (Equations 1-2) involving the surface Fe2+ under acidic conditions contribute to the POD-like activity of Fe3O4 NPs. 18,19 Similar to natural horseradish peroxidase (HRP), Fe3O4 nanozymes follow the ping-pong mechanism and Michaelis-Menten kinetics. Other individual studies have investigated the adsorption, activation, and desorption processes of substrates (e.g. H2O2 and TMB) on the surface of Fe3O4 at the atomic level based on density functional theory and developed descriptors to predict their POD-like activity. 14,15

Fe2+ + H2O2 → Fe3+ + •OH + OH−, k1 = 76 L·mol−1·s−1 (1)

The above mechanistic studies share a theoretical premise: only the surface-active sites play a decisive role in the enzyme-like property of nanozymes, since catalysis occurs mainly on the particle surface or interface. This view is now widely recognized and works for most types of nanozymes. 1,2,4,11,23,24 For example, in a recent controversial question regarding how to define "nanozyme concentration", Liu et al. argued that considering the whole particle or all atomic units within a particle as an enzyme unit would overestimate and underestimate the catalytic activity of nanozymes, respectively, because it is the surface atoms that are really the catalytically active sites. 24 However, in the Fenton-like reactions triggered by Fe3O4 nanozymes, we noticed that the reaction rate constant of Equation (1) is much higher than that of Equation (2), which implies that the surface-active Fe2+ is hardly recovered after being oxidized. This irreversible oxidation of surface Fe2+ prompts us to ponder: if only the surface atoms of nanozymes, particularly metal oxide nanozymes, act in enzyme-like catalysis, would these active sites be exhausted after long-term catalysis, rendering the nanozymes inactive? So far, nevertheless, no relevant studies can conclusively answer this crucial question.
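The argument that surface Fe2+ is hardly regenerated rests entirely on the gap between the rate constants of Equations (1) and (2). The minimal numerical illustration below uses k1 = 76 L·mol−1·s−1 as quoted in Equation (1); the value of k2 is not given in this excerpt and is taken here as an order-of-magnitude literature value for the re-reduction of Fe3+ by H2O2, i.e. an assumption.

```python
# Rough comparison of the two Fenton-like steps at equal iron and H2O2 concentrations.
# k1 is quoted in Equation (1); k2 is an assumed order-of-magnitude value, not from the text.
k1 = 76.0    # L mol^-1 s^-1, Fe2+ + H2O2 -> Fe3+ + •OH + OH-
k2 = 2e-3    # L mol^-1 s^-1, Fe3+ + H2O2 -> Fe2+ + •OOH + H+  (assumed)

h2o2 = 0.1   # mol/L, excess peroxide
fe = 1e-4    # mol/L, surface iron sites (illustrative)

r1 = k1 * fe * h2o2   # rate at which surface Fe2+ is consumed
r2 = k2 * fe * h2o2   # rate at which surface Fe2+ is regenerated via Eq. (2)

print(f"consumption/regeneration rate ratio ~ {r1 / r2:.0f}")   # ~38000 under these assumptions
```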
Here we propose a key yet previously ignored issue regarding the POD-like mechanism of nanozymes by characterizing the chemical composition and catalytic activity of recycled Fe3O4 NPs participating in cyclic POD-like catalysis. Both surface and interior Fe2+ were found to impart the POD-like property to Fe3O4 nanozymes. Namely, Fe2+ inside the particle transfers its electron to the surface layer, regenerating the surface Fe2+ and sustaining the catalytic reaction. This process is coupled with the outward migration of excess oxidized Fe3+, which is a rate-limiting step. As the catalysis continues, Fe3O4 is slowly oxidized into γ-Fe2O3, accompanied by depletion of the enzyme-like activity, similar to the conventional low-temperature oxidation of magnetite, only with a different electron receptor. This self-depleting characteristic of nanozymes, with internal atoms involved in electron transfer and ion migration, is further demonstrated by a typical model material, LiFePO4, which contains redox-active metal sites and mobile lithium ions (Li+) encapsulated in a rigid phosphate network. This paper reveals that internal atoms may also contribute to nanozyme-catalyzed reactions even though these reactions occur on the surface of NPs, which is thought-provoking when designing, regulating, and applying nanozymes.
Results
Synthesis and Characterization of IONPs. Near-spherical magnetite nanoparticles (Fe3O4 NPs) with an average diameter of 10.16 ± 0.12 nm (Supplementary Fig. 1a) were synthesized using the chemical co-precipitation method. 18 Maghemite (γ-Fe2O3) and hematite (α-Fe2O3) NPs were derived by calcining the Fe3O4 NP powder at 200 °C and 650 °C for 2 hours, respectively (Fig. 1a). XRD and Raman spectra (Supplementary Fig. 1b-c) show the successful synthesis of these three iron oxide NPs (IONPs). These IONPs were uniformly dispersed in an aqueous solution at pH 3 by ultrasonication (Supplementary Fig. 1d). To avoid affecting the enzyme-like activity, all particles were free of surface coating. Their POD-like activities were assessed using different colorimetric substrates, including TMB, ABTS, and OPD, in the presence of H2O2. The results show that their catalytic activity followed the order Fe3O4 NPs >> γ-Fe2O3 NPs > α-Fe2O3 NPs (Supplementary Fig. 2). To better quantify their POD-like activity, we calculated their specific activity (anano) according to the specified method, 26,27 obtaining 1.79, 0.44, and 0.03 U·mg−1, respectively (Fig. 1b). As previously reported, 10,18 the higher catalytic ability of Fe3O4 NPs originates from the Fenton-like reaction triggered by the surface Fe2+ (Supplementary Fig. 3). The negligible anano of α-Fe2O3 NPs compared with γ-Fe2O3 NPs is ascribed to the change of the inverse spinel structure caused by the higher calcination temperature.

Cyclic POD-like catalysis of Fe3O4 NPs. To investigate whether the surface Fe2+ of Fe3O4 NPs is depleted after participating in prolonged catalysis, we continuously increased the amount of the substrate TMB under sufficient H2O2, with the three as-synthesized IONPs acting as "continuous catalysts", and monitored the absorbance changes of the TMB oxidation products at 650 nm within 12 h. As shown in Supplementary Fig. 4, even though the TMB concentration was increased from 0.087 mM to 0.52 mM, the Fe3O4 NPs were still able to continuously and rapidly engage in the catalytic reaction for a long duration (≥ 12 h) without showing signs of depletion. We speculated on two possible reasons: 1) the amount of substrate is still too low to completely consume the surface-active Fe2+; 2) the Fe2+ within the Fe3O4 NPs provides the impetus for the continuous catalysis.
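For reference, the specific activity values quoted above (1.79, 0.44 and 0.03 U·mg−1) follow the standardized nanozyme assay cited in the text. The sketch below shows that arithmetic under assumed protocol parameters that are not stated in this excerpt: 1 U defined as 1 µmol of oxidized TMB per minute, a molar absorption coefficient for oxidized TMB of about 3.9 × 10^4 M−1·cm−1 near 650 nm, and a 1 cm optical path; the example numbers are hypothetical.

```python
# Minimal sketch of a nanozyme specific-activity (a_nano) calculation.
# Assumptions (not from the source): 1 U = 1 umol oxTMB per minute,
# epsilon(oxTMB) = 3.9e4 M^-1 cm^-1 near 650 nm, 1 cm optical path.

def specific_activity(dA_per_min: float, volume_L: float, mass_mg: float,
                      epsilon: float = 3.9e4, path_cm: float = 1.0) -> float:
    """Return a_nano in U/mg from the initial slope of the absorbance curve."""
    rate_M_per_min = dA_per_min / (epsilon * path_cm)     # Beer-Lambert: c = A / (eps * l)
    rate_umol_per_min = rate_M_per_min * volume_L * 1e6   # mol/L/min -> umol/min in the cuvette
    return rate_umol_per_min / mass_mg                    # U per mg of nanozyme

# Example with hypothetical numbers: slope 0.35 A/min, 200 uL reaction volume, 1 ug of NPs
print(f"{specific_activity(0.35, 200e-6, 1e-3):.2f} U/mg")
```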
Cyclic POD-like catalytic assays (Fig. 1c) were carried out as validation, since they provide sufficient substrate for the Fe3O4 NPs to keep exerting their POD-like capacity. We evaluated the anano of the recycled Fe3O4 NPs over five days. The results show that the catalytic ability of Fe3O4 NPs decreased to a level comparable to that of γ-Fe2O3 NPs after five days of cyclic catalysis, while the changes of γ-Fe2O3 NPs were negligible (Fig. 1d and Supplementary Fig. 5). This pushed us to wonder how the surface-active Fe2+ of Fe3O4 NPs alone could sustain the TMB oxidation for up to 100 hours. Conceivably, if only the surface-active sites are responsible for the enzyme-like performance, nanozymes will deactivate when the surface-active sites are exhausted.
To reveal the potential reasons for the sustained catalytic capacity of Fe3O4 NPs, we characterized the physicochemical properties of the recycled Fe3O4 NPs using different methodologies. The chemical states of Fe atoms in Fe3O4 NPs recycled from catalysis at days 0, 1, 3, and 5 were first analyzed by XPS. The X-ray penetration depth of the analyzed sample ranges from 2 to 10 nm. Since the average diameter of the as-synthesized Fe3O4 NPs is around 10 nm, the Fe valence state obtained from the Fe2p fitting analysis can be approximated as the oxidation state of individual Fe3O4 NPs. As shown in Fig. 1e, the Fe2+ in Fe3O4 NPs decreased from the original 30.9% to 0% with increasing days of cyclic catalysis, indicating that the interior Fe2+ was also oxidized to Fe3+ in the successive POD-like reactions.
Furthermore, in the Raman spectra of the recycled Fe3O4 NPs, the A1g mode band shifted from 660 cm−1 to 700 cm−1, corresponding to a transition from magnetite to maghemite (Fig. 1f). 28,29 Besides, this phase transformation was also confirmed by NEXAFS spectroscopy. Figure 1g shows the Fe L-edge NEXAFS spectra of the control Fe3O4 NPs and the recycled Fe3O4 NPs after 5 days of catalysis, in comparison with two reference spectra of FeSO4 and Fe2O3. Additionally, TEM images (Fig. 1h) and the XRD pattern (Supplementary Fig. 6) show that the influence of this transformation on the particle morphology, size, and lattice structure is negligible. Based on these characterization results, we conclude that both surface and internal Fe2+ can be oxidized into Fe3+, accompanied by a gradual phase transformation to γ-Fe2O3, while Fe3O4 nanozymes exert their POD-like activity.
Aeration oxidation kinetics of Fe3O4 NPs. We assume that the oxidation of Fe3O4 nanozymes induced by POD-like catalysis is comparable to the traditional low-temperature (< 200 °C) air oxidation of magnetite, since the crystal structure remains unchanged during both oxidation processes. 33 Both magnetite and maghemite contain 32 O atoms per unit cell. The difference is that the former contains 24 Fe atoms (16 Fe3+ and 8 Fe2+), while the latter has only 21.33 Fe atoms (all Fe3+). 33 Namely, once the 8 Fe2+ in magnetite are oxidized to 8 Fe3+, releasing 8 electrons, a charge imbalance occurs (Equation 3). To maintain electroneutrality, 2.67 Fe3+ have to migrate to the crystal surface, leaving cation vacancies (Equation 4). 34 The outward-moving Fe3+ coordinates with the surface-adsorbed O2−, which is ionized from adsorbed oxygen by the electrons generated by the oxidation of Fe2+. Therefore, the phase transformation of Fe3O4 to γ-Fe2O3 is a single-phase topotactic reaction accompanied by the separate migration of electrons and excess Fe3+. 34 Lattice defects have been reported to facilitate the outward migration of excess iron ions, thereby accelerating the oxidation process of magnetite. 35 As verification, we compared the aeration oxidation kinetics of Fe3O4 NPs synthesized by two methods with different levels of lattice defects. One was prepared by the chemical co-precipitation method as described above (Fig. 1a), which is considered to possess more lattice defects (named cc-Fe3O4 NPs). The other was prepared by the thermal decomposition method (Supplementary Fig. 7) with a relatively complete lattice structure (named TD-Fe3O4 NPs). 36 Both Fe3O4 NPs have a similar average particle size (~10 nm) without surface coating. Their aqueous solutions were stirred under the same aeration rate (with air) for 12 h at 120 °C. For a better comparison, the oxidation system of cc-Fe3O4 NPs (total 170 mL, 3.6 mg Fe/mL) was much larger than that of TD-Fe3O4 NPs (total 30 mL, 0.45 mg Fe/mL). This implies that an individual TD-Fe3O4 NP could gain more oxygen than a cc-Fe3O4 NP to keep it oxidized. As seen in Fig. 2a-b, both Fe3O4 NPs exhibited electronic transitions in the visible and NIR region due to intervalence charge transfer between Fe2+ and Fe3+, 37 which decreased gradually with oxidation time. At the end of the aeration oxidation, little absorption beyond 700 nm was observed, indicating a phase transformation from Fe3O4 NPs to γ-Fe2O3 NPs. 37 Besides, the color of both suspensions gradually changed from dark brown to reddish brown. Notably, despite the lower oxygen exposure for an individual cc-Fe3O4 NP, its NIR absorption decreased faster than that of TD-Fe3O4 NPs, especially during the initial oxidation phase (within three hours). These results confirm that more lattice defects favor the oxidation reaction of Fe3O4 NPs due to the faster electron and ion transfer.
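The per-unit-cell bookkeeping described above (8 Fe2+ oxidized, 2.67 Fe3+ migrating out, 21.33 Fe atoms remaining) can be checked with a few lines of arithmetic. The sketch below only restates the charge balance from the text; it is illustrative, not part of the authors' analysis.

```python
# Charge bookkeeping for the Fe3O4 -> gamma-Fe2O3 transformation per unit cell, following
# the description above: 32 O atoms; magnetite has 24 Fe (16 Fe3+ + 8 Fe2+), maghemite 21.33 Fe3+.

o_charge = 32 * (-2)              # total anion charge stays fixed at -64

fe_magnetite = 16 * 3 + 8 * 2     # +64: electroneutral magnetite cell
electrons_released = 8            # 8 Fe2+ -> 8 Fe3+, each releasing one electron

# After oxidation the cell would hold 24 Fe3+ (charge +72); to restore electroneutrality,
# some Fe3+ must leave the cell:
fe3_to_migrate = (24 * 3 + o_charge) / 3
print(fe3_to_migrate)             # 8/3 = 2.666..., i.e. the 2.67 Fe3+ quoted above
print(24 - fe3_to_migrate)        # 21.33 Fe atoms remain, as in maghemite
```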
Analogous to aerated oxidation, rapid electron and ion migration also facilitates the POD-like catalysis of Fe3O4 NPs, with the only difference that the electron receptor changes from O2 in the aerated oxidation reaction to H2O2 in the POD-like reaction. To prove this, the POD-like activity of cc-Fe3O4 NPs and TD-Fe3O4 NPs, as well as its variation with aerated oxidation time, was investigated. As seen in Supplementary Fig. 8, the POD-like activity of cc-Fe3O4 NPs was higher (2.8-fold) than that of TD-Fe3O4 NPs, despite TD-Fe3O4 NPs having a smaller hydrodynamic diameter and a negative surface potential contributing to a strong affinity for TMB. Aeration oxidation kinetic studies show that the POD-like activity of both Fe3O4 NPs decreased with oxidation time (Fig. 2c), along with slight fluctuations in hydrodynamic size and surface potential (Supplementary Fig. 9). However, the decline rate of cc-Fe3O4 NPs was faster than that of TD-Fe3O4 NPs, particularly in the initial oxidation stage. This phenomenon is consistent with the changes of the NIR spectra shown in Fig. 2a-b. These results further confirm that the more lattice defects the Fe3O4 NPs have, the easier the migration of excess Fe ions, and thus the higher the POD-like activity. It also means that Fe3O4 NPs with more defect sites are depleted more easily when involved in a POD-like reaction, due to their excellent catalytic capability.

LiFePO4 NPs as an ideal verification model. LiFePO4 undergoes redox reactions along with lithium insertion/extraction during the charge-discharge process (Equations 5-6) without changing its ordered olivine structure (Fig. 4a). 38 We speculate that the charging process of LiFePO4 resembles the oxidation process of Fe3O4, as both involve the oxidation of Fe2+ and the migration of internal ions, which motivated us to examine whether LiFePO4 NPs also have POD-like catalytic ability.
Rod-like LiFePO4 NPs with an average length of 321.9 nm and width of 172.2 nm (Fig. 4b) were successfully synthesized using the solvothermal method 38 and characterized by various methodologies (Supplementary Fig. 10 and Tables S1-2). As expected, the POD-like activity of LiFePO4 NPs was demonstrated with different chromogenic substrates including TMB, ABTS, and OPD (Fig. 4c and Supplementary Fig. 11). Their activity also shows pH, temperature and NP concentration dependence and follows Michaelis-Menten kinetics (Supplementary Fig. 12-13). The optimal pH is about 4.0. The ESR spectra show that •OH was produced from the decomposition of H2O2 catalyzed by LiFePO4 NPs in a time-dependent manner (Fig. 4d), which is similar to Fe3O4 NPs. We then compared the POD-like activity of LiFePO4 NPs and cc-Fe3O4 NPs using two oppositely charged substrates (TMB and ABTS) at pH 3.6. The results consistently show that LiFePO4 NPs had a higher catalytic ability than cc-Fe3O4 NPs (Supplementary Fig. 14), and the anano of LiFePO4 NPs was approximately four times that of cc-Fe3O4 NPs, despite their larger particle size (Fig. 4e).
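The Michaelis-Menten behavior noted above is usually quantified by fitting initial reaction rates against substrate concentration. The sketch below is a generic example of such a fit; the data points and parameter values are made up for illustration and are not the authors' measurements.

```python
# Generic Michaelis-Menten fit of initial rates versus substrate concentration,
# as commonly used to characterize nanozyme kinetics. Data points are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

s = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])   # substrate (e.g. TMB), mM
v = np.array([0.8, 1.4, 2.2, 3.0, 3.6, 4.0])    # initial rate, arbitrary units

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(4.0, 0.2))
print(f"Vmax ~ {vmax:.2f}, Km ~ {km:.2f} mM")
```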
These results imply that LiFePO4 NPs may share a similar POD-like catalytic mechanism with Fe3O4 NPs, differing in that the rapid Li+ migration in the lattice of LiFePO4 NPs confers on them a superior POD-like catalytic activity (Fig. 4f). After cyclic POD-like catalysis, the XPS peaks of the recycled LiFePO4 NPs were shifted toward higher binding energy (Fig. 5a), indicating the oxidation of Fe2+ within the NPs. In the XRD pattern (Fig. 5b), the residual LiFePO4 phase (marked with "о" in the yellow pattern) in the recycled NPs was negligible, proving that almost all LiFePO4 was delithiated and oxidized into FePO4 (marked with "+") after cyclic POD-like catalysis. This result was further confirmed by ICP analysis: the Li content in the recycled NPs was almost zero (Table 1). Moreover, the electrochemical properties of the recycled NPs were examined using cyclic voltammetry (CV) 43 at various scan rates in the voltage range of 0.2 to 0.5 V (Supplementary Fig. 15). The redox peak currents of the recycled NPs were dramatically reduced due to the absence of Li+ in their lattice (Fig. 5c).
This phase transformation, as expected, severely impaired the POD-like activity of the recycled LiFePO4 NPs (Fig. 5d), in agreement with the self-depleting characteristic of the Fe3O4 NPs described above.

Mobile Li ions as the limiting factor for the LiFePO4 NP-catalyzed POD-like reaction. In the field of sodium (Na)-ion batteries, the charge transfer resistances and lattice volume change upon Na+ migration are larger for NaFePO4 electrodes than for their Li equivalents, due to the larger ionic radius of Na (1.02 Å) compared with Li (0.76 Å). 44 Inspired by this, we partially replaced Li with Na in the lattice of LiFePO4 NPs to explore the potential effect of Na doping on their POD-like activity. Concretely, three NaLiFePO4 NPs with similar physicochemical properties but different Na-doping amounts were successfully synthesized (Supplementary Fig. 16 and Table S3). We then compared their POD-like activities under the same reaction conditions and found that the more Na doping, the lower the POD-like activity (Fig. 5e), indicating that the large Na+ radius hinders the free migration of Na+ and Li+ in the crystal, thereby impairing the electron transfer rate. We attempted to use K-doped LiFePO4 NPs as further proof; however, the large ionic radius of K (1.38 Å) makes it difficult to embed into the electrode materials (Supplementary Fig. 17 and Table S3), which is a common issue in K-ion batteries. 45 To further prove the decisive role of mobile Li+, we measured the POD-like activity of commercially available LiFePO4, Fe3(PO4)2, and FePO4 materials with similar hydrodynamic dimensions and surface negative potentials (Supplementary Fig. 18). The results show that their POD-like activity follows LiFePO4 >> Fe3(PO4)2 > FePO4 (Fig. 5f), directly confirming that the presence of Fe2+ alone in Fe3(PO4)2 cannot ensure superior catalytic performance; rather, the transportable Li+ contributes to the outstanding POD-like activity of LiFePO4.
Conclusion
In summary, a detailed mechanism of the POD-like activity of Fe3O4 nanozymes is elucidated by characterizing the chemical composition and catalytic activity of Fe3O4 NPs recycled from long-term POD-like catalysis. These studies demonstrate that all Fe2+ in Fe3O4 nanozymes contribute to their POD-like activity. The Fe2+ inside the particle transfers electrons to the surface, regenerating the surface Fe2+ that is directly involved in the sustained catalytic reaction. This process is accompanied by the outward migration of excess oxidized Fe3+ from the interior of the crystal, which is a rate-limiting step. Analogous to the low-temperature oxidation of magnetite, Fe3O4 NPs that participate in the POD-like reaction are eventually oxidized to γ-Fe2O3 NPs with a reduced POD-like capacity. Furthermore, this mechanism is well validated on LiFePO4 NPs. This work reveals the depletable characteristic of Fe3O4 nanozymes that differentiates them from natural enzymes and highlights the potential contribution of internal metal atoms to nanozyme-catalyzed reactions. Meanwhile, these findings bring new insights for the mechanistic study and rational design of nanozymes.
Fig. 1
Fig. 1 The synthesis of IONPs and cyclic POD-like catalysis. (a) Illustration of the synthesis process of IONPs. (b) The specific activity (anano) of the three IONPs with TMB as colorimetric substrate. (c) Diagram of the cyclic catalysis assay. (d) Kinetic study of the anano values of Fe3O4 NPs over the days of cyclic catalytic reaction. (e) The fitted Fe2p XPS spectra and (f) Raman spectra of Fe3O4 NPs recycled after catalysis on days 0, 1, 3, and 5. (g) The Fe L-edge
Fig. 2
Fig. 2 The aeration oxidation kinetics of Fe3O4 NPs. Variation of UV-vis-NIR absorption of (a) cc-Fe3O4 NPs and (b) TD-Fe3O4 NPs with aeration oxidation time. Insets are photos of the suspensions corresponding to oxidation times of 0, 0.5, 1, 3, 5, 8, 10, and 12 h. All spectra and photos were obtained at the same Fe concentration. (c) Changes in anano of the oxidized cc-Fe3O4 NPs and TD-Fe3O4 NPs during the aeration oxidation.
Fig. 3
Fig. 3 Schematic diagram of the catalytic mechanism of the POD-like activity for Fe3O4 NPs.
LiFePO4 − Li+ − e− → FePO4 (5)
FePO4 + Li+ + e− → LiFePO4 (6)
Fig. 4 LiFePO4 NPs as verification materials and their POD-like activity. (a) The crystal structure of LiFePO4 and FePO4 viewed along the a, b and c axes. The olivine structure is maintained during Li-ion insertion and extraction. (b) SEM image of as-synthesized LiFePO4 NPs. Inset is a photo of the LiFePO4 NP aqueous solution. (c) The POD-like activity of LiFePO4 NPs (6.25 µg Fe/mL) with TMB (1.7 mM) as colorimetric substrate in the presence of H2O2 (0.8 M) in 0.2 M acetate buffer (pH = 3.6). (d) ESR spectra of the spin adducts DMPO/•OH produced by LiFePO4 NPs (10 µg/mL) in the presence or absence of H2O2 (0.165 M) in 0.2 M acetate buffer (pH = 3.6). (e) Comparison of the anano of as-synthesized LiFePO4 NPs and cc-Fe3O4 NPs. (f) Diagram of the POD-like catalytic reaction process of LiFePO4 NPs and Fe3O4 NPs.
"Chemistry",
"Environmental Science",
"Materials Science"
] |
Ranking the Key Areas for Autonomous Proving Ground Development Using Pareto Analytic Hierarchy Process
Autonomous or highly automated road vehicles and all related technologies are under intensive research and development. Moreover, internationally a massive investment increase can be observed in the automotive industry. According to this megatrend, new automotive test tracks appear or older ones transform to be capable of testing and proving for autonomous vehicles. Therefore, the question emerges: what are the key areas for automated drive development, which must be financed in case of autonomous proving ground design? It is a real challenge to be able to make the right decisions due to a lack of numerous experiences in this field. In this research, experts of automated driving technology have been surveyed and their opinion and knowledge have been synthesized. As a strong purpose of gaining robust results, the conventional AHP has been amended by the Pareto approach to ensure that the derived weights correspond to the expert scoring intention so perfectly that it cannot be more improved. Since the non-Pareto optimal weight results might cause rank reversal in the final prioritization, the applied Pareto test guarantees that the final outcome reflects the expert evaluators’ incentive. The conducted analysis has indicated that the obtained results are robust not only from the sensitivity point of view but also from the Pareto optimality approach. The proposed hierarchical decision model is therefore applicable to assist decision making for autonomous proving ground developments. The main contribution of the article, however, is to present the first reliable prioritization of the autonomous proving ground elements to extend the body of professional knowledge.
I. INTRODUCTION
Thanks to the widespread automation in all fields of science and technology, road vehicles are also manufactured with a growing number of automated features or subsystems. This global trend is further amplified by the general motivation called sustainable transportation, which aims to reduce environmental impacts (energy consumption, emission, etc.), mitigate congestion, and improve social well-being [1]. Nowadays, it is an important research field to analyze the changes that will be experienced with autonomous driving [2], [3]. More attention has to be paid to the testing of the interaction between the different road users and, typically, the vulnerable road users [4], [5]. The most important expected outcome of automated cars is the improvement of road traffic safety. The majority of accidents (approx. 95%) are caused by human imperfection. By contrast, autonomous driving technology could eliminate 90% of road traffic accidents [6]. To reach future objectives it is necessary to increase the level of automation of road vehicles and road transport infrastructure. Automated vehicles with different levels of automation are already present and fully autonomous cars will appear soon in everyday transportation, i.e. Connected and Automated Vehicle (CAV) technology will fundamentally transform our life.
As the automation of road vehicles has a strong safety impact, it is of paramount importance to control the CAV development process from the perspective of technological regulation and law [7]. To guarantee the safe operation of these new technologies, novel testing and validation processes are needed [8]. There are several approaches to analyzing the safety properties of automated vehicles. For instance, EuroNCAP handles it as a new, more complex active safety system [9]. When speaking about CAV safety and its testing, it is also unavoidable to recall the importance of cyber security, e.g. [10] proposed a novel cyber-risk classification framework for CAVs. Besides, it will be necessary to define new standards to classify the complexity of automated or autonomous vehicles. The most famous standards are the SAE levels [11], and a new relevant ISO standard (ISO 26262-1:2018) was also recently introduced in this field [12]. Based on the experience of the independent and international organizations for standardization, national governments can also create new regulations for these new technologies. At the same time, the national regulations will need more international harmonization in the future [1]. Another important suggestion in the literature concerning the efficient international standardization of CAV technology is that autonomous driving should be integrated into mainstream education [13] for seamless adoption.
Obviously, the CAV revolution affects many areas of science and technology, i.e. control theory, artificial intelligence, transportation engineering, information technology, etc. However, the importance of the various fields is different regarding the CAV technology development. To determine the key areas in the development process is therefore a key problem. The right decision making of the relevant stakeholders (industry, government, authorities, national public tender system, research institutes) needs to be supported. To the best knowledge of the authors, none has approached the issue so far. Only three research articles were found close to this topic. Chen et al. [14] introduced an optimization method to design proving ground for CAVs, but only focused on defining necessary road assets without ranking the importance of the revealed elements. Zhankaziev et al. [15] introduced a testing architecture for testing and proving purposes of ITS (Intelligent Transportation Systems) and unmanned driving technologies. Again, this article provided a special range of functionalities without prioritization. Chen et al. [16] published a method to assess the capability of proving grounds. Through the proposed method a strong link between proving ground testing results of CAVs and their anticipated public street performance has been found. Although the articles above investigated the key functionalities of CAV proving grounds, they did not conduct extensive research to prioritize them, nor leveraged related expert knowledge. Accordingly, the goal of our article is to provide an efficient methodology to assist CAV related development, namely the planning of future automotive proving grounds which are specially dedicated for CAV testing and proving. As a main result, a hierarchical decision model is presented directly applicable for ranking future functionalities of CAV test tracks.
Besides the identified research gap, the article is also motivated by the favorable situation that a brand new automotive proving ground, called ZalaZone (https://ZalaZone.hu/en/), is underway in Hungary near the city of Zalaegerszeg. This test track is specially designed to be capable of serving the technological testing and proving processes of autonomous/highly automated vehicles. Moreover, the mission of ZalaZone is not limited to pure commercial use. It is also a major goal to lay the foundation for research and innovation activities in national and international cooperation with universities, research centers and industrial participants [17].
In all, the goal of the article is to introduce a novel method to determine and rank the key areas for CAV development, more specifically for CAV proving ground design. To this end, an efficient classification for all related areas is presented. Based on the classification, questionnaires have been worked out. The target group of the questionnaires consisted of academic people. The results of the questionnaires are evaluated by the Analytic Hierarchy Process (AHP), a well-known method for decision making based on multiple criteria [18]. The AHP technique was chosen as it is intensively applied in transport related decision making [19], [20] and [21]. Furthermore, since expert opinions have been acquired and because of the pioneer characteristics of the research, the robustness of the results plays a key role in drawing conclusions. Thus, the original AHP technique has been integrated with the Pareto approach to ensure that the eigenvector method has produced nonimprovable weight scores connected to the decision elements of autonomous testing.
The article is organized as follows. Section 2 presents the theoretical preliminaries applied later for decision making support. Section 3 contains the methodology used to analyze the features based on a questionnaire. Section 4 discusses the core outcomes of the Pareto AHP based analysis. Finally, research concluding remarks are provided in Section 5.
II. THE APPLIED METHODOLOGY: PARETO AHP
Since autonomous car testing is approached as a multicriteria decision making (MCDM) problem in this research, the appropriate methodological tool could be selected from MCDM techniques. One of the objectives was to synthesize strategic, tactical and operational issues for the analysis as well as general and more specific items. These divisional aspects could all be considered by a hierarchical decision structure proven to be comprehensible for expert evaluators. Consequently, Analytic Hierarchy Process seemed to be suitable for the analysis. However, based on a recent development of the method [22], [23], the original technique has been amended by an optimization process aiming to improve the gained weight scores by the Pareto principle, in terms of their approximation of the expert evaluation values. In this section the original AHP method and the improved Pareto Analytic Hierarchy Process (PAHP) model are also introduced briefly.
A. OVERVIEW ON ANALYTIC HIERARCHY PROCESS (AHP)
Analytic Hierarchy Process is based on the decision structure created from the decision criteria of a complex decision structure [18]. Criteria, sub-criteria, sub-sub-criteria, etc. are identified with the last level of the alternatives in the decision tree. The linkages of the elements are also important since they determine not only the pairwise comparisons in the procedure but also the final weights and alternative scores by considering the respective scores of the elements at previous level.
Let us denote by A(p) the pairwise comparison matrix of the alternatives with respect to criterion p, and by w(p) the weight vector calculated from A(p) by Saaty's eigenvector method [18] (other calculation methods also exist) based on the following equation:

A(p) w(p) = λ_max w(p), (1)

where λ_max denotes the maximum eigenvalue of matrix A(p). Eigenvector w is obtained as the principal right eigenvector of the matrix, normalized so that its coordinates sum to one:

w_i = v_i / Σ_j v_j, where A v = λ_max v. (2)

Let C further denote the pairwise comparison matrix of the criteria, with w_C being the weight vector belonging to C. Then the final evaluation scores of the alternatives, u(w), are obtained by:

u(w) = Σ_p (w_C)_p · w(p). (3)

Reciprocity (a_ji = 1/a_ij, where a_ii = 1) must be fulfilled for each pairwise comparison matrix (PCM), i.e. for C and every A(p). However, these experiential matrices are most likely not consistent, i.e. the consistency condition a_ik = a_ij · a_jk does not hold exactly, where i, j and k denote the rows and columns of the pairwise comparison matrix A or C.
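A minimal sketch of the eigenvector weighting in Formulas (1)-(2) is given below, assuming numpy; the 3 × 3 example matrix is illustrative and not taken from the survey data.

```python
# Minimal sketch of Saaty's eigenvector weighting for a pairwise comparison matrix.
import numpy as np

def ahp_weights(pcm: np.ndarray) -> np.ndarray:
    """Return the normalized principal right eigenvector of a reciprocal PCM."""
    eigvals, eigvecs = np.linalg.eig(pcm)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()

# Illustrative 3x3 criteria matrix on the Saaty scale (not from the survey)
C = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w_C = ahp_weights(C)
print(w_C)   # roughly [0.65, 0.23, 0.12]
```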
Thus, the consistency of the experiential matrices C and A(p) has to be checked by the consistency ratio (CR) defined in [24], which is acceptable when its value is smaller than 0.1. The CR is calculated as:

CR = CI / RI, (4)

where RI is the average consistency index CI of randomly generated PCMs of the same size, and CI is calculated as:

CI = (λ_max − n) / (n − 1), (5)

where n denotes the size of the matrix. The pairwise evaluation of the PCMs is generally done on the Saaty scale, in which '1' denotes equal importance of the elements, '3' means moderate importance of an element over another, '5' marks strong importance, '7' means very strong importance, while '9' denotes extreme superiority of an element over another. Intermediate values 2, 4, 6, 8 can also be applied to express superiority. To express inferiority, fractions (1/2, 1/3, ..., 1/9) can be used. If multiple evaluators participate in the process, the individual scores have to be aggregated. Aczél and Saaty [24] proved that, in contrast to the arithmetic mean, only the geometric mean ensures that no rank reversal occurs in the aggregation. The aggregation of the individual PCM scores is conducted as follows:

a_ij = ( Π_{g=1..l} a_ijg )^(1/l), (6)

where a_ijg denotes the entry in matrix position i, j filled in by the g-th decision maker and l denotes the total number of evaluators, while A is the resulting aggregated matrix, to which the eigenvector method can be applied and from which the final overall weight scores can be derived.
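The consistency check of Formulas (4)-(5) and the geometric-mean aggregation of Formula (6) translate into a few lines of numpy as well. The RI values below are the commonly used random indices, listed here as an assumption since the source does not tabulate them.

```python
# Consistency ratio and geometric-mean aggregation for AHP, as described above.
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}   # commonly used random indices (assumed)

def consistency_ratio(pcm: np.ndarray) -> float:
    n = pcm.shape[0]
    lam_max = np.max(np.linalg.eigvals(pcm).real)
    ci = (lam_max - n) / (n - 1)        # Formula (5)
    return ci / RI[n]                   # Formula (4); acceptable if below 0.1

def aggregate(pcms: list[np.ndarray]) -> np.ndarray:
    """Element-wise geometric mean of the evaluators' matrices (Formula 6)."""
    stacked = np.stack(pcms)
    return np.exp(np.log(stacked).mean(axis=0))

# A perfectly consistent 3x3 matrix gives CR = 0
demo = np.array([[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1.0]])
print(round(consistency_ratio(demo), 3))   # 0.0
```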
Conducting the sensitivity analysis is also part of the AHP procedure which enables the decision makers to check the robustness of the results by detecting the impact of slight changes of certain weight scores on the whole decision structure ranking.
B. THE APPLIED PARETO ANALYTIC HIERARCHY PROCESS (PAHP)
A reasonable expectation, both from the decision maker and the analyst, for any weight vector is that it could not be improved in a trivial way, namely such that every pairwise ratio is at least as close to the corresponding matrix element given by the decision maker and is strictly closer in at least one position. Formally, weight vector w is called Pareto optimal (or efficient) if no dominating weight vector w′ exists such that |a_ij − w′_i/w′_j| ≤ |a_ij − w_i/w_j| for all pairs of indices i, j, with strict inequality for at least one pair of indices i, j.
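The dominance relation in this definition can be checked directly between two candidate weight vectors; a naive sketch is shown below. Note that this only verifies dominance for a given pair of vectors; actually finding a dominating Pareto optimal vector requires the linear-programming approach of [22], which is not reproduced here.

```python
# Naive check of the Pareto dominance relation defined above: w_new dominates w_old
# if it approximates every a_ij at least as well and at least one entry strictly better.
import numpy as np

def dominates(pcm: np.ndarray, w_new: np.ndarray, w_old: np.ndarray) -> bool:
    n = pcm.shape[0]
    at_least_as_good, strictly_better = True, False
    for i in range(n):
        for j in range(n):
            err_new = abs(pcm[i, j] - w_new[i] / w_new[j])
            err_old = abs(pcm[i, j] - w_old[i] / w_old[j])
            if err_new > err_old:
                at_least_as_good = False
            elif err_new < err_old:
                strictly_better = True
    return at_least_as_good and strictly_better
```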
In the history of AHP applications, the eigenvector calculation of Saaty [24] (see Formula 2) has been assumed to fulfil the Pareto principle. Surprisingly, however, the eigenvector is not always Pareto optimal [25].
Let us consider the following 4 × 4 pairwise comparison matrix with acceptable inconsistency (CR < 0.1).
Its right eigenvector calculated by the Saaty method is [0.552625; 0.302041; 0.081295; 0.064038]^T. However, this eigenvector is not Pareto optimal, because in its neighborhood another (dominating) vector can be found for which the differences between the a_ij elements and the corresponding w_i/w_j ratios are at least as small in every position. For demonstration, we provide the w_i/w_j approximation of the a_ij elements of the 4 × 4 PCM, and it is visible that in the case of the dominating weight vector the approximation is better. For instance, the a_13 element is perfectly approximated by the dominating weight vector: 0.559862/0.07998022 = 7.
Finding a sufficient and necessary condition for the Pareto optimality of the eigenvector is a challenging open problem, but now we can state that there are non-Pareto cases which can be improved in terms of better approximation of the evaluators' intentions.
Even though Pareto optimization generally causes merely a slight modification in the weight vector coordinates, there is not only theoretical but also practical evidence that non-Pareto optimality in AHP might cause rank reversal, which means that the real intention of the modelling, i.e. the prioritization of the criteria or alternatives, can be biased. By simulation, [22] showed that for a 4 × 4 matrix the conventional AHP would have set up the ranking C2, C1, C3, C4; the Pareto optimal ranking, however, was C1, C2, C3, C4. [23] concluded that in practical AHP models, if the Pareto non-optimality is on the higher levels of the hierarchy, rank reversal is very probable even in the case of a slight modification of the weight vector coordinates. This is due to the hierarchical nature of the AHP, because the weights of the attributes in the lower levels are partly determined by the respective higher-level weight of the connected attribute.
Bozóki and Fülöp [22] developed a linear programming based algorithm to check whether a given weight vector is Pareto optimal, and if it is not, then a dominating Pareto optimal weight vector is found. Duleba and Moslem [23] demonstrated the Pareto optimality test in a real-world multicriteria decision problem and ran the optimization process and calculations on real scoring. In our research, this algorithm has been applied to all aggregated matrices to test if the intentions of the experts are well approximated. We find it important to include the test of Pareto optimality, because a non Pareto optimal weight vector cannot express, in the best possible way, the preferences of the decision maker. However, it has to be emphasized that if the eigenvector is not Pareto optimal, there might exist a dominating Pareto optimal weight vector in its very small neighborhood. Consequently, the improvement in the weight scores is in most of the cases just small. Considering this, the Pareto test is recommended for sensitive and multi-level decision problems in which even relatively slight modifications in weight scores play significant role. If the weight scores of some decision elements are close to each other (sensitivity) or the slight modification is on the highest level and the modified score flows down in the whole hierarchy (multi-level decision problems), checking Pareto optimality can be a crucial issue.
The general procedure of Pareto-test, created by [23] can be seen in Fig. 1.
We have applied all steps of Pareto-test in our survey for ensuring the robustness of the final results.
III. HIERARCHICAL DECISION MODEL FOR AUTONOMOUS PROVING GROUND DEVELOPMENTS
According to the method presented in Section 2, a questionnaire based investigation process has been worked out. As a first step, a hierarchy tree in the field of connected and automated vehicles had to be determined. The defined hierarchy tree has three main levels, according to the common decision making structure in which the problem is divided into several sub-problems embracing strategic, tactical and operational decisions [26], i.e. decisions must be approached on 1. the strategic level, 2. the tactical level, 3. the operational level. The strategic level is directly adopted from the report of [27], which was prepared under the initiative of the European Commission's Directorate-General for Research and Innovation. The reason for this adoption is its relevance, i.e. the content of the report is based on the contribution of different relevant European stakeholders from industry, the academic field as well as authorities. The report's original aim was to develop a research and innovation roadmap for CAV transport. Thus, it was fully applicable to the questionnaire based investigation, the goal of which was to determine the key areas of autonomous proving ground developments. The strategic level has to embrace all significant areas of autonomous transport. It covers a wide field of disciplines, all the way from vehicle control to socio-economic issues affected by autonomous transport. The seven elements of the strategic level [27] are listed below:
1. In-vehicle enablers
2. Vehicle validation
3. Shared, connected and automated mobility services for people and goods
4. Socio-economic impacts, user/public acceptance
5. Human factors
6. Physical and digital infrastructure, and secure connectivity
7. Big data, artificial intelligence
Given the strategic level adopted from the literature, the next step of hierarchy tree building was to determine the sublevels. As one of the contributions of the conducted research, we defined the elements on the tactical and on the operational level. The aim was to detail the subdivisions of the different areas of automated transport. The final hierarchy tree is based on the report of [27], the articles cited in the next paragraphs, as well as on our experience and knowledge. For the AHP analysis it is not necessary to complete every level; e.g. in our case the operational level is not fully detailed. Basically, the aim was that all elements should directly relate to autonomous transport. At the same time, we did not intend to analyze the hierarchy of different science areas.
In the following subsections each strategic level is introduced in detail together with the tactical and operational levels. The levels are tabulated into the subsequent tables.
A. IN-VEHICLE ENABLERS
In a CAV the human driver is partially or fully substituted by a controller's logic. The new testing and validation processes have to focus on the operation of that control system. The design of the control architecture elements and the testing procedure of their cooperation is one of the most important steps of realizing in-vehicle automated functions. The subdivisions of the in-vehicle enablers are the development process, the control system architecture, the environment sensing and the functional safety. Many of them were detailed earlier in the article of Gáspár et al. [28]. The strategic level of In-vehicle enablers is tabulated into Table 4.
B. VEHICLE VALIDATION
Due to the safety issues of automated vehicle features, testing and validation processes have become more important than before. Conventional vehicle tests typically focus on the behavior of the vehicle in various road conditions. Test cases usually concentrate on the dynamic properties and endurance capabilities of a single vehicle. In the case of CAV transport, however, testing and validation are no longer restricted to a single vehicle, but rather extend to a complex traffic system in which the automated vehicle is a part of the surrounding traffic environment. CAVs must be tested in different traffic use-cases. Due to the stochasticity of traffic situations, the testing and validation process should be more complex than before: it shall be carried out by the ''V-model'' of product development [29]. The subdivision of vehicle validation is created based on the article [30]. This article introduced the CAV testing and validation pyramid, whose layers were adopted as the elements of the tactical level. The operational level was left blank intentionally as it has minor influence from the aspect of the whole research. The strategic level of Vehicle validation is tabulated into Table 5.
C. SHARED, CONNECTED AND AUTOMATED MOBILITY SERVICES FOR PEOPLE AND GOODS
With the arrival of autonomous vehicles, the whole transportation system will continuously change. Everyday transport of people and goods will be transformed into new services to be offered. This process has already started with the appearance of the concept of MaaS (Mobility as a Service) a few years ago. This means that, based on the driverless capability of road vehicles, shared mobility services can distribute and so optimize the transportation needs in the future. Although mobility services mainly mean software development, the implications of this change are also worth investigating in a test track environment, e.g. traffic management, autonomous public transport, smart city developments. The subdivision of this strategic element is based on a previous article [31]. The strategic level of Shared, connected and automated mobility services for people and goods is given by Table 6.
D. SOCIO-ECONOMIC IMPACTS, USER/PUBLIC ACCEPTANCE
The social and economic impacts are unavoidable and will be disruptive due to the fast technological changes in both automotive and information technologies. Accordingly, the issues of education planning, research support, change of legislation as well as public acceptance must be carefully prepared and conducted in order to adapt users smoothly to the proper use of the new technologies. Apart from transportation, automated driving technology can reach society through education and legislation. Moreover, CAV transport creates jobs in the research area. The well-organized education, research and legislation are key factors of the acceptance of CAV technologies. The strategic level of Socio-economic impacts, user/public acceptance and its tactical sublevels are provided by Table 7 (again with operational level intentionally left blank).
E. HUMAN FACTORS
Human factors are important throughout the whole development process of autonomous cars, as humans will always be present in the system. Moreover, human factors pose some of the main challenges in the development. The automated technologies must deal with human interactions and behavior, e.g. in the context of the interaction between humans and autonomous vehicles or travelers' behavior in autonomous public transport. Therefore, the implications of human factors also need to be considered in a test track environment. The subdivision on the tactical level is based on the research of Hudson et al. [4]. The strategic level of Human factors is tabulated with its sublevels into Table 8 (operational level intentionally left blank).
In the previous part the possible research and development areas at autonomous proving ground have been determined for the survey research. The full hierarchy tree of the areas is provided in the Appendix.
F. PHYSICAL AND DIGITAL INFRASTRUCTURE AND SECURE CONNECTIVITY
The proper testing possibility of smart road infrastructure is important in a test track, as the operation of future autonomous vehicles will strongly depend on the intelligent infrastructure. Similarly, the surrounding ICT infrastructure needs to be tested intensively. Vehicle localization (HD maps, satellite navigation) and communication systems within and among vehicles also need to be developed in a test track environment. Based on the connected technologies, the localization and the control of automated vehicles can be further improved, e.g. by combining a GNSS system with environment mapping, the local lateral and longitudinal control can be made more accurate. Besides, traffic management can be made more effective by V2X (Vehicle-to-everything) communication. If infrastructure is present for CAV control, the wireless communication systems are to be deployed. Communication introduces a further challenge into road transport, i.e. cyber security must be guaranteed at all times [10]. Due to the wide utilization possibilities of connected technologies in the field of autonomous transport, the subdivision of this strategic part is more detailed. The elements of the tactical level represent the typical utilization types of connected technologies. The operational level here enumerates different technical development opportunities. This subdivision applies several ideas from the research of Gáspár et al. [28]. The strategic level of Physical and digital infrastructure and secure connectivity and its sublevels are provided by Table 9.
G. BIG DATA, ARTIFICIAL INTELLIGENCE
Data management is also a crucial issue in the context of autonomous transportation. On the one hand, data of the individual cars must be safely and efficiently handled. On the other hand, traveler data also represents significant value for application in CAV transport operation and management, i.e. autonomous transport of the future can be optimized based on relevant and up-to-date travel data. Appropriate storage, access, analytics and privacy issues are therefore to be investigated and regulated. The basic applications of these tasks must be launched when testing on the test track. To realize traffic management with connected vehicles, a huge amount of data has to be handled. Many studies investigate the challenges of big data management. Besides, the achievements of artificial intelligence can also be applied in the decision making processes of modern traffic management systems. The subdivision is partially based on the research of Bartolini et al. [7]. The strategic level of Big data, artificial intelligence with the tactical and operational levels is given in Table 10.
H. QUESTIONNAIRES FILLED BY THE TARGET GROUP
Based on the hierarchy tree explained in the previous part, two types of questionnaires have been worked out. The members of the target group had to complete these questionnaires. In the questionnaires the participants had to give a higher score to a given element if they regarded it as a more important key factor in the spread of autonomous driving. To explain it from another aspect, the questionnaires investigated which areas need more development financing to facilitate autonomous driving. In the first questionnaire the target group had to give a score to each element at every level of the hierarchy tree. The results of this questionnaire were used to check the results of the second questionnaire, which was based on the Pareto AHP method. Another important goal of the first questionnaire was to present the hierarchy tree. While filling in the first questionnaire, the members could visually grasp the structure of the hierarchy tree. In the second questionnaire, the elements of the hierarchy tree had to be compared and scored in pairs. Filling in this questionnaire took more time, as it did not show the hierarchy tree. At the same time, the second questionnaire provided more detailed results.
The questionnaires were completed in February 2019 by the target group designated at the authors' institutional affiliation (i.e. the Budapest University of Technology and Economics). The participants of the target group came from different scientific areas and from different hierarchy levels; at the same time, they were all familiar with CAV technologies and were working together with industrial partners under the umbrella of CAV research and development projects. In all, the target group contained 20 members, whose opinion can be regarded as well-founded due to their everyday work. Besides, it is important to emphasize that the participants work in different technology fields, which was beneficial, as experts usually overestimate their own specialization. Although the target group was based on the Higher Education Excellence Program (BME FIKP-MI/FM) running at the university from 2018 to 2021, all participants of the group work in different disciplines of CAV transport development, both for the university and for the ZalaZone test track [32]. The evaluators were transportation engineers, vehicle engineers, electrical engineers, software engineers, control engineers as well as civil engineers. From the aspect of qualification, there were PhD students, lecturers, university professors, as well as research fellows in the sample. It must be emphasized, however, that about half of the target group participants also hold a position in industrial companies (which is common in the Hungarian academic area), i.e. they have a clear view from commercial and industrial aspects. The questionnaires were completed on paper with the personal attendance of the instructor. Thus, false filling due to any misunderstanding could be avoided. The first questionnaire took 15 minutes to fill in, and the second one took 40 minutes on average.
Since the sample consisted strictly of specialized experts, a size of 20 can be considered sufficient for an MCDM survey, and within that for an AHP survey. As justification for this practice in decision-making research, consider the study of Lee [33], which applied 21 evaluators with different backgrounds (researchers, business executives and public agency staff members) in a public transport survey.
IV. RESULTS
The Pareto test, presented in the Methodology section, has been carried out following the rules of PAHP. Having set up the hierarchical decision tree of autonomous proving ground development (see Table 4), we created the questionnaires and collected survey data as introduced in the previous section. The consistency check ensured that all evaluations were within the range of acceptable inconsistency, namely the computed Consistency Ratio was below the 0.1 threshold in each empirical pairwise comparison matrix. Afterwards, we used the geometric mean to create the aggregated matrices for each part of the hierarchical decision tree (see Table 11). Then we applied the eigenvector method (Formula 2) to every aggregated matrix, and at the same time we tested the Pareto optimality of the eigenvector for all 18 pairwise comparison matrices (the eigenvectors of 3 × 3 matrices are always Pareto optimal). We did not find any matrix with a non-Pareto-optimal eigenvector. For demonstration, the aggregated matrix of the first-level decision elements is provided in Table 11.
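To make the computation concrete, the following is a minimal Python sketch (not the authors' code) of the three steps described above: geometric-mean aggregation of the evaluators' pairwise comparison matrices, the eigenvector method for deriving the weight vector, and the consistency-ratio check against the 0.1 threshold. The 3 × 3 example matrices and the two-evaluator setup are hypothetical.

```python
import numpy as np

# Saaty's random consistency index, indexed by matrix size n.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def aggregate(matrices):
    """Element-wise geometric mean of the individual pairwise comparison matrices."""
    return np.exp(np.log(np.stack(matrices)).mean(axis=0))

def eigenvector_weights(A):
    """Principal eigenvector of A normalized to sum to 1 (the AHP weight vector)."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum(), eigvals[k].real

def consistency_ratio(A):
    """CR = CI / RI with CI = (lambda_max - n) / (n - 1); CR < 0.1 is acceptable."""
    n = A.shape[0]
    _, lam_max = eigenvector_weights(A)
    ci = (lam_max - n) / (n - 1)
    return ci / RI[n] if RI[n] > 0 else 0.0

# Hypothetical reciprocal pairwise comparison matrices from two evaluators.
A1 = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
A2 = np.array([[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]])

A = aggregate([A1, A2])
weights, _ = eigenvector_weights(A)
print("weights:", np.round(weights, 3), "CR:", round(consistency_ratio(A), 3))
```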
This approximation cannot be improved (it is not perfect due to the small but acceptable inconsistency of the scoring), so the results of the AHP process can be considered Pareto optimal. Had this first-level weight score calculation not been optimal, even small modifications would have caused significant changes in the lower-level ranking of the decision elements. The efficiency test has proven the robustness of the survey in terms of deriving the weight scores from the evaluated matrices. Consequently, the final scores can be considered trustworthy and the results reliable.
On the strategic level, according to the target group's opinion, the key factors of the development are the control systems of autonomous vehicles: In-vehicle enablers (C1) and Vehicle validation (C2) reached the highest scores. Their importance is clearly more significant compared to the other elements. Big data, artificial intelligence (C7) and Physical and digital infrastructure and secure connectivity (C6) received medium-high scores. The least important elements on the strategic level are Shared, connected and automated mobility services for people and goods (C3), Human factors (C5) and Socio-economic impacts, user/public acceptance (C4). The evaluation shows that the scientific fields connected more to the social sciences were considered less important areas. It is also possible to explain the results by the expected evolution of the new technologies. First, CAV control technologies have to be realized, i.e. the first step is to design, test and validate the new control systems. If they operate well, they will be able to physically appear in road transport. The issues of human factors and socio-economic impacts will obviously appear in a later phase, when CAV transport becomes more and more of a reality and is used in practice. The results of the strategic level can be seen in Fig. 2a. The relationships among the scores of the strategic-level items can be better perceived in Fig. 2b. The scores of the tactical level are meaningful because the tactical level does not have empty parts. According to the results of the Pareto AHP method, the most important elements of the tactical level are Automotive functional safety and cyber secure control (C14), Public road testing and validation (C26) and Big data for efficient transport planning and traffic management (C72). Besides, it can be seen that the elements related to environment sensing, vehicle testing, vehicle localization and artificial intelligence also reached higher scores. The least important elements in the results are Human factors and dynamic characteristics (C51) and Travel behavior in autonomous public transport (C55).
The high score of public road testing and validation must be emphasized. It means that the target group found it is not enough to test the new technologies in a closed area; the stochasticity of traffic situations also requires real traffic environments. The ranking of the further tactical-level elements shows similarity with the strategic level: the elements related to vehicle control systems, safety and localization have higher scores. It confirms that operability, reliability and safe operation are the main key factors for the next few years. The human factors again received lower scores. Surprisingly, the topic of big data in road transport achieved a high score, which was unexpected as it concerns transport operation and management rather than CAV development. The low scores of laboratory and simulation testing as well as validation were also unexpected. Probably, the target group balanced the difference between the different testing levels, and from this aspect public road testing really has a stronger effect on road safety. It is also an interesting result that the target group regarded research itself as more important than education, although most of the participants (besides their research and development activities) teach in university programs.
The results on the tactical level are shown in Fig. 3. On the operational level there are areas similar to those on the higher levels, and there are also new areas in the list of key elements. The target group selected the following fields as the most important topics for CAV research and development: Road traffic control for mixed traffic (autonomous and conventional cars) (C312), Traffic management for fully autonomous transport (C313), GPS/DGPS technologies (C634) and Data privacy (C714). There are also elements with higher scores that are related to the control systems of automated vehicles, for instance sensor architectures and trajectory planning. An interesting result of the operational level is that it repeats one result of the tactical level: the high score for road traffic management and control. Behind these results, the importance of urban traffic issues can be found. Urban traffic management requires vehicle localization and communication between vehicles. The elements related to localization (and also to vehicle control or environment sensing) obtained higher scores, in contrast to the unexpectedly lower scores of the connectivity technologies. The higher scores of control systems and the lower scores of autonomous delivery services or autonomous freight fleets show the same pattern as on the higher levels. Probably, the background of this opinion is that the development of the basic functions is more important than the features that can later be realized with them.
The results on the operational level can be seen in Fig. 4. The sensitivity analysis proved the stability of the obtained ranking results. We conducted numerous versions of the analysis and selected decision elements from the first level of the decision hierarchy to examine the impact of the modifications on the lower levels and to find out whether the originally obtained priority is sensitive. The most obvious selection from the testing point of view was to pick the two closest first-level items, Vehicle validation (C2) and In-vehicle enablers (C1), subtracting some points from C2 and adding them to C1. We reached a change of 0.01 (which can be considered a significant change for a PAHP test) for which the ranking on all levels still remained stable. As demonstrated in Table 13, the positions of the first five most significant elements did not change, neither on the second nor on the third level of the decision hierarchy. In Table 13, the affected elements and their new scores are highlighted in bold.
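The weight-perturbation step of the sensitivity analysis can be sketched as follows; this is an illustration rather than the procedure used by the authors, and the first-level weights, the local weights and the element identifiers under each parent are hypothetical. It shifts 0.01 of weight from C2 to C1, re-normalizes, recomputes the global (second-level) scores, and checks whether the ranking changed.

```python
def global_scores(first_level_w, local_w):
    """Global score of each lower-level element: parent weight times its local weight."""
    return {c: first_level_w[p] * w
            for p, children in local_w.items() for c, w in children.items()}

def perturb(weights, src, dst, delta):
    """Move `delta` weight from `src` to `dst` and re-normalize so the weights sum to 1."""
    w = dict(weights)
    w[src] -= delta
    w[dst] += delta
    total = sum(w.values())
    return {k: v / total for k, v in w.items()}

# Hypothetical first-level weights and local weights of the children under three parents.
first_level = {"C1": 0.30, "C2": 0.28, "C7": 0.12, "C6": 0.11, "C3": 0.07, "C5": 0.06, "C4": 0.06}
local = {"C1": {"C14": 0.55, "C11": 0.45},
         "C2": {"C26": 0.60, "C21": 0.40},
         "C7": {"C72": 0.70, "C71": 0.30}}

rank = lambda scores: sorted(scores, key=scores.get, reverse=True)
base = global_scores(first_level, local)
perturbed = global_scores(perturb(first_level, "C2", "C1", 0.01), local)
print("ranking stable:", rank(base) == rank(perturbed))
```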
V. CONCLUSION
As a main output of the conducted research, a hierarchical decision model has been created for the testing and proving processes of CAVs and related technologies. The applied methodology has confirmed the robustness of the results twofold: sensitivity analysis revealed the stability of the hierarchical structure weight scores for minor changes, while the Pareto test confirmed that the gained weights are the best possible reflection of the experts' intention. In the case of expert surveys where the calculated weight scores have high significance (e.g. in investment decisions), it is highly recommended to apply not only the conventional tools of AHP (including sensitivity analysis) but also the Pareto optimality test to gain robust results. Accordingly, a Pareto AHP model was constructed and successfully applied to the problem of autonomous proving ground development. Although it is not obvious that the outcome of the present research in terms of ranking the decision elements of autonomous testing can be generalized, due to the specific features of the given test track (size, geographic location, etc.), the proposed hierarchical decision model and the created PAHP procedure are a proper basis for assisting decision making in similar problems. Even though the examined case study did not contain non-Pareto-optimal weight vectors, it is highly suggested that other applications conduct the Pareto test to avoid the risk of rank reversal and, thus, biased final decisions. Future work consists of the extension of the current model by adding possible interrelations of the decision elements. However, it has to be emphasized that the linkages of the elements are basically hierarchical, thus AHP can be considered a good approximation of the problem. The Analytic Network Process (ANP) would examine all possible connections of the attributes and, for such an enormous number of criteria, would probably be difficult to apply. A more realistic methodology might be a model which keeps the PAHP results and amends them with the non-hierarchical linkages, such as a combination of AHP and Interpretive Structural Modelling.

He acts as an associate professor and also participates in research and industrial projects as a researcher and project coordinator. He has coauthored more than 100 scientific articles, two patents, and several books. His current research interests include road traffic modeling, estimation, and control with applications in intelligent and autonomous transportation systems. He is a member of the Committee on Transport Engineering of the Hungarian Academy of Sciences. He is a Management Committee Member of the European Cooperation in Science and Technology COST Action CA162222 (Wider Impacts and Scenario Evaluation of Autonomous and Connected Transport).
ÁDÁM NYERGES received the M.Sc. degree in vehicle engineering in 2013. Since 2013, he has been a Researcher and an Assistant Lecturer with the Budapest University of Technology and Economics. Besides, he has participated in several research projects on the topic of automated vehicle testing and validation. His current research interests include the simulation and control of complex systems, typically internal combustion engines and hybrid and electric powertrain systems.
ZSOLT SZALAY received the M.Sc. degree in electrical engineering and the M.Sc. degree in business administration from the Budapest University of Technology and Economics, in 1995 and 1997, respectively, and the Ph.D. degree in mechanical engineering in 2002. He is currently an Associate Professor and the Head of the Budapest University of Technology and Economics, Hungary. He has written more than 200 scientific publications. His research interests include advanced automotive technologies, the IoT telematics, and the security of vehicle cyber-physical systems.
"Economics"
] |
Dynamics of family households and elderly living arrangements in China, 1990–2010
This article presents analyses of the dynamics of family households and elderly living arrangements in China, mainly based on the micro data of the 2010, 2000 and 1990 censuses. We demonstrate and discuss the trends and rural–urban differentials of the largely declined household size, the quickly increasing one-person and one-couple-only households, and the substantially increased proportions of elderly living alone or with spouse only. It is strikingly interesting that the proportion of three-generation family households increased by 18.9% in rural areas but decreased by 23.7% in urban areas in 2010 compared to 1990, due to rural–urban differences in the demographic effects of the large fertility decline and in socioeconomic/attitude changes. We also present and discuss two interesting demographic phenomena which were relatively overlooked in the literature. First, the increase in the number of households is much larger than population growth, due to the shrinking of household size and the decomposition of larger families into smaller ones, combined with much slower population growth. Second, the increases in the numbers of elderly (especially the oldest-old) who live alone or with spouse only are dramatically larger than the increases in the corresponding proportions, due to the effects of rapid population aging as later and larger birth cohorts become old. Such trends have important implications for analyses of the current and future market demands for products and services for which households are the consumption units. We recommend that studies on home-based energy use and sustainable development should be based on analyses of family household dynamics rather than population growth.
Introduction
Under the rapid socioeconomic transformations which have taken place in China over the last several decades, how have Chinese family households and elderly living arrangements changed? How can we better understand these dynamic changes? Our previous studies based on the one-per-thousand micro sample data from the 1982, 1990 and 2000 censuses of China have shown that, during the period 1982-2000, one-person and one-couple-only households increased quickly, average household size decreased significantly, and the proportions of elderly-couple-only households and elderly who did not live with children substantially increased (Zeng and Wang 2003). Other studies had similar findings and concluded that the family transformation in China during the period 1982-2000 was caused by factors including the tremendous fertility decline, rapid industrialization, increasing migration, the rise in women's education, and the significant changes in social attitudes and economic mobility related to co-residence between old parents and adult children (Wang 2006; Guo 2008; Fan 2002; Cheung and Yeung 2013).
The most recent census of China in 2010 reveals that the trends outlined above have continued. For example, although the total number of households continues to increase in China, the average household size was reduced from 3.44 in 2000 to 3.09 in 2010; in particular, small households with only one or two persons have increased rapidly (Zhou 2013). With regard to household structure, Wang (2013) found that the nuclear households, the three-generation stem family households, and the one-person households made up the majority of Chinese households in 2010. Among these three major types of households, the proportion of three-generation stem family households remained stable in recent decades, whereas the proportion of nuclear family households significantly declined in 2010 as compared to 2000 due to the rapid increase of one-person households. Hu and Peng (2014) and Cheung and Yeung (2013) pointed out that young rural immigrants to urban areas could have contributed to the growth of one-person households in both rural and urban areas: the inflow of young immigrants increases the number of one-person households in cities, while simultaneously the left-behind elderly parents contribute to the increase of one-person elderly households in rural regions. With regard to elderly living arrangements, the increase in the proportion of elderly aged 65 or over who live alone or with spouse only and the decrease in the proportion of elderly living in three-generation stem family households from 1982 to 2010 are very substantial (Wang 2014; Zhang 2013).
Based on our own and others' previous studies, this article intends to make significant contributions to a better understanding of the dynamics of households and elderly living arrangements in China. We conduct comparative analyses across different periods as well as rural and urban areas, based on analysing the micro data files of the 2010, 2000, and 1990 censuses in combination with the officially published 100% cross-tabulations. We integrate the analysis of elderly living arrangements with family household dynamics in this article because the Chinese population has been aging rapidly (Banister et al. 2010) and the family is the most important institution for old-age support in Chinese society (Pei and Pillai 1999; Chen and Silverstein 2000; Yeung and Xu 2012). We will investigate the trends and patterns based not only on the dynamics of the proportion distributions of household types/sizes and elderly living arrangements but also on the changes in the absolute numbers, which are useful for socioeconomic planning and business/market analyses. The next section outlines the data sources and the approach of the analyses. The third and fourth sections present the general patterns and dynamic changes of family household sizes and types as well as the living arrangements of the elderly since 1990. The fifth section discusses the rural-urban differentials. Throughout the paper, we will also discuss socio-economic and cultural explanations of the patterns and dynamic changes in Chinese family households and elderly living arrangements.
Data sources and the approach of analyses
The analyses presented in this article are mainly based on the micro sample data of the 2010, 2000, and 1990 censuses, with sample sizes of 1.34, 12.6 and 1.14 million persons, respectively (the sample fraction was one-per-thousand of the total population for the 2010 and 1990 censuses and one-per-hundred for the 2000 census). Based on analyzing the 1953, 1964, and 1982 census data and the 1982 one-per-thousand fertility survey data, Coale (1984) concluded that the data passed a series of stringent tests of accuracy and consistency. Other scholars who have analyzed Chinese censuses and survey data have reached similar conclusions (Kannisto 1986; Lavely 2001; Cai 2013). Underreporting of births has, however, become a problem in recent decades, contributing to the underestimation of not only fertility but also family household size. Based on sophisticated demographic analysis using the censuses and various other kinds of data, many scholars demonstrated that overall fertility in China (especially in urban areas) has been far below the replacement level since the late 1990s (Zhang and Zhao 2006; Zhao and Chen 2011), and thus the effects of underreporting of births on estimates of family household size may not be very large. Statistical officers and scholars in the field generally believe that census enumerations have become more difficult in the process of radical market economic reform, mainly because many more people were moving around and the administrative system was not yet adapted to the tremendous changes. For example, based on post-census sampling surveys, the officially published net undercount rate of the 2000 census was 1.81%, in contrast to 0.6% in the 1990 census. However, the officially reported net undercount rate in the 2010 census was 0.12%, largely reduced compared to 2000 and 1990, perhaps due to the more mature administrative system adapted to the market economic system (Cui et al. 2013). In general, the undercount rates in the contemporary Chinese censuses are not very high compared to other countries (Zhao 2011). Nevertheless, we must keep the issue of undercount rates in mind, although it may not significantly affect our analysis of family household types and living arrangements of the elderly, who usually do not move around.
Note that governmental socioeconomic planning and private business market analyses need not only detailed proportion distributions but also absolute numbers of households by types/sizes and of elders by living arrangements. In some circumstances, the dynamic changes in absolute numbers may be of more practical usefulness than those of proportions. For example, as discussed in Sect. 4.3, the number of Chinese oldest-old aged 80+ living alone (who may likely need care services) in 2010 increased by 233.2% compared to 1990, in contrast to a 21.8% increase in the proportion of oldest-old living alone among the total population in the same period.
The statistical offices publish cross-tabulations of both proportions and absolute numbers based on the 100% census data, but these cross-tabulations contain only certain limited broad categories; they do not have detailed information on households by types/sizes and contain very little information about elderly living arrangements. Thus, scholars rely on the micro samples of the censuses to estimate the proportion distributions by detailed types of family households and elderly living arrangements, which are very useful for academic research and policy analysis. However, almost all of the previously published studies on family households and elderly living arrangements based on the census micro data included only proportion distributions and did not contain detailed information about the cross-sectional and dynamic changes in absolute numbers. Our present study intends to contribute to this research field by estimating and discussing both detailed proportions and absolute numbers of family households by types/sizes and of the elderly population by living arrangements, based on an integrated analysis of the census micro sample data and the official 100% cross-tabulations.
Note that it is not valid to simply multiply the detailed proportion distributions of family households and elderly living arrangements derived from the census micro sample data by the absolute numbers of the officially published, very limited summary measures based on the 100% census data to estimate the corresponding detailed absolute numbers, as it would produce results which are not internally and logically consistent. Thus, to avoid this inconsistency, we apply the "BasePop" module of the ProFamy extended cohort-component model and its software program for projections of households and elderly living arrangements (Zeng et al. 2014). Based on the detailed census micro sample data and the official 100% census cross-tabulations of summary measures, the ProFamy "BasePop" module prepares the detailed 100% population distributions of households and living arrangements by household types/sizes, age/sex, co-residence and rural-urban residence in the census year as the baseline for the family household projections, while ensuring internal consistency and accuracy. The ProFamy model and its technical modules (including BasePop) and procedures were described, numerically evaluated and discussed elsewhere (Zeng et al. 1998, 2014), and thus need not be detailed here.
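The consistency problem with the naive approach can be illustrated with a toy calculation; all numbers below are hypothetical, and the snippet is not the ProFamy BasePop procedure — it only shows why simply scaling sample-based proportions by one official total can contradict another official total.

```python
# Hypothetical sample-based proportions of households by size (persons per household).
sample_props = {1: 0.14, 2: 0.23, 3: 0.27, 4: 0.19, 5: 0.10, 6: 0.07}

# Hypothetical officially published 100% totals.
official_households = 400_000_000
official_population = 1_300_000_000

# Naive estimate: scale the sample proportions by the official household total ...
est_households = {size: share * official_households for size, share in sample_props.items()}

# ... and check the population implied by these detailed counts against the official count.
implied_population = sum(size * count for size, count in est_households.items())

print(f"implied population:  {implied_population:,.0f}")
print(f"official population: {official_population:,}")
print(f"inconsistency:       {implied_population - official_population:,.0f}")
```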
Changing family households, 1990-2010
Chinese family household size is steadily decreasing
In 1990, four-person households constituted the largest share of all household size categories, but they became the second largest in 2000 and the third in 2010. Five-or-more-person households accounted for 33% of total family households in 1990 but sharply declined to 22% in 2000 and to 17% in 2010. Three-person households constituted the largest percentage share in both 2000 (30%) and 2010 (27%), whereas two-person households became the second largest group in 2010 (23%). Large households were no longer common: six-or-more-person households constituted 15.4% in 1990, decreased to only 8.1% in 2000, and declined further to 6.6% in 2010 (see Fig. 1).
The average family household size in China was 5.6 in 1930-1940 and 4.36 in 1982; it was reduced to 3.94 in 1990, further decreased to 3.45 in 2000, and then to 3.10 in 2010. Note that according to the Chinese census enumeration rules, the average family household sizes include the emigrants who left home for less than half a year for job-related reasons, who were counted as home-household members even though they were not actually living in their home residence. Therefore, the actual average household size in China today would be even smaller than the published figures if those who left home for less than half a year to find a permanent job elsewhere were not counted as members of their hometown households. It is clear that Chinese family household size is steadily and substantially decreasing due to dramatically decreased fertility, rapid industrialization, the rise in education, and changes in people's attitudes, which tend to favor smaller family households.
Although Chinese family households maintain the typical Asian characteristics, namely, the three-generation extended family households remain a relatively large proportion of the household types (to be detailed in Sect. 3.5), Chinese family households in 2000-2010 were substantially smaller than those of many large Asian developing countries. For example, the average family household size in India and Indonesia in 2010 was 4.91 and 3.90 (per the Indian and Indonesian censuses), which is 58.9 and 26.2 percent larger than that in China in the same year, respectively.
Dramatically increased proportion of one-person and one-couple only households
One-person households in 2010, 2000 and 1990 accounted for 14.5, 8.3 and 6.3 percent of all households, respectively, representing 75.1 and 131.8 percent increases in 2010 compared to 2000 and 1990, respectively. One-couple-only family households accounted for 17.7% of all households in 2010, which was 2.7 times as large as in 1990 and 1.4 times as large as in 2000 (see Table 1). The average annual rate of increase in the percentage of one-couple-only households was 8.6% in the period between 1990 and 2010. This dramatic increase is likely due mainly to considerably more elderly couples living without their children (to be discussed later) and many couples delaying childbearing in 2010 as compared to 2000 and 1990; the increasing number of young couples in the cities who choose to remain childless (the so-called "Double Income and No Kids") may also be a contributing factor. For example, based on the well-known "Zero point index" surveys, the proportion of "Double Income and No Kids" family households in the largest Chinese cities of Beijing, Shanghai, Guangzhou and Wuhan increased from 1.1% in 1997 to 10.5% in 2004, and the average proportion among 20 Chinese cities (including middle-sized and smaller ones) was 6.5% in 2008.
However, the dramatically increased percentages of Chinese one-person and one-couple-only households are still much lower than those in Western countries. For example, one-person and one-couple-only households in the United States in 2010 constituted 26.7 and 27.2 percent of the total number of households, being 1.84 and 1.54 times as high as the Chinese figures, respectively. The main reasons why the percentages of one-person and one-couple-only households in China are still much lower than those in Western countries are threefold. First, many fewer Chinese remain never-married for life. Second, most Chinese couples, especially the roughly half of the population who live in rural areas, had their first birth earlier than their Western counterparts, and far fewer couples remain permanently childless. Third, as discussed in greater detail later, unlike the elderly in Western countries, who mostly do not live with their adult children, most Chinese elderly, especially those who have no spouse, live with their children, and this tradition remains in place although it is declining.

Figure 2 shows that the number of Chinese family households increased by 45.1% in 2010 compared to 1990, which is 2.5 times as large as the population growth (17.9%) during the same period. Figure 2 also demonstrates that the relative difference between the increase in households and population size in the later period 2000-2010 was much larger than that in the earlier period 1990-2000. More specifically, the increase in the number of households was 3.8 times (= 18.0%/4.8%) as large as the population growth in 2000-2010, in contrast to the corresponding relative difference of 1.8 times (23.0%/12.5%) in 1990-2000. The data shown in Fig. 2 clearly indicate that while population growth in China has slowed down substantially, the number of households is increasing rapidly because many Chinese people are forming one- or two-person and other kinds of small households. The trend of a much faster increase in the number of households than in population has important implications for the current and future market demands for products and services of which households (rather than individuals) are the consumption units, such as housing, home-based energy use, TVs, refrigerators, washing machines, furniture and family-use vehicles. For example, there has been a rising consensus that household increase (rather than population growth) should be considered one of the most important factors in analyses of home-based energy consumption (such as cooking, heating, cooling and private vehicles) and sustainable development (MacKellar et al. 1995; Liu et al. 2003). Even without population growth, energy consumption is driven by the growing number of households resulting from the smaller size of residential units.
Substantially decreasing percent of two-generation nuclear family households
The proportions of nuclear family households of the couple-with-children type and of single-parent-with-children households decreased by 28.9 and 33.7 percent, respectively, from 1990 to 2010 (see Table 1). This substantial decrease in nuclear family households is due to the large increase in one-couple-only and one-person households. The decreasing percentage of single-parent family households, while the divorce rate in China is increasing (Wang et al. 2018), may be explained by the facts that many divorces involve couples who have no children or whose children have already left home, by increased remarriage rates and by the decreasing widowhood rate.
Changes in proportion of three-generation family households
While nuclear family households are the mainstream in Chinese society today, extended family households with three generations also constituted a relatively large proportion: 18.41, 18.98, and 18.00 percent in 1990, 2000, and 2010, respectively (see Table 1). The three-generation family household was the second largest family household type in 2010; the most popular type was the two-generation nuclear household, and the third and fourth were the one-couple-only and one-person-only households.
Note that the proportion of three-generation family households in rural areas increased by 18.9% in 2010 as compared to 1990, but it decreased by 23.7% in urban areas in the same period, while the proportion of three-generation family households in rural and urban areas combined slightly decreased, by 2.3%, in 1990-2010. We will discuss this interesting phenomenon and the dramatic rural-urban differentials in Sect. 5.1.
Dynamics of elderly living arrangements, 1990-2010
Analysing the changes in elderly living arrangements reveals the changes in intergenerational co-residence between old parents and adult children more directly and accurately than looking only at the proportions of three-generation versus nuclear family households as discussed above. Furthermore, we must pay special attention to the living arrangements of the oldest-old aged 80+, who are most likely to need help and care in daily life and whose numbers are increasing much faster than those of any other age group. We therefore devote a substantial portion of this paper to analysing the dynamic changes in elderly living arrangements since 1990 and classify the elderly population into two broad groups: the younger elders aged 65-79 and the oldest-old aged 80+.
Co-residence between old parents and adult children declined substantially
As shown in Tables 2, 3 and 4, the proportions of elderly living with children (including children and grandchildren hereafter, unless otherwise specified) declined substantially in both periods 1990-2000 and 2000-2010. At the same time, the majority of Chinese elderly still live with their children, because children are currently the major source of old-age care in Chinese society. Note that the decrease among the young-olds (Table 3) was faster than that among the oldest-olds (Table 4). More specifically, the proportions of younger male and female elderly aged 65-79 who co-resided with children in 2010 were lower by 28.5 and 21.3 percent, respectively, as compared to 1990 (Table 3); the corresponding figures of decrease among male and female oldest-old (aged 80+) were 20.3 and 13.1 percent (Table 4). Among the male and female elderly populations aged 65+, the proportion of those living with children dropped by 27.6 and 19.2 percent, respectively, in 2010 as compared to 1990 (Table 2). These data indicate that the prevalence of the traditional co-residence between elderly parents and adult children declined substantially from 1990 to 2010, that the decrease was considerably more profound among the young-olds than among the oldest-olds, and that the decrease was substantially faster among males than females. Such trends and patterns may be due to younger and healthier elderly parents' increasing preference to live independently, and to more adult children having migrated away from their elderly parents for job-related reasons. It is clear that elderly females (either young-olds or oldest-olds) are much more likely to live with their adult children than elderly males (see Tables 2, 3, 4), and these gender differentials increased in 2000-2010 as compared to 1990. This is because elderly women are more likely to be widowed and economically dependent, and they are also more likely to prefer, and to be asked by their children, to live together in order to take care of grandchildren.
Proportion of living alone and living with spouse only among Chinese elderly substantially increased
The proportion of elderly aged 65+ who live alone declined by 6.4% from 1990 to 2000, but increased by 33.0% in 2010 compared to 2000. In the 20-year period from 1990 to 2010, the proportion of elderly living alone increased by 24.3% (Table 2). The relative increase of young-olds who lived alone was substantially faster than that of the oldest-olds, and the relative increases of female young-olds and female oldest-olds who lived alone were substantially faster than those of their male counterparts (Tables 3, 4). The relative increase in the proportion of elderly who lived with their spouse only in the period 1990-2010 was much faster for the oldest-olds (113.8%) than for the young-olds (82.2%), especially so for the female oldest-olds (167.1% increase) versus the female young-olds (86.6% increase). The large increase in the proportion of elderly who lived with their spouse only in 1990-2010 was likely because of the substantial decline in the proportion of elderly who lived with their adult children, due to the increased preference for independent living, the increased mobility of their children, the decline in mortality of elders' spouses, and the rise in remarriage rates among the elderly. The increase in remarriage rates among the elderly is a result of social reform and the progress of mate-matching services in the last two decades in China. The reform aimed to protect elders' rights, including the right to remarry, which in traditional Chinese society were often violated by the intervention of children and other family members. While the proportion of the elderly who live with a spouse only in China has increased substantially in the past two decades, it is still much lower than that in Western countries, because the proportion of Chinese elderly who live with children is much higher than in Western countries (Zeng et al. 2013).
Note that both younger elderly women and oldest-old women are much more likely to be widowed and thus to live without a spouse, with children only, or even to live alone (see Tables 3, 4). On the other hand, elderly women are economically more dependent. Therefore, the disadvantages of women in marital life and independent family household living arrangements are substantially more serious than those of men at old ages, and the gender differentials tend to increase with age.
The relative increases in absolute numbers versus proportions of elderly by living arrangements
It is interesting to note that, while the proportions of elderly aged 65+ who live alone and who live with a spouse only increased by 24.3 and 78.6 percent, respectively, between 1990 and 2010 (see Table 2), the absolute numbers of elders aged 65+ who lived alone and who lived with a spouse only increased by 134.7 and 237.3 percent in the same period (see Appendix Table 5). It is even more remarkable that the numbers of oldest-old who lived alone and who lived with a spouse only increased by 233.2 and 484.7 percent from 1990 to 2010 (see Appendix Table 7); in contrast, the increases in the corresponding proportions over the same period were 21.8 and 113.8 percent (see Table 4). The much larger relative increases in the absolute numbers of the elderly (especially the oldest-old) who live alone or with spouse only, compared to the corresponding proportions, are mainly due to the rapid population aging in China, characterized by a rapid increase in the numbers of elderly and especially of the oldest-olds as the later and larger cohorts become elderly and oldest-old (see the discussion in Sect. 2). Policy makers and business managers need to pay special attention to these trends of dramatic increase in the absolute numbers of elderly (especially the oldest-olds) who live alone or with spouse only (rather than looking at the proportions only) in their analyses, so as to appropriately plan social service programs and commercial market products.
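The relation behind this contrast can be written down explicitly. Under the simplifying assumption that the Table 4 proportions are taken within the oldest-old population, the number living alone is N = p × P, where p is the proportion and P the size of the oldest-old population, so the ratio of the two reported growth factors implies how much P itself grew; the snippet below only restates the reported percentages in this way.

```python
# Reported growth factors for oldest-old (80+) living alone, 1990-2010.
number_growth = 1 + 2.332       # +233.2% in the absolute number
proportion_growth = 1 + 0.218   # +21.8% in the proportion living alone

# If N = p * P, then N2/N1 = (p2/p1) * (P2/P1), so the implied growth of the
# oldest-old population P over the period is:
implied_pop_growth = number_growth / proportion_growth - 1
print(f"implied growth of the oldest-old population: {implied_pop_growth * 100:.0f}%")
# -> roughly +174%, showing how rapid aging turns a modest proportion change
#    into a dramatic change in absolute numbers.
```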
Rural-urban differentials in family household structure
The average sizes of family households in Chinese rural and urban areas in 2010 were 3.3 and 2.8, respectively; the average household size in urban areas dropped by 25.6% from 1990 to 2010, in contrast to the 19.9% decrease in rural areas in the same period (Table 1). As shown in Fig. 3, the major difference in the percentage distributions of households by size between rural and urban areas is that the percentages of small households of 1 or 2-3 persons in urban areas are much higher than those in rural areas, while the opposite is true for the larger households of 4-5 and 6+ persons. The rural-urban differences tend to be larger in the later years of 2010 and 2000 compared to 1990. The main factors behind such substantial differentials in family household size between Chinese urban and rural areas include the much lower fertility in urban than in rural areas, and the large rural-urban differentials in family structure to be discussed below.
One-person households and one-couple-only households were substantially less prevalent in rural areas than in urban areas, as revealed by all three censuses conducted in 1990, 2000 and 2010 (see Table 1). The proportion of one-person-only households increased by 149.7% in urban areas from 1990 to 2010, compared to a 112.1% increase in rural areas. The higher and faster increase of one-person households in urban areas may be a result of a higher divorce rate, more unmarried elderly preferring independent living, and the greater shortage or mobility of children in the cities than in the countryside.
The proportion of one-person-and-other(s) households in urban areas more than tripled in 2010 (3.2%) compared to 1990 (1.0%), while it increased by only 34.3% in rural areas. Data (not shown) indicate that almost all of the tremendous increase in the proportion of households with one person and other(s) in 1990-2010 came from households with a reference person aged less than 65. Thus, we believe that this is mainly because many more young or middle-aged urban residents do not live with a spouse and children but share an apartment with roommate(s).
Three-generation family households constituted 22.8% in rural areas in 2010, in contrast to 13.6% in urban areas in the same year; these data indicate that the prevalence of three-generation family households in rural areas was 1.7 times as high as that in urban areas (see Table 1). It is interesting to note that, compared to 1990, the proportion of three-generation family households in 2010 increased by 18.9% in rural areas, while it decreased by 23.7% in urban areas (Table 1).
Was the family household structure in rural China in 2010 more traditional than that in 1990? This seems unlikely because it would contradict the attitude and behavior changes expected from the rapid socioeconomic development and the opening to the outside world that have been occurring in China, in both rural and urban areas, in the past four decades. Moreover, as shown in Table 2, co-residence between old parents and adult children in rural areas also declined substantially during the period 1990-2010. Therefore, we believe that, while the proportion of two-generation nuclear family households dropped substantially in rural areas (see Table 1), the considerable increase in the proportion of three-generation family households in rural areas in 2010 compared to 1990 was mainly due to the demographic effects of the sharp decline in fertility. More specifically, given that most rural elderly parents still live with one married child (although this is declining), the adult children who were born after the early 1970s and have far fewer siblings due to the large fertility decline have a smaller chance of moving out of the parental home to form an independent nuclear family household (Zeng 1986, 1991), which thus resulted in the considerable structural increase in the proportion of three-generation households in rural China in 1990-2010. However, while rural fertility is still slightly above or around the replacement level, fertility in Chinese urban areas declined to below the replacement level in the late 1970s and has continued to decline or remained at a very low level since then. As modeled and numerically simulated in Zeng (1986, 1991), if fertility continues to fall after reaching the replacement level, a further reduction in the birth rate will reduce the proportion of three-generation households, because it will be impossible for some elderly parents to live with a married child even if they wish to do so, owing to the shortage of children. Of course, in addition to such impacts of the far-below-replacement fertility level in urban areas, largely changing attitudes concerning intergenerational co-residence and the increasing job mobility of adult children are also major factors contributing to the substantially decreased proportion of three-generation households in urban China in 2010 compared to 1990. Clearly, while family households have been changing radically in both rural and urban areas, rural Chinese family households are more traditional than their urban counterparts, because the socio-economic development level and the changes in people's attitudes about multi-generational co-residence are substantially slower in rural than in urban areas, as well as because of the different demographic effects of fertility decline between rural and urban areas.
Rural-urban differentials in elderly living arrangements and its dynamic changes
The proportions of elderly men who live with children in rural and urban areas in 2010 were 49.7 and 45.1 percent, respectively, and the corresponding figures for women were 62.8 and 55.4 percent (see Table 2). Obviously, the rural elderly are more likely to live with their children than their urban counterparts. Moreover, the proportion of elderly living with children declined at a slower speed in rural areas (20.8%) than in urban areas (25.0%) from 1990 to 2010. The proportion of urban elderly women living alone is higher than that in rural areas by 2.1 percentage points, but the proportion of urban elderly men living alone is 2.5 percentage points lower than that in rural areas (see Table 2). In urban areas, there was a 25.6% increase of male oldest-old who lived alone, in contrast to an 18.0% increase of rural male oldest-old who lived alone between 1990 and 2010 (Table 4). In 2010, about one-fifth of the female oldest-old in urban areas were living alone, representing a 58.4% increase compared to 1990, in contrast to 15.4% of the female oldest-old living alone in rural areas in 2010, a 10.9% increase over the period 1990-2010 (Table 4). The rural-urban and gender differences in the proportions of oldest-old living alone are enormous, and the large increase in female oldest-old living alone in urban areas deserves attention from the government and society.
The proportions of urban elderly men and women who lived with a spouse only in 2010 were higher than those of their rural counterparts by 7.0 and 4.6 percentage points respectively, and the higher widowhood rates and lower remarriage rates in rural areas than in urban areas may have contributed to this phenomenon.
Tables 5, 6 and 7 in the Appendix demonstrate the rural-urban differences in the relative increases of the absolute numbers of older adults by living arrangements, which are dramatically larger than the rural-urban differences in the changes of the proportions of the different living arrangements of the elderly presented in Tables 2, 3 and 4.
Conclusions
This article has presented analyses of the dynamics of family households and elderly living arrangements in China, mainly based on the micro data of the 2010, 2000 and 1990 censuses. We demonstrated and discussed the trends and rural-urban differentials of the largely declined household size, the quickly increasing one-person and one-couple-only households, and the substantially increased proportions of elderly living alone or with spouse only. It is strikingly interesting that the proportion of three-generation family households increased by 18.9% in rural areas but decreased by 23.7% in urban areas in 2010 compared to 1990, due to rural-urban differences in the demographic effects of the large fertility decline and in socioeconomic/attitude changes. We also presented and discussed two interesting demographic phenomena which are relatively overlooked in the literature. First, the increase in the number of households is much larger than population growth, due to the shrinking of household size and the decomposition of larger families into smaller ones, combined with much slower population growth. Second, the increases in the numbers of elderly (especially the oldest-old) who live alone or with spouse only are dramatically larger than the increases in the corresponding proportions, due to the effects of rapid population aging as later and larger birth cohorts become old. Such trends have important implications for analyses of the current and future market demands for products and services of which households are the consumption units, such as home-based energy use, housing, TVs, refrigerators, washing machines, furniture, family-use vehicles and health care services. We recommend that studies on home-based energy use and sustainable development should be based on analyses of family household dynamics rather than population growth.
Table 5 Numbers of elderly aged 65+ by living arrangements (unit: 10,000), 1990-2010, China (columns: Rural and urban combined, Rural, Urban; for each, 1990, 2000, 2010, and 2010 vs. 1990 (%))
"Economics"
] |
Biodegradable bacterial polyester, poly (hydroxybutyrate-co-hydroxyvalerate) copolymer, produced by moderately halophile bacterium Halomonas sp. PR-1 isolated from marine environment
PHA (polyhydroxyalkanoate) production by halophiles has attracted much attention in recent years. It was confirmed by FT-IR spectral analysis that the halophilic bacterium Halomonas sp. PR-1, isolated from saltern soil, synthesised poly(hydroxybutyrate-co-hydroxyvalerate) (PHBV) intracellularly from a simple carbon substrate. The carbon and nitrogen sources suitable for PHA production were selected as glucose and NH4Cl, respectively. The optimal ratio of glucose to NH4Cl was 20, at which the PHA content was 2.85 g/L and the PHA yield was 50.85%. The optimal NaCl concentration for PHA biosynthesis was 30 g/L, at which the PHA yield was 63.3%. The halophilic bacterium Halomonas sp. PR-1 is considered a promising candidate for PHA production.
Introduction
Polyhydroxyalkanoates (PHAs) are a group of bacterial polyesters synthesised by numerous bacteria as intracellular energy storage materials under nutrient-limiting conditions with excess carbon. These bacterial polyesters may be considered alternatives to plastic materials, as their structural properties are similar to those of polyethylene and polypropylene [1][2][3]. Their good biodegradability and biocompatibility make them useful biomaterials for applications in packaging, agriculture and the medical field [4][5][6]. Recently, the COVID-19 pandemic increased the demand for plastics and consequently highlighted the associated environmental challenge [7,8]. In this situation, bioplastics such as PHA have increasingly become a more sustainable solution [9].
The most common polyhydroxyalkanoate is polyhydroxybutyrate (PHB), a homopolymer formed by polymerization of 3-hydroxybutyrate. PHB tends to break down during the melting stage because its melting temperature is only a few degrees lower than its degradation temperature. In addition, it is brittle, hard and highly crystalline, and its high crystallinity results in slow biodegradation [10,11]. A way to overcome the drawbacks posed by the homopolymer for its applications is the biosynthesis of copolymers consisting of 3-hydroxybutyrate (3HB) and other hydroxyalkanoates. It was reported that copolymers have faster hydrolytic and enzymatic degradation rates than homopolymers due to their lower crystallinity [12].
PHA production by halophiles has attracted much attention in recent years [15][16][17][18]. PHA biosynthesis in halophilic microbes is a countermeasure for adaptation to hypertonic conditions and osmotic fluctuations. Intracellularly accumulated PHA granules help bacterial cells preserve cell integrity when exposed to sudden osmotic imbalances [19]. PHA production by halophiles provides additional advantages: PHA biosynthesis can be performed in an open, unsterile environment, and PHA granules can be easily recovered from bacterial cells by osmotic lysis [20,21].
This study describes the biosynthesis of PHBV by a moderately halophilic strain, Halomonas sp. PR-1, isolated from saltern soil in the western part of the DPR Korea.
PHA production studies
The PHA content was measured by the crotonic acid assay method. Biomass was determined turbidimetrically: the turbidity of the culture broth was measured at 660 nm (OD660) and then converted to cell dry weight via a standard curve. All experiments were carried out in shaken Erlenmeyer flasks, and all values were measured in triplicate and evaluated statistically.
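The two routine calculations behind the reported values can be sketched as follows: fitting a linear standard curve to convert OD660 readings into cell dry weight (CDW), and expressing the PHA yield as the percentage of PHA in the dry biomass. The calibration points below are hypothetical; only the final example uses concentrations in the range reported in this study.

```python
import numpy as np

# Hypothetical calibration points: OD660 readings vs. measured cell dry weight (g/L).
od_cal = np.array([0.5, 1.0, 2.0, 4.0, 6.0])
cdw_cal = np.array([0.45, 0.92, 1.85, 3.70, 5.60])

# Linear standard curve CDW = a * OD660 + b.
a, b = np.polyfit(od_cal, cdw_cal, 1)

def cdw_from_od(od660):
    """Convert a turbidity reading to cell dry weight (g/L) via the standard curve."""
    return a * od660 + b

def pha_yield_percent(pha_g_per_l, cdw_g_per_l):
    """PHA yield (%) = PHA concentration / cell dry weight * 100."""
    return 100.0 * pha_g_per_l / cdw_g_per_l

# Example in the range reported later in the paper: 2.85 g/L PHA at 5.6 g/L biomass.
print(f"PHA yield: {pha_yield_percent(2.85, 5.6):.1f}%")   # ~50.9%
```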
Effect of carbon source and nitrogen source
Halomonas sp. PR-1 was grown in 500 mL Erlenmeyer flasks containing 50 mL of mineral medium with the individual addition of glucose, fructose, sucrose, lactose or glycerol as the carbon source. The carbon source that produced the highest PHA level was used in subsequent experiments. Halomonas sp. PR-1 was then cultivated in medium with the individual addition of (NH4)2SO4, NH4Cl, NH4NO3, (NH4)3PO4 or NaNO3 as the nitrogen source, and the nitrogen source that provided the maximum PHA production was used in the subsequent experiments. The effect of the carbon to nitrogen (C/N) ratio on PHA production by Halomonas sp. PR-1 was investigated in mineral medium formulated by varying the carbon source level at a fixed concentration of the nitrogen source. These experiments were performed at a fixed NaCl concentration of 50 g/L.
Effect of sodium chloride
To investigate the effect of NaCl on PHA production, Halomonas sp. PR-1 was grown in minimal medium containing varying concentrations of NaCl (10, 30, 50, 70, 100, 120 and 150 g/L).
FT-IR spectroscopy
Absorption spectra were obtained by scanning with a WQB-520A Fourier transform infrared (FT-IR) spectrometer. The extracted and purified PHA samples were mixed with pulverized potassium bromide (KBr) and then compacted into pellets. Scanning was performed over the wavenumber range of 4000 to 500 cm−1.
FT-IR spectroscopy
The purified PHA sample was characterised by FT-IR spectral analysis (Figure 1). The FT-IR spectrum displayed an absorption band at 1724 cm−1 corresponding to the carbonyl group of the ester bond. The FT-IR result of the PHA formed by Halomonas sp. PR-1 is in complete agreement with the standard FT-IR spectrum of the copolymer PHBV. The PHA sample analysed was produced from glucose, the simplest carbon source.
Effect of carbon source type on PHA biosynthesis by Halomonas sp. PR-1
The type of carbon substrate significantly affects bacterial growth and PHA production. In our study, simple carbon sources such as glucose, sucrose, fructose, lactose and glycerol were compared in terms of biomass and PHA production. As shown in Figure 2, glucose supported the highest production levels of biomass (5.52 g/L) and PHA (1.58 g/L) among the carbon sources tested. Glucose is the simplest carbon substrate and can be easily metabolized by microbes. The halophilic bacterium Halomonas sp. PR-1 formed PHBV intracellularly from glucose (Figure 1). To our knowledge, there is so far no report of copolymer biosynthesis from glucose by a halophilic bacterium without the addition of any precursor. The moderately halophilic and alkalitolerant Halomonas campisalis MCM B-1027 produces the copolymer PHBV using maltose [23].
The production levels of biomass and PHA from sucrose and fructose were quantitatively similar. Sucrose has commonly been used for PHA production by other species such as Halomonas boliviensis and Alcaligenes latus [24]. The amount of PHA produced from lactose was negligible compared with that from the other substrates, indicating that lactose is not a suitable carbon source for PHA production.
Effect of nitrogen source type on PHA biosynthesis by Halomonas sp. PR-1
The effect of nitrogen source type on PHA biosynthesis by Halomonas sp. PR-1 was examined (Figure 3). Among the various nitrogen sources screened, ammonium chloride best supported the growth and PHA production of Halomonas sp. PR-1, providing the highest biomass of 5.8 g/L with a PHA content of 1.66 g/L. This agrees with the results of another study in which H. boliviensis produced PHA using ammonium chloride as the nitrogen source. However, Gao et al. reported that the most suitable nitrogen source for PHA production by H. venusta was (NH4)2SO4 [25]. Kawata et al. [24] reported that NaNO3 as the nitrogen source supported the greatest PHA biosynthesis by Halomonas sp. KM-1. In this study, however, biomass and PHA production were the lowest when NaNO3 was used as the nitrogen source. This may be explained by the diversity of strain preferences and variation. Like the carbon source, the nitrogen source is an important compound for microbial metabolism. The deprivation or reduction of the nitrogen supply to microbial cells limits bacterial cell multiplication, thereby decreasing the volumetric productivity of PHA. However, bacteria have been found to accumulate PHA when there is a limitation of nutrients, specifically nitrogen sources. Therefore, the ratio of carbon to nitrogen in the growth medium should be carefully examined in PHA production studies. Both carbon and nitrogen substrates are essential for microbial growth. In general, the carbon requirement of a microorganism is larger than its nitrogen requirement and affects the organism's ability to utilize the nutrients. In particular, the ratio of carbon to nitrogen concentration is very important for PHA production because PHA accumulation in bacterial cells is usually triggered under unbalanced growth conditions. As shown in Figure 4, the effect of the ratio of glucose to NH4Cl on PHA production was studied at various levels. The highest PHA level of 2.85 g/L was achieved when the glucose to ammonium chloride ratio was 20, with a biomass production of 5.6 g/L corresponding to a PHA yield of 50.89%. Halomonas sp. PR-1 produced the maximum biomass of 6.5 g/L at a glucose to ammonium chloride ratio of 15, but its PHA productivity at this ratio was lower than at a ratio of 20. When the glucose to ammonium chloride ratio in the growth medium was 30, the PHA yield was relatively high, but the volumetric productivity of PHA decreased. These results indicate that the C/N ratio is a very important factor affecting PHA production by microorganisms. Therefore, the optimal ratio of glucose to ammonium chloride was set to 20. The optimal C/N ratio for PHA production varies with the bacterial species. In Haloferax mediterranei, an extremely halophilic archaebacterium, the maximum PHA yield of 47.22% was achieved in glucose and NH4Cl medium with a C/N ratio of 35 [22]. When a C/N ratio of 8 was selected for Cupriavidus taiwanensis 184, PHA productivity was the highest [26].
Effect of NaCl on PHA biosynthesis by Halomonas sp. PR-1
The growth of marine bacteria and their PHA accumulation are influenced by salinity. To examine the effect of NaCl concentration on PHA biosynthesis by Halomonas sp. PR-1, the NaCl concentration in the medium was adjusted to 10, 30, 50, 70, 100, 120 and 150 g/L. The highest amount of PHA (3.25 g/L) was obtained at 30 g/L NaCl, with a PHA yield of 56.03% (Figure 5). The largest cell density (6.8 g/L) was obtained at 70 g/L NaCl, but with a low PHA yield of 37.5%. As the NaCl concentration increased beyond 70 g/L, both biomass and PHA decreased, indicating that high NaCl concentrations inhibit cell growth and PHA accumulation. This finding reflects the need to control the salinity of the medium to limit osmotic stress and its effect on PHA production [19]. The optimal NaCl concentration for PHA production by Halomonas sp. PR-1 was therefore 30 g/L, which differs from the values reported in previous studies of other halophilic bacteria: the optimal NaCl concentration for PHA production by the moderate halophile H. boliviensis was 45 g/L, and the marine bacterium Vibrio sp. BM-1 produced its maximum PHA at 18 g/L NaCl [27,28].
Conclusion
The halophilic bacterium Halomonas sp. PR-1, isolated from saltern soil, synthesised PHBV intracellularly from glucose. The proper C/N ratio was 20.
Figure 2. Effect of carbon source type on biomass and PHA biosynthesis in a batch culture of Halomonas sp. PR-1 grown in a mineral medium at a temperature of 30 ℃ and an agitation rate of 200 rpm for 70 h.
Figure 3. Effect of nitrogen source type on biomass and PHA biosynthesis in a batch culture of Halomonas sp. PR-1 grown in a mineral medium at a temperature of 30 ℃ and an agitation rate of 200 rpm for 70 h.
Figure 5. Effect of NaCl concentration on biomass and PHA biosynthesis in a batch culture of Halomonas sp. PR-1 grown in a mineral medium at a temperature of 30 ℃ and an agitation rate of 200 rpm for 70 h.
"Environmental Science",
"Biology",
"Chemistry"
] |
Stochastic simulation algorithms for Interacting Particle Systems
Interacting Particle Systems (IPSs) are used to model spatio-temporal stochastic systems in many disparate areas of science. We design an algorithmic framework that reduces IPS simulation to simulation of well-mixed Chemical Reaction Networks (CRNs). This framework minimizes the number of associated reaction channels and decouples the computational cost of the simulations from the size of the lattice. Decoupling allows our software to make use of a wide class of techniques typically reserved for well-mixed CRNs. We implement the direct stochastic simulation algorithm in the open source programming language Julia. We also apply our algorithms to several complex spatial stochastic phenomena, including a rock-paper-scissors game, cancer growth in response to immunotherapy, and lipid oxidation dynamics. Our approach aids in standardizing mathematical models and in generating hypotheses based on concrete mechanistic behavior across a wide range of observed spatial phenomena.
Introduction
Stochastic effects are crucial for accurately modeling evolutionary and biological processes such as tumor growth, desertification, disease spread, embryonic development, maintenance of species biodiversity, and pattern formation in general [1][2][3]. The associated spatial mathematical models are commonly analytically intractable. Fortunately, the advent of efficient computing has allowed simulation to serve as a common first approach to stochastic modeling. Non-spatial well-mixed versions of these models are often substituted due to their tractability and ease of use. Many celebrated simulation algorithms such as the exact Stochastic Simulation Algorithm (SSA), τ-leaping, and the next-reaction method have been developed and extensively modified to address a wide range of well-mixed stochastic phenomena [4]. However, well-mixed models fail to capture the appropriate statistics and pattern formation seen in the spatial setting. Phenomena due to volume exclusion and spatial dispersion cannot be accurately captured via well-mixed Chemical Reaction Network (CRN) simulation. One common approach to stochastic spatial simulation is to partition the spatial domain into well-mixed voxels. This approach utilizes a Reaction-Diffusion Master Equation (RDME) to model the movement of particles between voxels and the reaction of particles within the same voxel. While this method has substantial algorithmic development [5], it fails to take into account important volume exclusion effects and fine-grained spatial variation. In particular, volume exclusion has been shown to alter the mass-action kinetics observed in well-mixed models, instead producing fractal kinetics [6,7]. This deviation from mass-action kinetics increases depending on the regularity of the spatial structure in question; for our lattice-based models, we expect to see significant departures from the well-mixed case due to these volume-excluding effects [6].
Interacting Particle Systems (IPSs) provide an alternative to both well-mixed CRN and RDME based modeling. IPSs are a class of stochastic models with full spatial detail, tracking each particle's location on a lattice [8]. Interactions are assumed to be local, meaning particles must be adjacent to each other to interact. Notions of locality and adjacency are details that must be specified in a given model. For some typical reactions, see Table 1. Importantly, IPSs preserve volume exclusion, meaning at most one particle can be present on any given lattice site. Diffusive movement is typically modeled as particles undergoing random walks between sites, respecting exclusion. This is in contrast with more common RDME approaches that couple compartments obeying well-mixed dynamics through non-excluding Brownian motion. Recent significant advances in the basic RDME approach incorporate volume-excluding effects. These include the excluded volume reaction-diffusion master equation (vRDME) [9] and an approach that uses scaled particle theory to allow different-sized particles to have different diffusion rates between voxels [10]. Performing a full comparison between the different IPS and RDME approaches, volume-excluding and otherwise, is beyond the scope of this article. For a detailed modern review that explores the wide array of RDME and Brownian motion approaches to spatial stochastic simulation, we particularly recommend [11].
Example IPSs include the voter and contact processes as well as the classic Ising model from statistical mechanics [12]. These specific models have a large body of theoretical results from the mathematics community, specifically on their critical behavior. Unfortunately these results do not readily extend to multi-type processes and complicated spatial domains. Numerical approaches are computationally prohibitive, leaving direct simulation as the first and frequently only line of attack. The recent IPS simulation package Spatiocyte [13] and its numerous extensions address simulation biases generated under a lattice-based spatial structure [14] and parallelize the original simulation code [15]. We provide a more detailed description of the differences between our approach and Spatiocyte in the section "Availability and Future Directions."

Table 1. Example processes with reaction diagrams. Example reactions / Processes: on-site ∅ → A (immigration).

The current paper extends the classic n-fold simulation method, defined later, to IPSs [16]. Our extension enjoys three major advantages over previous approaches. First, we generate the minimum number of required reaction channels for a simulation, avoiding the combinatorial difficulties that arise from counting adjacent configurations of particles. Second, we provide efficient local updates after a reaction channel fires; thus only particles adjacent to a reaction are updated. Critically, this prevents the computational complexity of the simulations from scaling with the size of the lattice. Third, and perhaps most important, we separate the time and reaction sampling steps from the configuration update steps in the algorithm. This reduces our spatial process to the computational complexity of a CRN simulation, albeit with an additional complicated update. Accordingly, we can implement any CRN sampling algorithm for our spatial setting with little additional effort. Well-mixed CRN simulation is extensively developed [17][18][19][20][21][22][23]; therefore, spatial IPSs directly benefit from these prior innovations.
We build on the software package BioSimulator [24], written in the open source programming language Julia [25]. BioSimulator implements different algorithms for simulating IPSs, including the direct stochastic simulation algorithm (SSA) and versions of the next reaction method [26] and the sorting direct method [27]. Our software provides a simple, intuitive interface through which nonspecialists can quickly observe complex behaviors of spatial models with multiple interacting species. Summary statistics and particle count trajectories permit straightforward model checking for the proposed systems. Within this framework, modelers can determine which reactions and parameters are important for producing a certain desired behavior. A recent example of an IPS in action is an immunotherapy model for cancer treatment [28]. This complex model of tumor-immune system interactions illustrates which parameters generate the appropriate immune responses and spatial patterns.
Our software is primarily directed at systems biologists, cancer researchers, ecologists, evolutionary biologists, epidemiologists, and other scientists who are interested in the spatio-temporal effects of discrete actors. We anticipate that BioSimulator's ease of use and flexibility will encourage researchers unfamiliar with stochastic processes to investigate the stochastic and spatial features of their models via simulation. Finally, our software allows users to avoid tedious re-implementation of different algorithms in their simulation studies.
The remaining exposition is organized as follows. First we give a mathematical description of IPSs. We then enumerate the different sample classes for probabilistically equivalent particles using the species types and neighborhood configurations of the lattice or graph. This enumeration plus a description of the reaction rates across these sample classes provides a straightforward means of extending the well-mixed SSA to IPSs. Lastly, we summarize how our software implements each reaction, including updating the sample classes and reaction rates. This is followed by a series of examples of complex, multi-species spatial stochastic phenomena. We conclude with a brief description of the benefits of writing BioSimulator in the Julia programming language.
IPSs and pairwise reactions
An IPS models a collection of particles moving and reacting stochastically over some spatial domain. Particles are discrete entities that may model animals, proteins, wildfire patches, or cancer cells. Like well-mixed CRNs, these particles interact through a series of reaction channels. While stochastic CRNs assume every particle interacts uniformly with every other particle, IPSs restrict these interactions to neighboring particles. Each IPS has an associated graph describing the spatial domain over which the process evolves. Nodes on the graph are sites that a particle may occupy. Edges specify that two nodes are adjacent and hence liable to interact. Typically we restrict nodes to contain at most one particle at a time; we refer to this effect as volume exclusion.
Fortunately, embedding the IPSs on a graph allows us to restrict the reactions to being pairwise. We use the term pairwise instead of bimolecular deliberately; unimolecular reactions that produce two product particles require an open adjacent site due to the volume-excluding effect. For example, birth through binary fission is written in well-mixed reaction notation as A → A + A. On a graph with exclusion, birth requires an open adjacent site and becomes A + ∅ → A + A, where ∅ denotes an open site that becomes occupied by one of the offspring particles. This schema emphasizes volume exclusion since birth cannot occur when the A particle has no open adjacent sites. We classify reactions into two groups, on-site and pairwise. For a non-exhaustive list of examples, see Table 1; for a specific predator-prey example see Table 2. These two reaction types, on-site and pairwise, are useful in describing a number of biological applications, but like all models they have their limitations when the system's dynamics are complex. Higher-order reactions are reduced to pairwise interactions through the formation of intermediate complexes.
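To make this bookkeeping concrete, the sketch below shows one way the two reaction types might be encoded in Julia; the type and field names are purely illustrative and are not BioSimulator's actual interface.

```julia
# Illustrative encoding of IPS reactions (not BioSimulator's actual API).
# `nothing` stands for an open site ∅.

struct OnSiteReaction
    reactant::Symbol
    product::Union{Symbol,Nothing}
    rate::Float64
end

struct PairwiseReaction
    center::Symbol                    # the sampled "center" particle
    neighbor::Union{Symbol,Nothing}   # required adjacent particle (or open site)
    new_center::Union{Symbol,Nothing}
    new_neighbor::Union{Symbol,Nothing}
    rate::Float64
end

birth = PairwiseReaction(:A, nothing, :A, :A, 1.0)   # A + ∅ → A + A (fission with exclusion)
death = OnSiteReaction(:A, nothing, 0.1)             # A → ∅
```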
Markovian dynamics, reaction channels, and sample classes
Particles evolve on the graph according to standard Markovian dynamics where the waiting time to the next reaction is exponentially distributed [29]. If a particle can take part in multiple reactions, then its exponential waiting time has rate equal to the sum of the rates of each individual reaction under mass-action kinetics. Note that more complicated kinetics are allowed provided that we restrict the interactions to neighboring particles. Longer-range interactions are feasible in principle, though they introduce combinatorial complexity in enumerating the neighboring configurations. The current version of BioSimulator is restricted to mass-action kinetics for immediate neighbors.
The rate at which a particle undergoes reactions depends on both the species of the particle and the number and species of its neighboring particles. Although open sites are not collectively considered a species, open sites next to occupied sites play a negative role in volume exclusion. In order to draw parallels with well-mixed CRNs, we split each pairwise reaction into a series of reaction channels. Each pairwise reaction channel is associated with a center particle interacting with up to D neighboring particles of the appropriate type, where D is the number of adjacent neighbors. D takes the values 4, 6, and 8, respectively, on a square planar lattice, a hexagonal planar lattice, and a 3-dimensional cubic lattice. Therefore the total number of reaction channels is R = D × # pairwise reactions + # on-site reactions. See Fig 1 for a depiction of a predator-prey process involving foxes and rabbits on a hexagonal lattice and Table 3 for its associated reaction channels. For instance, when the third predation reaction channel fires, the simulation searches for a fox adjacent to exactly three rabbits to undergo the predation.
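As a concrete illustration of this count, the short sketch below tallies the channels for a fox–rabbit model on a hexagonal lattice (D = 6). The reaction list is a plausible reading of the predator–prey example described here, not a verbatim copy of the paper's tables.

```julia
# Sketch: counting reaction channels, R = D × (#pairwise) + (#on-site).
D = 6   # hexagonal planar lattice

pairwise = ["predation        F + R -> F + F",
            "rabbit birth     R + ∅ -> R + R",
            "rabbit migration R + ∅ -> ∅ + R",
            "fox migration    F + ∅ -> ∅ + F"]
onsite   = ["fox death    F -> ∅",
            "rabbit death R -> ∅"]

R = D * length(pairwise) + length(onsite)
println("reaction channels R = ", R)          # 6 × 4 + 2 = 26

# Each pairwise reaction contributes one channel per number m of adjacent reactants.
channels = [(rxn, m) for rxn in pairwise for m in 1:D]
append!(channels, [(rxn, 0) for rxn in onsite])
@assert length(channels) == R
```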
There are two approaches to sampling a reaction channel and associated particle. The more rudimentary approach is to scan through the particles in the lattice, sum the per-particle reaction rates, and select a particular particle to fire with probability proportional to its contribution to this sum [30]. A more sophisticated method is given by Bortz, Kalos, and Lebowitz under the n-fold way [16]. Here particles are grouped into classes such that all members of a given class take part in a specific reaction with the same rate. This explicitly forms a series of reaction channels for sampling using Markovian dynamics and avoids time-consuming searches of the lattice. We provide an extension to the n-fold way that decouples the sampling of the reaction channels, an inherently non-spatial maneuver, from the sampling of a particle to undergo the reaction. This in turn separates the spatial dependencies inherent in IPSs from the Markovian dynamics of the reaction channels. Thus, spatial correlations are handled during the update step. We do this via generating sample classes, which are collections of particles that can be sampled by one or more reaction channels. The sample classes are motivated by the observation that the exact configuration of neighboring particles does not matter for a given reaction channel firing. Only the number of neighboring particles of the appropriate type influences the reaction rate. Therefore each sample class contains particles of a specific species that are adjacent to a specific number of particles of a type that the particle under consideration can react with. This is best demonstrated by an example; see the sample classes associated with each reaction channel in Table 3. The rabbits in the predator-prey example are sorted into seven different sample classes, numbers 7 through 12 and 20, one for each central rabbit interacting with one to six open adjacent sites and a final class containing only rabbits. As an example for the pairwise reaction channels, sample class 9 contains rabbits adjacent to three open sites. This sample class is targeted by two different reaction channels, one for rabbit migration with three neighbors and one for rabbit reproduction with three neighbors. Likewise there is a sample class associated with each on-site reaction; sample class 20 contains every rabbit that can undergo death.

Table 3. Each initial pairwise reaction in Fig 1a is split into six reaction channels, one for each number of adjacent reactants. Each reaction channel has an associated per-particle rate and sample index. This sample index points to the collection of particles from which the reaction channel samples a reactant. The total rate of each reaction channel is equal to the per-animal rate times the number of animals in the associated sample class. Note that the rabbit reproduction and migration channels share the same sample indices because they share the same reactants.
Because multiple reaction channels may sample particles from the same sample class, the total number of sample classes is less than or equal to the number of reaction channels. Specifically, the number of sample classes is equal to D × # unique pairs of reactants + # unique on-site reactants. Using the list of reactants, we assign each reaction channel to its appropriate sample class. Multiple reaction channels will map to the same sample class when the reactions use the same pair of reactants. For example, rabbit migration and reproduction map to the same sample class, as shown in Table 3.
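The sketch below builds this channel-to-class map for the same hypothetical fox–rabbit reaction list; channels sharing a reactant pair (such as rabbit migration and reproduction) are routed to the same class, and for this reading of the model the counts come out as 26 channels and 20 sample classes, consistent with the class numbers quoted above. The data structures are illustrative only.

```julia
# Sketch: sample classes = D × (#unique reactant pairs) + (#unique on-site reactants).
D = 6
pairwise = [(:F, :R),       # predation: a fox next to rabbits
            (:R, :open),    # rabbit reproduction
            (:R, :open),    # rabbit migration (same reactant pair as reproduction)
            (:F, :open)]    # fox migration
onsite = [:F, :R]           # fox death, rabbit death

class_index = Dict{Tuple{Symbol,Symbol,Int},Int}()   # (center, partner, #partners) -> class id
channel_to_class = Int[]

for (center, partner) in pairwise, m in 1:D
    push!(channel_to_class, get!(class_index, (center, partner, m), length(class_index) + 1))
end
for center in onsite
    push!(channel_to_class, get!(class_index, (center, :none, 0), length(class_index) + 1))
end

println("reaction channels: ", length(channel_to_class))   # 26
println("sample classes:    ", length(class_index))        # 6 × 3 + 2 = 20
```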
Local updates
For a reaction channel to fire, a particle is sampled uniformly from the appropriate sample class. This particle, possibly along with a neighbor of the appropriate interacting species, then undergoes the reaction. At this point, the reaction rates must be updated to reflect the changing configuration. Again there are two possible methods for updating these rates [30]. The first, called a global update, scans the entire lattice, grouping particles into classes and calculating reaction rates. While straightforward, this is inefficient due to the fact that configuration changes take place over at most two neighboring particles. In contrast, we use a local update that changes the rates associated with particles immediately adjacent to or involved in the reaction. There is overhead associated with sorting the particles by class and with the local updates, but these improvements prevent the computational complexity of the simulation step from scaling with the number of particles.
We now expand upon how we perform local updates. Because each particle's behavior depends only on its adjacent particles, it suffices to enumerate these different neighborhood configurations. Specifically, we count the number of ways L species can be distributed across D neighboring sites. The standard stars-and-bars argument shows that the total number of configurations K is equal to the binomial coefficient K = C(D + L, L) = (D + L)!/(D! L!). Highly efficient algorithms exist to systematically enumerate all configurations [31]. For example, with D = 4 neighbors and L = 2 species, the configuration 1 + 1 + 2 corresponds to one open adjacent site, one adjacent particle of the first type, and two adjacent particles of the second type. See Fig 2a for an example predator-prey model using the neighborhood configurations. The naive approach to sampling particles for each reaction channel would be to group particles together by species and neighborhood configuration k ∈ {1, 2, . . ., K}. However, K grows rapidly with the number of species in the simulation. This scaling issue further motivates our previous discussion of the sample classes, which scale with the number of reaction channels. We therefore restrict the use of neighborhood configurations purely to updating the sample classes after a reaction has occurred. We will now expand on how the neighborhood configurations, sample classes, and reaction channels interact. Fig 2b provides an example of the local update procedure after a reaction channel has been chosen to fire. In this scenario, reaction channel 1 is firing, meaning the simulation searches for a fox F adjacent to exactly one rabbit R to undergo the predation reaction. Foxes that satisfy this condition are contained within sample class 1, as denoted in Table 3. Suppose the bolded fox in the first configuration of Fig 2b is sampled from sample class 1 to undergo the reaction. Since it has only one adjacent rabbit, also bolded, this rabbit is likewise sampled to be the target of the predation reaction. At this point the rabbit changes type to a fox, shifting from vermillion to cyan. The neighboring indices and sample classes of both particles and their adjacent species are now updated to reflect the rabbit changing type. For example, the sampled fox loses an adjacent rabbit and is removed from sample class 1 to reflect this change. Lastly we update the rates of the reaction channels that have changed in terms of numbers of associated particles.
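The configuration count quoted at the start of this subsection is easy to reproduce; the sketch below computes K and enumerates all the ways of distributing D neighbouring sites among L species plus the open state (the values of D and L are just examples).

```julia
# K = binomial(D + L, L) neighborhood configurations (stars and bars).
D, L = 4, 2
K = binomial(D + L, L)
println("K = ", K)    # 15

# Enumerate all ways of writing D as an ordered sum of L + 1 non-negative counts,
# e.g. [1, 1, 2] = one open site, one particle of species 1, two of species 2.
function configurations(total, nstates)
    nstates == 1 && return [[total]]
    out = Vector{Vector{Int}}()
    for first in 0:total, rest in configurations(total - first, nstates - 1)
        push!(out, [first; rest])
    end
    return out
end

configs = configurations(D, L + 1)
@assert length(configs) == K
foreach(println, configs)
```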
This approach has two major benefits. First, it decouples the size of the simulation from the size of the lattice. The most intensive operations required are those involving sampling a particle in a given sample class. These operations scale O(n) with the number of elements n in the sample class. Second, this decouples the sampling algorithm from the update step, allowing us to extend our approach to arbitrary simulation algorithms. It is worth noting that a significant portion of the simulation time is spent updating the sample classes of a particle after a reaction has occurred; this is an unavoidable consequence of the spatial structure imposed by the lattice. See the first table in the S1 File for an example breakdown of the run-time of different parts of the simulation step.
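To close this discussion, the following self-contained toy example illustrates the local update on a six-site ring: after a reaction changes the species at one site, only that site and its immediate neighbours are re-classified. The class key used here is a deliberately simplified stand-in for the full sample-class bookkeeping.

```julia
# Toy local update on a 6-site ring lattice (illustrative only).
species = [:R, :R, :F, :open, :R, :open]
neighbors(i) = (mod1(i - 1, 6), mod1(i + 1, 6))

# Simplified class key: (own species, number of adjacent rabbits).
classkey(i) = (species[i], count(j -> species[j] == :R, neighbors(i)))
classes = Dict(i => classkey(i) for i in eachindex(species))

function local_update!(i, newspecies)
    species[i] = newspecies                 # e.g. predation turned a rabbit into a fox
    for j in (i, neighbors(i)...)           # touch only the site and its neighbours
        classes[j] = classkey(j)
    end
end

local_update!(2, :F)
println(classes)    # sites 1, 2, 3 re-classified; sites 4–6 untouched
```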
Extension to arbitrary simulation algorithms
We begin by demonstrating how our IPS sampling method maps neatly onto the SSA. Let r = 1, 2, . . ., R denote the index of a reaction channel, and let λ_r denote the associated reaction rate for the r-th reaction channel. λ_0 = ∑_r λ_r is the total reaction rate for the process. Given U_1 and U_2, independent uniform [0, 1] random variables, we determine the time T to the next reaction and the next reaction channel j to fire by the conditions T = (1/λ_0) ln(1/U_1) and ∑_{r=1}^{j−1} λ_r < U_2 λ_0 ≤ ∑_{r=1}^{j} λ_r. Since the update step is kept separate from the time and reaction channel sampling steps, we are able to decouple the stochastic simulation algorithm of choice from the spatial considerations of the system. This applies to arbitrary simulation algorithms, including both exact and approximate methods. For example, τ-leaping proceeds exactly as described in [19]: the time increment is chosen to satisfy a leap condition, and a Poisson number of events from each reaction channel is chosen to fire. As an example, one might devise a leap condition by restricting the expected number of double-firing events on an individual particle, which necessarily depends on the total number of particles. However, the update step is no longer commutative as updates after a reaction must be carried out sequentially. We cannot sum the total changes to the sample classes in the same fashion as in the well-mixed case, because we must account for the possibility of intersecting particle paths. Thus an additional overhead is needed to randomly shuffle the order in which each reaction channel fires. In the case of exact SSAs, we can use a reaction-reaction dependency graph to restrict the reaction rates that are updated after each event to the subset that is dependent on the fired reaction channel. Unlike their counterparts for CRNs, a dependency between two reactions is captured through common sample classes; because our local update mechanism touches several sample classes at once, these dependencies are not uniquely determined by the participating particle types. As mentioned earlier, we will follow this manuscript with an extensive review of the many different available well-mixed simulation algorithms applied to IPSs.

Fig 2. Updating the neighborhood and sample indices after a reaction. Suppose the highlighted fox and rabbit sites undergo a predation event. The rabbit is replaced with a fox, and so the neighborhood and sample indices of the sites surrounding the former R require updating to reflect the new configuration. The sample indices of the new F will change as well, but its neighborhood index will not. This update procedure need only be done for and around sites that change species.
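A minimal sketch of the direct-method sampling step set out at the start of this section — drawing the waiting time T and the index j of the next channel to fire from two uniform variates — is given below; the rate vector is arbitrary and purely illustrative.

```julia
# Direct SSA sampling step (illustrative rates).
λ = [0.3, 1.2, 0.05, 0.7]     # per-channel rates λ_r
λ0 = sum(λ)

function next_event(λ, λ0)
    u1, u2 = rand(), rand()
    T = log(1 / u1) / λ0                         # exponential waiting time
    j = searchsortedfirst(cumsum(λ), u2 * λ0)    # smallest j with Σ_{r≤j} λ_r ≥ u2·λ0
    return T, j
end

T, j = next_event(λ, λ0)
println("fire channel ", j, " after waiting time ", T)
```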
Results
We provide simulation outputs generated by our software for four examples from models of varying complexity. Each demonstrates a phenomenon that is observed in the spatial IPS version of the process but not in the well-mixed CRN version. Jupyter notebooks [32] generating each image can be found at https://github.com/alanderos91/BioSimulator.jl. Animations for each example as well as tutorial notebooks explaining syntax, model construction, and simulation output are also provided through the link. For a list of reactions and parameters for each example, see the S1-S4 Tables in S1 File.
First, we have the predator-prey model previously described in Table 3. The output from the simulation is visualized in Fig 3. An animation of Fig 3 shows spiral wave patterns created by the prey migrating into unoccupied areas while being chased by predators; see the S1 File. The predators at the end of the wave die off, leaving space for the prey to migrate into and repeat the process. In this example, spatial dispersion promotes increased biodiversity. It prevents the large spike in predators that can lead to extinction or dramatic fluctuations in the number of predators and prey commonly seen in the CRN version of the model. Second, we present the three-species rock-paper-scissors game depicted in Fig 4. Each species undergoes a birth-death-migration process and has an additional predation reaction: rock preys on scissors, scissors prey on paper, and paper preys on rock. Spiral wave patterns are also observed in this animation. Spatial dispersal similarly maintains biodiversity. Migration at a high rate can destroy this diversity as the populations mix [3].
Third, we have a more complicated model of an immune system interacting with a growing tumor in Fig 5. The cancer cells undergo a standard birth-death-migration process. Immune cells migrate in from the barrier cells at a constant rate and destroy tumor cells on contact. This predation may produce a fibrotic cell that is weakly porous to immune cells, simultaneously blocking the spread of the cancer and the eradication of the cancer by the immune system. The formation of a protective shell of fibroblasts is not seen in the well-mixed or RDME cases due to the lack of volume exclusion. Our simulations recapture the immune-excluded response result shown in [28]. This model is useful for exploring potential barriers to tumor eradication during immuno-therapy.
Finally, we present a model of polyunsaturated fatty acid (PUFA) oxidation in lipid membranes. Certain PUFAs are susceptible to oxidation, which creates a kink in their long unsaturated hydrocarbon tails. As a result, a membrane with a significant number of oxidated PUFAs loses flexibility, which can lead to neurodegeneration and aging [33]. Replacing the affected hydrogen atoms with deuterium significantly reduces the rate of oxidation, acting as a vaccine of sorts against the infective nature of reactive oxygen species. Fig 6 shows the trail of depleted (kinked) PUFAs left behind by a reactive oxygen species jumping between unoxidated PUFAs. There exists a phase transition when the frequency of deuteration reaches approximately 20%. This transition drastically reduces the length of the depleted PUFA chains left by an oxidated species. This reduction has been observed in vitro through mortality experiments on yeast. It can be observed using our software as a consequence of the oxidated species becoming trapped by its own tail and the deuterated PUFAs.
Availability and future directions
We have presented a principle for algorithm design that stresses elegance, performance, reproducibility, and wide applicability. These benefits can be broken down into three larger points.
First, our design allows for model standardization based on interacting particle systems. Many in-silico studies of spatial particle processes are haphazard in their construction and do not follow continuous-time Markovian reaction dynamics. This limits the comparisons that can be made between models and creates barriers for new researchers looking to perform their own simulation studies. Adopting IPSs as a standard mathematical model enhances the most useful application of spatial stochastic simulation, namely generating hypotheses for given phenomena. Having a set of concrete, mechanistic rules with a straightforward probabilistic interpretation allows researchers to develop a hypothesis based on reaction dynamics that reproduce a given behavior in-silico and then take these dynamics back to an experimental setting for verification. The PUFA oxidation example serves as a demonstration of how hypotheses about an experimentally observed phenomenon can be tested using our software.
Second, our software is open-source and easily modifiable to individual needs. We have coded our implementation in Julia, a fast, expressive, and flexible open-source programming language designed for scientific computing [25]. Julia's ease of use facilitates extensions of our software to handle, for example, genealogies, particle tracking, and potentially long-range interactions between particles. The ease of use and model standardization taken together make further research done with our software easily reproducible and straightforward to document.
Lastly, our algorithm design allows us to apply arbitrary well-mixed stochastic simulation algorithms to spatial IPSs. This will be explored later in a review article that compares how each algorithm behaves in the spatial setting. Regardless, we can now apply a large swath of algorithms to spatial stochastic simulation without tedious re-implementation.
It is enlightening to contrast our algorithmic framework with previous work that has been published on different versions of stochastic IPS simulation. The most general spatial approach uses Green's Function Reaction Dynamics (GFRD) to allow particles to diffuse over a continuous space [34,35]. This approach does allow for volume exclusion but, due to the nature of Brownian motion, is numerically intensive. Both Spatiocyte and our approach address this problem by restricting particles to diffuse across a lattice [13,15,36]. The software package Spatiocyte is generally similar to what we present here; particles diffuse and react across a lattice obeying volume exclusion. While Spatiocyte is a very sophisticated package with many enhancements, it does not enjoy the advantages of our sample classes in allowing invocation of a range of different stochastic simulation algorithms. Specifically pSpatiocyte, the parallelized and most recent version of Spatiocyte, restricts sampling to Gillespie's direct method [15].
While we have provided a framework for performing IPS simulations, significant extensions are possible that reduce the bias created by imposing a lattice spatial structure in IPS simulations [14]. First, square lattices are biologically unrealistic and can bias reaction kinetics during simulation. Chew et al. [14] ameliorate this problem by imposing a hexagonal close-packed (hcp) lattice. Second, these authors derive a lattice spacing that minimizes the error caused by particles of different sizes. Finally, Chew et al. show how to use a species' diffusion coefficient to derive diffusion rates on the hcp lattice. Each of these extensions can be implemented in BioSimulator without changing the underlying algorithmic framework. These enhancements and code parallelization along the lines of [15] must await future versions of BioSimulator.
Our implementation of lattice simulation constitutes an extension of the BioSimulator package and is available along with the entire package on the GitHub site https://github.com/alanderos91/BioSimulator.jl. The code can be downloaded anonymously from the GitHub URL. The site includes an issue reporting service as well as documentation, an installation guide, example notebooks, build statuses, and code coverage. BioSimulator is licensed under the MIT "Expat" License and is OSI compliant.
"Computer Science"
] |
Theory of the radiation pressure on magneto-dielectric materials
We present a classical linear response theory for a magneto-dielectric material and determine the polariton dispersion relations. The electromagnetic field fluctuation spectra are obtained and polariton sum rules for their optical parameters are presented. The electromagnetic field for systems with multiple polariton branches is quantised in 3 dimensions and field operators are converted to 1-dimensional forms appropriate for parallel light beams. We show that the field-operator commutation relations agree with previous calculations that ignored polariton effects. The Abraham (kinetic) and Minkowski (canonical) momentum operators are introduced and their corresponding single-photon momenta are identified. The commutation relations of these and of their angular analogues support the identification, in particular, of the Minkowski momentum with the canonical momentum of the light. We exploit the Heaviside-Larmor symmetry of Maxwell's equations to obtain, very directly, the Einstein-Laub force density for action on a magneto-dielectric. The surface and bulk contributions to the radiation pressure are calculated for the passage of an optical pulse into a semi-infinite sample.
Introduction
At the heart of the problem of radiation pressure is the famous Abraham-Minkowski dilemma concerning the correct form of the electromagnetic momentum in a material medium [1,2,3,4]; a problem which, despite its longevity, continues to attract attention [5,6,7,8,9]. The resolution of this dilemma lies in the identification of the two momenta, due to Abraham and Minkowski, with the kinetic and canonical momenta of the light, respectively [10]. It serves to indicate, moreover, why the different momenta are apparent in different physical situations [9]. In this paper we shall be concerned with manifestations of optical momentum in radiation pressure on media and, in particular, on magneto-dielectric media.
Most existing work on the theory of radiation pressure, as reviewed in [5,6,7,8,9], treated non-magnetic materials but there has been recent progress in the partial determination of the effects of material magnetisation. The rise of interest in metamaterials and in particular those with negative refractive index adds urgency to addressing this point [11].
We consider magneto-dielectric materials in which both the electric permittivity ε and the magnetic permeability µ are isotropic functions of the angular frequency ω. The quantum theory of the electromagnetic field for such media is now well-developed, with analyses in the literature based on Green's functions [12,13] and also on Hopfield-like models based on coupling to a harmonic polarisation and magnetisation field [14,15,16,17]. The approach we shall adopt is to work with the elementary excitations within the medium, which are polaritons, the coupled modes of the electromagnetic field with the electric and magnetic resonances. We develop the associated classical linear-response theory and use this to determine the electric and magnetic field-fluctuation spectra. The corresponding quantised field operators are expressed in terms of the polariton creation and destruction operators and we show that these satisfy the required standard commutation relations. We use these to construct the Minkowski and Abraham momentum operators and also the corresponding angular momenta, and show how the commutators of these with the vector potential facilitate the interpretation of the rival momenta [10]. The known form of the magnetic Lorentz force is confirmed and used to explore the momentum transfer from light to a half-space sample and so provide the extension to permeable media of earlier work on dielectrics [18].
Classical theory
It suffices for our purposes to ignore the effects of absorption and so work with a real permittivity and permeability. These arise naturally from the dispersion relations in a Hopfield-type model in which the electromagnetic field is coupled to harmonic polarization and magnetisation fields associated with the host medium [19].
Linear response
The relative permittivity and permeability of a (lossless) magneto-dielectric at angular frequency ω may be written in the simple forms [20] where n_e and n_m are the numbers of electric and magnetic dipole resonances in the medium, associated with longitudinal and transverse frequencies indicated by the subscripts L and T. It follows that the values of the permittivity and permeability at zero and infinite frequencies are in accord with the generalized Lyddane-Sachs-Teller relation [21]. The phase refractive index is as usual [22]. We consider a magneto-dielectric medium in the absence of free charges and currents, so that our electric and magnetic fields are governed by Maxwell's equations in the form The electromagnetic fields described by these propagate as transverse plane-waves, with an evolution described by the complex factor exp(ik·r−iωt), with k = |k| and ω related by the appropriate dispersion relation. The four complex electric and magnetic fields are related by [22] where P and M are, respectively, the medium's polarization and magnetization. Henceforth, for the sake of brevity, we omit explicit reference to the frequency from our fields, permittivities and permeabilities. Now suppose that external stimuli p and m with the same frequency are applied parallel to E and H respectively, so that the relations (5) become It is convenient to introduce 6-component field and stimulus variables defined to be where Z_0 = √(µ_0/ε_0) is the usual free-space impedance and V is a suitably chosen sample volume. The energy of interaction between the field and the stimulus components is then The solutions for the field components obtained by substitution of (6) into Maxwell's equations can be written where the linear response matrix is with the common denominator the zeroes of which give the dispersion relation for the field. The elements of the matrix T agree with and extend partial results obtained previously for non-magnetic media [23,24].
Field fluctuations
The frequency and wave-vector fluctuation spectra at zero temperature are obtained from the Nyquist formula [23] where the required imaginary parts occur automatically in magneto-dielectric models that include damping mechanisms or, in the limit of zero damping, by the inclusion of a vanishingly small imaginary part in ω. In the latter case, which applies to our model, we find The total fluctuations are obtained by integration where the frequency ω and the three-dimensional (3D) wave vector k are taken as continuous and independent variables. Consider plane-wave propagation parallel to the z-axis in a sample of length L and cross-sectional area A. The frequency fluctuation spectra are then obtained from the one-dimensional (1D) version of (14) as The contributions of the component x- and y-polarised waves give with the use of (10) and (13). The E and H spectra, of course, have the usual relative magnitude for a magneto-dielectric material. The total fluctuations are obtained by integration of the spectra over ω as in (14).
Polariton modes
The combined excitation modes of the material dipoles and the electromagnetic field are the polaritons [25]. The transverse polaritons for a material of cubic symmetry, as assumed here, are twofold degenerate corresponding to the two independent polarisation directions. Their dispersion relation is determined by the poles in the linear response function, the components of which are given in (10). Setting the denominator (Den) to zero gives equation (17), which is the desired transverse dispersion relation. For our relative permittivity and permeability, there are n_e + n_m + 1 transverse polariton frequencies for each wave vector and these are independent of the direction of k. They jointly involve all of the resonant frequencies in the electric permittivity and the magnetic permeability. It is convenient to enumerate the transverse polariton branches by a discrete index u, and there is no overlap in frequency ω_ku between the different branches. There are also n_e + n_m longitudinal modes at frequencies ω_Le and ω_Lm, independent of the wave vector, but these will concern us no further.
Subsequent calculations involve the polariton group index η_g, defined in terms of the phase index η_p by η_g = d(ωη_p)/dω, for η_p = ck/ω. The two refractive indices satisfy a variety of sum rules over the polariton branches, given by [26,27,28]

∑_u (η_p/η_g)_{ku} = 1,   ∑_u [1/(η_p η_g)]_{ku} = 1,   (19)

and by [29]

∑_u [ε/(η_p η_g)]_{ku} = 1,   (20)

where all of the optical variables are evaluated at frequency ω_ku, so the sums run over every frequency ω_ku that is a solution of the dispersion relation (17) for a given wave vector k.
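These sum rules are easy to verify numerically. The sketch below does so for a hypothetical single-resonance, non-magnetic (µ = 1) Lorentz dielectric, for which the two transverse polariton branches are known in closed form; the model and parameter values are illustrative and are not taken from the text.

```julia
# Numerical check of the first two polariton sum rules for a single-resonance,
# non-magnetic Lorentz dielectric, ε(ω) = 1 + ωp²/(ωT² − ω²); units with c = 1.
ωT, ωp = 1.0, 0.7          # hypothetical resonance and coupling (plasma) frequencies
ε(ω)  = 1 + ωp^2 / (ωT^2 - ω^2)
dε(ω) = 2 * ωp^2 * ω / (ωT^2 - ω^2)^2        # dε/dω

# The two polariton branch frequencies at wave number k solve
# ω⁴ − ω²(ωT² + ωp² + k²) + k² ωT² = 0.
function branches(k)
    b = ωT^2 + ωp^2 + k^2
    d = sqrt(b^2 - 4 * k^2 * ωT^2)
    return sqrt((b - d) / 2), sqrt((b + d) / 2)
end

k = 1.3
ηp(ω) = k / ω                                            # phase index, on shell
ηg(ω) = sqrt(ε(ω)) + ω * dε(ω) / (2 * sqrt(ε(ω)))        # group index ηg = d(ω ηp)/dω

println(sum(1 / (ηp(ω) * ηg(ω)) for ω in branches(k)))   # ≈ 1
println(sum(ηp(ω) / ηg(ω)       for ω in branches(k)))   # ≈ 1
```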
Field quantisation
The polaritons are bosonic modes, quantised by standard methods in terms of creation and destruction operators with the commutation relation (21), in which the Dirac and Kronecker delta functions have their usual properties. The electromagnetic field quantisation derived previously for dispersive dielectric media [30] can be adapted for our magneto-dielectric medium by insertion of polariton branch labels and by conversion to SI units and continuous wave vectors. It is convenient to write all of the field operators in a common two-term form, in which the second term is the Hermitian conjugate of the first. The deduced form of the transverse (or Coulomb gauge) vector potential operator is then given by equation (23), expressed in terms of the polariton destruction operators. The unit polarisation vector e_k is assumed to be the same for all of the polariton branches at wave vector k. The two transverse polarisations for each wave vector are not shown explicitly here but they are important for the simplification of mode summations [31] and they need to be kept in mind. For non-magnetic materials, with µ = 1, the vector potential (23) agrees with equation (5) from [32] and also with equation (A12) in [33] when the summation over polariton branches is removed. The electric field, E, and magnetic flux density or induction, B, field operators are readily obtained from the vector potential as equations (25) and (26). The remaining field operators, the displacement field D and the magnetic field H, follow from the quantum equivalents of (5) as equations (27) and (28). The above four operator expressions are consistent with previous work [30]. The field quantisation for a magneto-dielectric medium has also been performed in a quantum-mechanical linear-response approach [12,13] that assumes arbitrary complex forms for the permittivity and permeability. The resulting expressions for the E and B operators contain the polariton denominator (11) in their transverse parts. A theory along these lines has been further developed by a more microscopic approach in which the explicit forms of the electric and magnetic susceptibilities are derived [14,15].
Field commutation relations
The validity of the field operators derived here (or at least their self-consistency) is demonstrated by the confirmation that they satisfy the required equal-time commutation relations [31,34]. Thus it follows from the forms of our operators, given above, that with recognition that the two exponents in the first step give the same contributions and a crucial use of the first sum rule from equation (19). The transverse delta function, δ⊥_ij(r − r′), in the final step [31,34,35] relies on the implicit summation of the contributions of the two transverse polarisations. The common time t is omitted from the field operators in the commutators here and subsequently as it does not appear in the final results. An alternative canonical commutator is that for the vector potential and the electric field, obtained with the use of a sum rule from equation (20). It follows from the operator form of the first relation in equation (5) that the vector potential and the polarisation commute. This is most satisfactory as the two operators are associated with properties of different physical entities: the electromagnetic field and the medium respectively. We note that the commutator in equation (29) has been given previously [10] for non-magnetic dielectric media. The remaining non-vanishing field commutators follow where we have used the sum rules given in equations (19) and (20). Here ǫ_ijh is the familiar permutation symbol [36,37] and the repeated index h is summed over the three cartesian coordinates x, y and z. The first of these results generalises a commutator previously derived for fields in vacuo [34]. Note that together these commutation relations require that the polarisation operator commutes with the magnetic field operator and that the magnetisation also commutes with the electric field operator, which is a physical consequence of the fact that the two pairs of operators in each commutator are associated with properties of different systems: the medium and the electromagnetic field.
Parallel beams: 3D to 1D conversion
For parallel light beams, it is sometimes convenient to work with field operators defined for dependence on a single spatial coordinate z with a one-dimensional wave vector k. Thus, for a beam of cross-sectional area A, conversion from 3D to 1D is achieved by making suitable substitutions, replacing the mode sums and quantisation volume by their 1D counterparts. The vector potential operator (23) for polarisation parallel to the x-axis is converted in this way to the form (35), where x̂ is the unit vector in the x-direction. The orthogonal degenerate polariton modes give a vector potential parallel to ŷ. The four field operators retain their forms given in equations (25) to (28) except for appropriate changes in the first square-root factors and vector directions. The electric and displacement fields are parallel to the x-axis, while the magnetic field and induction are parallel to the y-axis. The commutation relation for the creation and destruction operators retains the form given in equation (21) but with the 3D wave vector k replaced by the 1D wave number k. For µ = 1 and a single-resonance dielectric, the vector potential (35) agrees with equation (3.25) in [28] and equation (9) in [26]. The single-coordinate vector potential can also be expressed as an integral over frequency by means of the standard conversions. There are no overlaps in frequency between the different twofold-degenerate polariton branches and the summation over u is accordingly removed from the vector potential, which then involves a frequency-dependent destruction operator; this destruction operator and the associated creation operator satisfy the continuum commutation relation. The field operators obtained by conversion in this way of (25) to (28) again agree with previously defined expressions [28,26] for µ = 1. We can use these field operators to calculate the vacuum fluctuations, and we find exact agreement with the results obtained in section 2.2.
We shall make use of the 1D field operators derived here to calculate the force exerted by a photon on a magneto-dielectric medium, but first return to the full 3D description to investigate the electromagnetic momentum.
Minkowski and Abraham
The much debated Abraham-Minkowski dilemma is most simply stated as a question: which of two eminently plausible momentum densities, D × B and c −2 E × H, is the true or preferred value [7,9]? We amplify upon the answer given in [10], making special reference to the effects of a magneto-dielectric medium.
Two rival forms for the electromagnetic energy-momentum tensors were derived by Minkowski [1] and Abraham [2,3,4]. Their original formulations considered electromagnetic fields in moving bodies, but it suffices for our purposes to set the material velocity equal to zero. These results continue to hold for the magneto-dielectric media of interest to us. The two formulations differ principally in their expressions for the electromagnetic momentum and we consider here the respective quantised versions of these.
The Minkowski momentum is quite difficult to find in the paper [1], but it can be deduced from his expressions for other quantities. Its quantised form is represented by the operator Ĝ_M given in equation (42), where equations (26) and (27) have been used. The diagonal part of this momentum operator, together with the fact that k = η_p ω/c, leads us to identify a single-photon momentum, equation (44), with the individual polariton mode ku. This form of the Minkowski single-photon momentum has been derived previously [38] for a single-resonance non-magnetic material. There are good reasons, however, as we shall see below, not to assign this value to the Minkowski momentum. The corresponding form of the Abraham momentum operator, proportional to the Poynting vector for energy flow, is Ĝ_A, given in equation (45) [2,3,4], which differs from the corresponding expression for Ĝ_M(t) only in the two square-root factors that occur in the field operators (25) and (28). The diagonal part of this momentum operator gives the associated single-photon momentum, equation (47), which is the usual Abraham value.
The subscript M appears in brackets in equation (44) because there is an alternative form for the Minkowski momentum, given in equation (48); it is this form that is observed in experiments sufficiently accurate to distinguish between the phase and group refractive indices, particularly the submerged mirror measurements of Jones and Leslie [39]. Note that the difference between the two candidate momenta, p_(M) and p_M, can be very large and, in the case of media with a negative refractive index, the two can even point in opposite directions [40]. The resolution of the apparent conflict between the two forms of Minkowski momentum is discussed in section 4.3.
Vector potential-momentum commutators
It is instructive to evaluate the commutators of the two momentum operators with the vector potential. We can do this either by using the expressions for the fields in terms of the polariton creation and destruction operators or, more directly, by using the field commutation relations (29) and (30), together with the fact that the vector potential commutes with both the magnetic field and the induction. For the Minkowski momentum we find where we have used the summation convention so that repeated indices are summed over the three cartesian directions. It should also be noted that the remaining fields will have the same form of commutation relation with the Minkowski momentum, for example for the electric field we have This follows directly from the relationships between these fields and the vector potential together with the fact that the vector potential commutes with the polarisation and the magnetisation.
For the Abraham momentum we can exploit our calculation for the Minkowski momentum to find the commutator The second term, with its integration over the magnetisation, means that the commutator depends on both a field property, the vector potential, and a medium property, the magnetisation [10].
Interpretation
The identification of the Minkowski and Abraham photon momenta respectively with the electromagnetic canonical and kinetic momenta has been proposed in the past [38,41,42,43], but rigorously proven only more recently [7,10]. We need add here only a few brief remarks. The commutation relation (49) satisfied by the Minkowski momentum operator resembles the familiar canonical commutator of the particle momentum operator, where F(r) is an arbitrary vector function of position. Thus, analogously to its particle counterpart, the Minkowski momentum operator for the electromagnetic field generates a spatial translation, in this case of the vector potential and the electric and magnetic fields. The operator therefore indeed represents the canonical momentum of the field and it is the observed momentum in experiments that measure the displacement of a body embedded in a material host, as has been seen for a mirror immersed in a dielectric liquid [39], for the transfer of momentum to charge carriers in the photon drag effect [44] and in the recoil of an atom in a host gas [45]. The simple spatial derivative that occurs on the right of equation (49) shows that the measured single-photon momentum should have the Minkowski form in (48) and not that given in equation (44). The kinetic momentum of a material body is the simple product of its mass and velocity. The form of the Abraham single-photon momentum in equation (47) is verified by thought experiments of the Einstein-box variety [46,47]. These use the principle of uniform motion of the centre of mass-energy as a single-photon pulse passes through a transparent dielectric slab and they reliably produce the Abraham momentum. The calculations remain valid with no essential modifications when the slab is made from a magneto-dielectric material.
More detailed analyses of the coupled material and electromagnetic momenta [7,10] show that the total momentum is unique but that this can be formed as the sum of alternative material and electromagnetic field contributions, where the P̂ operators represent the collective momenta of all the electric and magnetic dipoles that constitute the medium. The total momentum is the same for both the canonical and kinetic varieties, both being conserved in the interactions between electromagnetic waves and material media.
Angular momentum
The electromagnetic field carries not only energy and linear momentum but also angular momentum, and it is natural to introduce angular momenta derived from the Minkowski and Abraham momenta in the forms Ĵ_M = ∫ dV r × (D̂ × B̂) and Ĵ_A = c⁻² ∫ dV r × (Ê × Ĥ). A careful analysis of a light beam carrying angular momentum entering a dielectric medium shows that, in contrast with the linear momentum, the Minkowski angular momentum is the same inside and outside the medium, but that the Abraham angular momentum is reduced in comparison to its free-space value by the product of the phase and group indices [48]. The analogue of the Einstein-box argument suggests that light carrying angular momentum entering a medium exerts a torque on it, inducing a rotation on propagation through it. An object embedded in the host, however, may be expected to experience the influence of the same angular momentum as in free space and, indeed, this is what is seen in experiment [49]. The canonical or Minkowski angular momentum should be expected to induce a rotation of the electromagnetic fields, which requires both a rotation of the coordinates and also of the direction of the field. The requirement to provide both of these transformations provides a stringent test of the identification of the Minkowski and canonical momenta. It is convenient first to rewrite the Minkowski linear momentum density in a new form, where we have used the first Maxwell equation, ∇_j D̂_j = 0. We can insert this form into our expression for Ĵ_M and, on performing an integration by parts and discarding a physically unimportant boundary term, we find an expression which, in the absence of the medium, reduces to the form obtained by Darwin [50,51]. It is tempting, even natural, to associate the two contributions in the integrand with the orbital and spin angular momentum components of the total angular momentum. This is indeed reasonable, but it should be noted that neither part alone is a true angular momentum [52,53,54]. It is instructive to consider the commutation relation with a single component of the angular momentum and so consider the operator θ·Ĵ_M. The orbital and spin parts rotate, as far as is possible given the constraints of transversality, the amplitude and direction of the potential [52,53,54]. The combination of both of these gives the required transformation. The commutator (57) gives the first-order rotation of the vector potential about an axis parallel to θ through the small angle θ, as the canonical angular momentum should.
Magnetic Lorentz force
It remains to determine the radiation pressure due to a light field on our magnetodielectric medium. To complete this task we adopt the method used previously of evaluating the force exerted by a single-photon plane-wave pulse normally incident on the medium [18,55]. Before we can complete the calculation, however, we need to determine a suitable form for the electromagnetic force density.
Heaviside-Larmor symmetry
Maxwell's equations in the absence of free charges and currents (4) exhibit the so-called Heaviside-Larmor symmetry [56,57]: the rotational duality transformations given by [22], valid for any value of ξ and involving Z_0, the impedance of free space, leave them invariant (an explicit standard form of this rotation is sketched after the footnotes below). It is readily verified that the four Maxwell equations are converted to the same set of equations in the primed fields. The various physical properties of the electromagnetic field must also be unchanged by the transformations [58]. We note, in particular, that the Minkowski and Abraham momentum operators, Ĝ_M and Ĝ_A, given in equations (42) and (45), and also the usual expressions for the electromagnetic field energy density and Poynting vector, are all invariant under the transformation (58). The standard form of the Lorentz force law in a non-magnetic dielectric, given in equation (59) and in [59], has terms proportional to the electric polarisation and its time derivative‡. For a magneto-dielectric medium we need to add the force due to the magnetisation and to do so in a manner that gives a force density that is invariant under the Heaviside-Larmor transformation. It follows from the transformation (58) that the polarisation and magnetisation are similarly transformed. The required form of the force density, satisfying the Heaviside-Larmor symmetry, is that of equation (61), and the invariance of this expression under the Heaviside-Larmor transformation is easily shown. This form of the force density was derived by Einstein and Laub over 100 years ago [60] and there have since been several independent re-derivations of it [61,62,63,64,65,66]. The final term in equation (61) has been given special attention in [67], where it is treated as a manifestation of the so-called 'hidden momentum' given by ε_0 µ_0 M × E. This line of thought has attracted a series of publications, with several listed on page 616 of [22]. Omission of the final term leads to difficulties, not the least of which is the identification of a momentum density that does not satisfy the Heaviside-Larmor symmetry [68,69]§. The above derivation of the Einstein-Laub force density shows how the magnetic terms follow from the polarisation terms by simple symmetry arguments and so provides a new perspective on the complete force density. It is interesting to note, moreover, that the force density appropriate for a dielectric medium (59) may be obtained by consideration of the action on the individual dipoles making up the medium [59] and that the most direct way to arrive at the Einstein-Laub force density is to obtain the magnetisation part by treating a collection of Gilbertian magnetic dipoles [76]. It is also shown in the original paper [60] that the classical form of the Abraham momentum and the classical force density satisfy the conservation condition of equation (62). The integration is taken over all space and the relation is valid for fields that vanish at infinity. This equality of the rate of change of the Abraham momentum of the light to minus the total Einstein-Laub force on the medium, or rate of change of material momentum, is as expected on physical grounds and further underlines the identification of the Abraham momentum with the kinetic momentum of the light [7,10]. Equation (62) is simply an expression of the conservation of total momentum.
‡ This is usually written in terms of the magnetic induction as [59]. In a magnetic medium, however, we need to distinguish between µ_0 H and B. That it should be the former that appears in the force density follows on consideration of the screening effect of surrounding magnetic dipoles in the medium, in much the same way as electric dipoles screen the electric field in a medium.
§ It is by no means straightforward to obtain the Einstein-Laub force density, in particular the final hidden-momentum related term, from the microscopic Lorentz force law and it has been suggested, for this reason, that the latter is incorrect [70]. A relativistic treatment, however, reveals that the required hidden-momentum contribution arises quite naturally from the Lorentz force law [71,72,73,74,75].
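For reference, a standard form of the duality rotation consistent with the description above (a reconstruction, not a verbatim copy of equation (58); the sign convention for ξ may differ) is

$$
\begin{aligned}
\mathbf{E}' &= \mathbf{E}\cos\xi + Z_0\mathbf{H}\sin\xi, &\qquad Z_0\mathbf{H}' &= Z_0\mathbf{H}\cos\xi - \mathbf{E}\sin\xi,\\
Z_0\mathbf{D}' &= Z_0\mathbf{D}\cos\xi + \mathbf{B}\sin\xi, &\qquad \mathbf{B}' &= \mathbf{B}\cos\xi - Z_0\mathbf{D}\sin\xi,
\end{aligned}
$$

with the polarisation and magnetisation then transforming as $c\,\mathbf{P}' = c\,\mathbf{P}\cos\xi + \mathbf{M}\sin\xi$ and $\mathbf{M}' = \mathbf{M}\cos\xi - c\,\mathbf{P}\sin\xi$. Each of the four source-free Maxwell equations maps into another member of the same set under this rotation, which is the invariance invoked in the text.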
Momentum transfer to a half-space magneto-dielectric
We consider a single-photon pulse normally incident from free space at z < 0 on the flat surface of a semi-infinite magneto-dielectric that fills the half space z > 0. The pulse is assumed to have a narrow range of frequencies centred on ω_0. Its amplitude transmission and reflection coefficients, the same as in classical theory, are given in [22], where all of the optical parameters depend on the frequency. The damping parameters in the imaginary parts of ε and µ, as they occur in R and T, are assumed negligible, but they should be sufficient for the attenuation length to give complete absorption of the pulse over its semi-infinite propagation distance in the medium. The momentum transfer to the medium as a whole is entirely determined by the free-space single-photon momenta before and after reflection of the pulse, as given in equation (65), with the small imaginary parts of the optical parameters again ignored. It is often useful to re-express the momentum transfer in terms of a single transmitted photon with energy ℏω_0 at z = 0+. This quantity is obtained by removal of the transmitted fraction of the pulse energy, given by the bracketed term on the right of (65); the result, equation (66), is in agreement with previous work [64,77]. The remainder of the section considers the separation of this total transfer of momentum to the magneto-dielectric into its surface and bulk contributions. Previous calculations [18,55] of the radiation pressure on a semi-infinite dielectric were made by evaluation of the Lorentz force on the material. This method is generalised here for a magneto-dielectric medium. For a transverse plane-wave pulse propagated parallel to the z-axis, with electric and magnetic fields parallel to the x and y axes respectively, the relevant component of the Einstein-Laub force from (61) has the quantised form of equation (67). The 1D field operators from section 3.3, given in (40), are appropriate here. The single-photon pulse is represented by a state built on the vacuum state |0⟩. This single-photon state is normalised if we require our function ξ(ω) to satisfy ∫ dω |ξ(ω)|² = 1.
The states, from their construction, satisfy the required single-photon relations. A simple choice for the pulse amplitude is made, with c/L ≪ ω_0. The narrowness of the spectrum of this pulse means that ω can often be set equal to ω_0. The radiation pressure on the magneto-dielectric is obtained by evaluation of the expectation value of the Einstein-Laub force (67) for the single-photon pulse. The normal-ordered part of the force operator, indicated by colons, is used to eliminate unwanted vacuum contributions. A calculation similar to that carried out in [18] for a dielectric medium gives a force density in which the real ε, µ, and their derivatives in the group velocity are evaluated at frequency ω_0, and their relatively small imaginary parts survive only in the attenuation length ℓ. It is readily verified by integration over t then z that this expression regenerates the total momentum transfer in equation (66). The force on the entire material at time t then follows, with appropriate approximations neglecting small terms in the exponents for the long attenuation-length regime with L ≪ ℓ. This expression, with error function and exponential contributions, has the same overall structure as found in other radiation pressure problems [18,55]. The two terms in the large bracket are respectively the bulk and surface contributions; their time-integrated values are again in agreement with equation (66), and the result also reduces to equation (5.21) of [18] for a non-magnetic material, where the momentum transfer can be written entirely in terms of the phase and group indices. The simple Abraham photon momentum again represents that available for transfer to the bulk material, once the transmitted part of the pulse has cleared the surface, while the more complicated surface momentum transfer depends on both ε and µ, together with their functional forms embodied in the phase and group refractive indices. We conclude this section by noting that the Heaviside-Larmor symmetry retains a presence in all of the forces and force densities obtained here, in that their forms are unchanged if we interchange, everywhere, the relative permittivity and permeability.
Conclusion
Much of the content of the paper presents the generalisations to magneto-dielectrics of results previously established for non-magnetic materials with µ = 1. We believe that the classical linear response theory in section 2 is novel; it allows, in particular, direct calculation of the electric and magnetic field-fluctuation spectra. The elementary excitations for a medium with multiple electric and magnetic resonances are the polaritons, whose phase and group velocities obey generalised sum rules for magnetodielectrics [29].
The quantum theory in section 3 introduces electromagnetic field operators based on the multiple-branch polariton creation and destruction operators. It is shown that the vacuum fluctuations of the quantised electric and magnetic fields reproduce the spectra obtained from our classical linear-response theory. The generalised field operators are shown to satisfy the same required canonical commutation relations as their simpler counterparts that hold in vacuum.
The Minkowski and Abraham electromagnetic momentum operators are introduced in section 4 and their associated single-photon momenta are identified.
The commutators of these momentum operators with the vector-potential operator, previously calculated, rely on the canonical commutation relations and, through these, on the polariton sum rules. An extension to angular momentum, both canonical and kinetic, is achieved by introducing angular momentum densities that are the cross products of the position with the Minkowski and Abraham momentum densities respectively.
Throughout our work we are guided by the Heaviside-Larmor symmetry between electric and magnetic fields. We show, in section 5, that application of this symmetry leads directly to the Einstein-Laub force density [60]. Our final result identifies the surface and bulk contributions in the force on a semi-infinite magneto-dielectric for the transmission of a single-photon pulse through its surface.
Modeling of ion dynamics in the inner geospace during enhanced magnetospheric activity
We investigate the effect of magnetic disturbances on the ring current buildup and the dynamics of the current systems in the inner geospace by means of numerical simulations of ion orbits during enhanced magnetospheric activity. For this purpose, we developed a particle-tracing model that solves for the ion motion in a dynamic geomagnetic field and an electric field due to convection, corotation and Faraday induction and which mimics reconfigurations typical to such events. The kinematic data of the test particles are used for analyzing the dependence of the system on the initial conditions, as well as for mapping the different ion species to the magnetospheric currents. Furthermore, an estimation of Dst is given in terms of the ensemble-averaged ring and tail currents. The presented model may serve as a tool in a Sun-to-Earth modeling chain of major solar eruptions, providing an estimation of the inner geospace response.
Introduction
During each solar cycle, sequences of eruptive flares are followed by coronal mass ejections and interplanetary shocks, some of which arrive near Earth.At times when the solar wind enters into Earth's magnetosphere, these solar eruptions modify the dynamic conditions in geospace and trigger space weather effects like geomagnetic storms and magnetospheric substorms (Daglis, 2004;Schwenn, 2006;Pulkkinen, 2007).Geomagnetic storms occur when the energy transfer from the Sun to geospace intensifies, as a result of the occurrence of magnetic reconnection at the dayside magnetopause during periods when the interplanetary magnetic field (IMF) has a strong and prolonged southward component (Akasofu, 1981).Magnetospheric substorms are caused by the variability in the north-south orientation of the IMF and evolve as energy loading-dissipation cycles (Baker et al., 1996).This kind of activity brings up a configuration change in the magnetosphere, including the ionosphere, and enhances the ring current and corresponding current systems flowing on the magnetopause, along the magnetotail and in the Birkeland regions (see in Pulkkinen et al., 2005).The associated dynamic processes evolve in a variety of timescales, from days for geomagnetic storms and hours for magnetospheric substorms down to minutes, or even seconds, for local plasma instabilities.
Solar eruptions are characterized as geoeffective when the magnetospheric response amounts to large electromagnetic perturbations, with severe consequences for the performance of ground-based power and communication networks as well as for spacecraft and weather satellites (Daglis, 2004).A major goal in space weather research is to predict the dynamic state of the geospace from measured solar wind and IMF data, so as to timely distinguish those events that are harmful.In this respect, the simulation of physical processes dominating extreme space weather conditions, such as magnetic reconnection, convective plasma transport and charged particle acceleration, is required (Daglis et al., 2009).For numerous events, magnetospheric activity can be described by means of a few geomagnetic indices, like Kp and Dst, which can in principle be derived from solar wind and IMF values.However, these indices include systematic and/or statistical errors which limit the capability to establish consistent (causal) correlations (Rostoker, 1972).Therefore, more detailed, large-scale numerical solvers of the coupled solar wind-magnetosphere system may be employed; such models have advanced with the increased availability of computer resources.
There are cases where global fluid and magnetohydrodynamic (MHD) simulations reproduce the observed changes in the magnetic topology to quite good accuracy (Moretto et al., 2006;Honkonen et al., 2013).However, their use has limitations due to missing physics for the description of non-collisional processes in a multi-species plasma.Kinetic solvers and test-particle simulations, with a description of the plasma motions in adjustable physics detail, increase the reliability at smaller scales by properly addressing effects like thermal instabilities and anomalous transport (Buneman et al., 1992;Khazanov, 2010).The drawback of microscopic models for global simulations is the large demand on computer resources; to cope with this, it is customary to separately model each source playing an important role in the dynamics: the ring current, the near-Earth tail currents, the radiation belts and the magnetopause.In this frame, the origin and transport of ring current and radiation belt particles during storm time, their interaction with the tail current, the escape of high-energy particles and the dynamic connection with the substorm phases have been addressed through the analysis of observations and dedicated numerical simulations (Chen and Schulz, 1996).
For the description of the ring current dynamics, the plasma current distributions at the near-Earth region have been modeled in terms of the bounce-averaged, drift-kinetic equation.Fok et al. (2001) described the particle drifts in the storm-time field in terms of the initial and boundary particle distributions, with the coefficients in the kinetic equation calculated from the Hamiltonian description of motion.Jordanova et al. (2010) modeled the radially diffusive plasma dynamics in self-consistence with the fields by coupling the kinetic equation with a 3-D, force-balanced magnetic equilibrium code and a MHD solver for the convection electric field.In another self-consistent treatment, Lemon et al. (2004) employed a collisionless kinetic code together with a model for the electrostatic potential, taking into account the current closure with the ionosphere.The specific model has been coupled to the code of Fok et al., where it serves as the solver for the electric field.
The method we adopt in this work is to directly follow the 3-D particle trajectories under the effect of the electric and magnetic forces during the dynamic phases of the disturbance (see, for example, Delcourt et al., 1990; Ganushkina and Pulkkinen, 2002; Ebihara et al., 2003). An advantage of studying the individual particle motions is the physics insight gained, as well as the statistics built from ensembles of particles. In such models, the Lorentz equation of motion is solved, either in its full form or reduced in terms of the guiding-center approximation, and the driving forces are (as above) the dynamic magnetic field coming from the superposition of the Earth's terrestrial magnet with the fields generated by the magnetospheric current sources (Tsyganenko, 2013), and the electric field due to large-scale plasma convection and corotation with the Earth (Volland, 1973). An important factor, however, is the modeling of the electric field component induced by the time variation in the magnetic field. The specific field is involved in the strong acceleration of charged particles which is observed during geomagnetic disturbances; however, relatively few test-particle-based studies have been performed in this direction (like, for example, Delcourt, 2002).
It becomes apparent that the modeling of the near-Earth plasma response to geoeffective solar events is of major importance for the improvement of space weather prediction.The model requirements are a consistent description of the geomagnetic and electric fields, the computation of the Sun-driven plasma dynamics and the assessment of the numerical data for the estimation of parameters related to space weather, including benchmarks against ground-based and satellite observations.In this paper, we present results from the simulation of the electric and magnetic fields and of the energetic particles in the inner magnetosphere, focusing on the ring current buildup and decay when disturbances are occurring.The physics of our model for the forces driving the plasma dynamics are cast in a form suitable for use with 3-D test-particle codes.Provided that there are suitable simulation data, a statistical evaluation for the dynamics of the different ion types is performed over the initial conditions, and an ensemble-averaged estimation of the Dst index stemming from the ring and tail current populations is given.
The structure of the paper is as follows: in Sect.2, the physics model for the geomagnetic and electric fields is explained, accompanied by field-line tracing and equipotential contour simulations, and, following that, we describe the main aspects of the particle-tracing model.In Sect. 3 we present the numerical results: the different types of ion motion found in the disturbed magnetosphere, the statistical analysis of the particle dynamics and the estimation of the Dst index.Finally, in the concluding section, the merits of this work are summarized, the limitations of our model are discussed and further studies are proposed.
Geomagnetic field
The magnetic field in geospace is expressed as the sum of two contributions: the first one is from the Earth's terrestrial field, whereas the second comes from the external field generated by the electric currents flowing inside the magnetosphere (including the magnetopause). The Earth's magnetic field is well approximated as that of a tilted dipole magnet with inverse polarity (Parks, 1991). In geocentric solar magnetospheric (GSM) Cartesian coordinates, the dipole field B_ter is written in terms of R_E = 6378 km, the Earth radius, B_E = 31 000 nT, the value of the magnetic field on the surface, the direction vector r̂ = r/r and the tilt angle θ_t. We note here that, since B_ter varies very slowly (through B_E and θ_t) in comparison to the solar activity and its geomagnetic response, it may be considered time-independent in the context of our computations.
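As an illustration of the terrestrial contribution, a minimal sketch (not the paper's code; the orientation and sign convention of the "inverse polarity" should be checked against the paper's own expression) of the tilted point-dipole field in GSM coordinates is:

```python
import numpy as np

R_E = 6378e3        # Earth radius [m]
B_E = 31000e-9      # surface equatorial field [T]

def b_ter(r_vec, theta_t_deg=11.5):
    """Tilted point-dipole field [T] at GSM position r_vec [m]."""
    theta_t = np.radians(theta_t_deg)
    # dipole moment unit vector, tilted in the GSM x-z plane; the minus signs
    # encode the inverse polarity (moment pointing towards the southern hemisphere)
    m_hat = np.array([-np.sin(theta_t), 0.0, -np.cos(theta_t)])
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    # standard dipole expression: B = B_E (R_E/r)^3 [3 (m_hat . r_hat) r_hat - m_hat]
    return B_E * (R_E / r) ** 3 * (3.0 * np.dot(m_hat, r_hat) * r_hat - m_hat)

# example: field strength at 5 R_E on the midnight meridian, in nT
print(np.linalg.norm(b_ter(np.array([-5 * R_E, 0.0, 0.0]))) * 1e9)
```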
The second component of the geomagnetic field, denoted by B ext , is generated by the electric currents which result from the interaction of the magnetospheric plasma with the solar wind (Parks, 1991).The most important components are (i) The magnetopause current, which is controlled by the solar wind's dynamic pressure P dyn , (ii) the magnetotail currents, which extend from 10 R E to well beyond 100 R E , and (iii) the ring current, flowing around the Earth inside a toroidal band approximately within [3,9] R E .The external field's spatial dependence is defined by the distribution of the current sources, and its time dependence by the evolution of these sources during quiet time and the events.
The mainstream models for the static part of B, namely the Tsyganenko algorithms T89, T96 and TS05 (see Tsyganenko, 2013, and references therein), follow a data-based approach towards correlation with parameters of geomagnetic activity like P_dyn, the IMF vector B, the planetary index Kp and the disturbance storm-time index Dst (Rostoker, 1972). In T89, a physics-based description of the magnetospheric currents and the corresponding vector potential was introduced; the T96 model improved T89 in the description of the magnetopause geometry and the equatorial tail physics, whereas TS05 upgraded T96 with the inclusion of storm and substorm dynamics. All models require specific parameter values at input: in T89, Kp and θ_t are to be given; in T96, apart from θ_t, the inputs are P_dyn, Dst and B_y, B_z, whereas TS05 requires the input of T96 plus six additional parameters, S_1, S_2, ..., S_6, related to the storm-time effects.
For the visualization of the magnetic field, one employs the standard field-line tracing technique (Parks, 1991).In 2-D, the field-line map is a clear picture only on the x − z planes, as a result from the existing symmetries of the magnetosphere's geometry in the GSM system: the x axis is the line connecting Sun and Earth, whereas the z axis may always be placed on the magnetic dipole axis.In Fig. 1 we show the map of the total magnetic field on the x − z plane defined by the meridian y = 0, as computed with the T89 model.We present two cases with different values of the Kp index, one relevant to quiet time (Kp = 1) and one reflecting storm-time conditions (Kp = 5), for the typical inclination of the Earth's dipole (θ t = 11.5 • ).The typical properties of the geomagnetic field appear in the results; for example, in the second case, where Kp is larger, the field lines are more dense close to the Equator and towards Earth due to the rise in the convection intensity.
For proper application of the Tsyganenko models, the role of the differences between the models and the properties of the computed physics, especially in strongly disturbed cases, has to be investigated.The benchmark of these models against observations is an issue that has been addressed by a number of authors.Woodfield et al. (2007) performed a comparison of T89 and T96 with magnetic field data from the Cluster mission, and the results have shown noticeable deviations only in the outer ring current region on the nightside and near the cusp.Boschini et al. (2013) utilized T96 and TS05 in a geomagnetic backtracing code and benchmarked against AMS-02 data, finding significant differences near Earth only for storm conditions (Kp > 5 or P dyn > 3 nPa).Also, McCollough et al. (2008) performed a detailed statistical comparison of all established models and, based on the results, the use of models that include magnetospheric asymmetry is encouraged when Kp > 4 in regions including the dayside and the dawn-dusk neighborhood.
A comparison of T89, T96 and TS05 is presented in Fig. 2. The reference case involves a strongly perturbed, non-tilted dipole, and the exact input for each model is given in Table 1. The differences in the computation of B by the different models are quantified in terms of the relative deviation. Within the limits set by the differences in the input of T89 with respect to the other models, the quantitative comparison does not exhibit very large deviations in most of the region of interest (approximately within [−25R_E, 10R_E] along x and [−15R_E, 15R_E] along z, and always inside the magnetopause boundary). Noticeable deviations, ranging from 50 % to 150 %, appear in the outer region on the nightside, near the cusp on the dayside and in the far-Earth magnetopause, which is in agreement with the benchmarks presented above. Consequently, the specific choice of field model is not expected to play a crucial role in the test-particle results.
The dynamic part of the magnetic field is determined by the modification of the geomagnetic parameters in time. With the introduction of the vector G = [G_j], whose components are the input parameters required for each Tsyganenko model (e.g., G = [θ_t, Kp] for T89), the partial time derivative of the external field follows from the chain rule, ∂B_ext/∂t = Σ_j (∂B_ext/∂G_j)(dG_j/dt), where the functions G_j(t) may be specified analytically, in terms of an approximation by continuous functions, or directly as a time series of observations (the derivatives then being computed as discrete-time finite differences). In principle, the terms ∂B_ext/∂G_j are not available in analytic form; these could be discretized and computed by repetitive usage of the numerical field model for the parameter values of interest, but, in this fashion, the computing cost heavily increases.
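The discretized evaluation mentioned above can be sketched as follows (a hedged illustration of the chain rule, not the implementation used in the paper; b_ext is a placeholder callable standing in for T89/T96/TS05):

```python
import numpy as np

def db_ext_dt(b_ext, r_vec, G, dG_dt, dG=None):
    """Finite-difference estimate of dB_ext/dt = sum_j (dB_ext/dG_j) (dG_j/dt).

    b_ext : callable b_ext(r_vec, G) -> 3-vector (placeholder for a field model)
    G     : model input parameters, e.g. [theta_t, Kp] for T89
    dG_dt : time derivatives of the parameters (from an analytic profile or a
            finite-differenced time series of observations)
    """
    G = np.asarray(G, dtype=float)
    dG = dG if dG is not None else 1e-3 * np.maximum(np.abs(G), 1.0)
    dBdt = np.zeros(3)
    for j in range(len(G)):
        Gp, Gm = G.copy(), G.copy()
        Gp[j] += dG[j]
        Gm[j] -= dG[j]
        # central difference in the j-th parameter, chained with dG_j/dt
        dBdt += (b_ext(r_vec, Gp) - b_ext(r_vec, Gm)) / (2.0 * dG[j]) * dG_dt[j]
    return dBdt
```

Each evaluation requires 2·len(G) calls to the field model, which is the computing-cost penalty noted above and the motivation for the profile-function approximation that follows.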
In order to simplify the computation, these terms are approximated by the variation in B_ext within the start and stop times of the event, t_i and t_f, and by a normalized profile function F_B(t). In this frame, the total field is expressed as the sum of the static field and this approximated time-dependent part. In our model, an event starts at time t_i, stops at t_f and, during this interval, it evolves in phases described by the function F_B and the values of G at t_i, t_f. F_B(t) is defined on the basis of the properties of the magnetic field, as observed in measurements. Here, we refer to events which have an initial "growth" period where the field strength is increasing to high values, followed by a (shorter) "relaxation" phase where B returns to its previous levels (Metallinou, 2008). To this end, F_B is chosen as a piecewise expression in which a product of Heaviside step functions selects the event interval, t_g is the time stamp of the growth phase and the w_j are fitting coefficients. In Fig. 3 we illustrate F_B(t) for a substorm event with time stamps t_i = 0 min, t_g = 30 min, and t_f = t_r = 35 min, and five fitting parameters, w_1 = w_2 = 0, w_3 = 10, w_4 = −15, and w_5 = 6.
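Only as a qualitative illustration (the exact functional form of F_B and the role of the fitted coefficients w_j are not reproduced here), a normalized growth-relaxation profile built from Heaviside windows and the time stamps quoted above might be sketched as:

```python
import numpy as np

def window(t, t_a, t_b):
    # product of Heaviside step functions selecting t_a <= t < t_b
    return np.heaviside(t - t_a, 1.0) * np.heaviside(t_b - t, 0.0)

def f_b(t, t_i=0.0, t_g=30.0, t_r=35.0):
    """Stand-in for F_B(t) (times in minutes): linear rise to 1 over the growth
    phase, linear return to 0 over the shorter relaxation phase, zero otherwise.
    The paper's F_B uses fitted coefficients w_j instead of this simple ramp."""
    growth = (t - t_i) / (t_g - t_i) * window(t, t_i, t_g)
    relax = (t_r - t) / (t_r - t_g) * window(t, t_g, t_r)
    return growth + relax
```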
Electric field
The electric field is divided into three components (Russell, 2000): the first one is due to plasma convection in the magnetosphere, the second stems from near-Earth plasma corotation, and the third one is generated by the dynamic variation in the geomagnetic field during the events. The slow timescale of the convection and corotation processes in comparison to the overall plasma dynamics allows for their consideration as electrostatic. A variety of models have been developed for the calculation of the electrostatic potential Φ_cc that generates the convection-corotation field: (i) the Volland-Stern-Maynard-Chen (VSMC) model (Volland, 1973; Stern, 1975), based on an empirical dawn-dusk potential distribution with Kp dependence and magnetopause shielding; (ii) the E5D model, a transpolar, Kp-driven analytical approximation of the convection potential (McIlwain, 1986); (iii) the Boyle-Reiff-Hairston (BRH) model, which describes the convection field with a polar-cap potential function driven by the solar wind and the IMF (Boyle et al., 1997); and (iv) the Weimer (WM) model, which is derived from a combination of low-altitude measurements of the convection velocities at high latitudes (Weimer, 2005).
The effect of the model differences on the computed dynamics in the transition to stormy conditions must again be examined. In Khazanov et al. (2004), against the background of plasma kinetic simulations, the BRH and VSMC models were compared and the results did not yield measurable differences, except in regions near the magnetopause and the distant tail. In the same manner, in the context of MHD plasmapause simulations (Pierrard et al., 2008), the comparisons between the VSMC, E5D and WM models yielded a similar picture for the near-Earth convection. An indirect benchmark of the VSMC and E5D models was performed by using these, together with the Tsyganenko models, as input to gyro-particle simulations (Woelffle et al., 2011). It was shown that the differences in the magnetic field do not influence the computation as much as the ones in the electric field, which was highlighted by significant variations in the particle trajectory shape and the energy variation during transport. One concludes that the choice of model should be made according to the performance under conditions implied by the event under study; for example, VSMC offers a good global description of transport in the plasma sheet, whereas E5D predicts the magnetopause position better.
From the aforementioned tools we choose to employ the VSMC model, which combines sufficient accuracy in the physics description with simplicity in the computer implementation; the corresponding potential is given in Eq. (5). In Eq. (5), ω_E = 2π/24 rad h^−1 is Earth's rotation frequency; γ is the magnetopause shielding factor; and υ_1, υ_2, and υ_3 are constant parameters, which are calculated in terms of data fitting over magnetic field measurements in the inner tail region. On the right-hand side of Eq. (5), the leftmost term represents the potential for the convection field, in which the fraction involving Kp determines the field intensity, whereas the rightmost term is the potential generating the corotation field.
Vector fields coming from a scalar potential are represented in terms of their equipotential (contour) surfaces. The contour surfaces of Φ_cc are calculated by solving Eq. (5) with respect to the coordinates on a certain potential level, i.e. Φ_cc(x, y, z) = Φ_l. In Fig. 4 we perform a 2-D visualization of the contour lines on the equatorial plane (z = 0), for geospace-related parameter values γ = 2, υ_1 = 0.045 kV m^−2, υ_2 = 0.0093 and υ_3 = −0.159, in two cases of solar activity level with different intensity: (a) quiet time (Kp = 1) and (b) disturbed (Kp = 5). The main physics properties of the convection field are well reproduced by the model, like, for example, the global increase in the field values as Kp increases.
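A sketch of the equatorial convection-corotation potential in the standard VSMC arrangement, using the parameter values quoted above (the exact grouping of terms in Eq. (5) may differ, so the expression below is an assumption rather than a transcription):

```python
import numpy as np

R_E = 6378e3                            # m
B_E = 31000e-9                          # T
omega_E = 2.0 * np.pi / (24 * 3600.0)   # Earth's rotation frequency [rad/s]

def phi_cc(r, phi, Kp, gamma=2, u1=0.045, u2=0.0093, u3=-0.159):
    """Equatorial convection-corotation potential [kV].

    r   : geocentric distance [m];  phi : azimuthal (MLT) angle [rad]
    The Kp-dependent amplitude follows the Maynard-Chen fit with the quoted
    constants; the dawn-dusk dependence enters through sin(phi)."""
    A = u1 / (1.0 + u3 * Kp + u2 * Kp**2) ** 3            # convection amplitude
    conv = -A * (r / R_E) ** gamma * np.sin(phi)          # shielded convection term
    corot = -omega_E * B_E * R_E**3 / r / 1e3             # corotation term, V -> kV
    return conv + corot

# equipotential levels on the equatorial plane for quiet (Kp = 1) conditions
r, phi = np.meshgrid(np.linspace(2 * R_E, 15 * R_E, 200),
                     np.linspace(0.0, 2.0 * np.pi, 200))
potential = phi_cc(r, phi, Kp=1)
```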
The role of the electric field component induced by the dynamic variation in the magnetic field is very important in properly modeling the solar-driven perturbations. This is due to the fact that it has a short space/timescale, which is effective in accelerating ions to very high energies (as observed during storms and substorms), whereas the convection process forms a distribution of plasma currents of comparatively low energy. In this context, the total electric field is expressed in terms of the potentials as E = −∇Φ_cc − ∂A_ext/∂t (Eq. 6). According to Eq. (6), the calculation requires knowledge of the vector potential A_ext, which is the generating function of B_ext. It is known, however, that, given an arbitrary magnetic field, an analytic solution for the vector potential is, in most cases, not possible. T89 involves simplifications in the description of the plasma current sources which allow the analytic calculation of A_ext, whereas the later models are based on a more complicated formulation, including spherical harmonic expansion and integrals of special functions, and thus cannot fall in this category.
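A minimal sketch of assembling the total field of Eq. (6) by finite differences, with potential(r) and a_ext(r, t) as placeholder callables for the convection-corotation potential and the external vector potential:

```python
import numpy as np

def e_total(r_vec, t, potential, a_ext, dx=1e4, dt=1.0):
    """E = -grad(potential) - dA_ext/dt, evaluated with central differences (SI units)."""
    grad_phi = np.array([(potential(r_vec + dx * e) - potential(r_vec - dx * e)) / (2 * dx)
                         for e in np.eye(3)])
    dA_dt = (a_ext(r_vec, t + dt) - a_ext(r_vec, t - dt)) / (2 * dt)
    return -grad_phi - dA_dt
```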
Particle tracing
The test-particle model computes the near-Earth ion dynamics during the geomagnetic disturbance by following the 3-D trajectories under the effect of the associated electric and magnetic fields. The particle trajectory is traced by solving numerically the Lorentz equation including the gravitational force, m dv/dt = q (E + v × B) + m g (Eq. 7). In Eq. (7), g = g_E R_E² r̂/r² is the gravitational acceleration (g_E = 9.81 m s^−2, its value on Earth's surface) and m and q are the particle mass and electric charge. For electrons, m_e = 9.11 × 10^−31 kg and q_e = −1.6 × 10^−19 C, while for an ion of atomic mass A_i and ionization state s_i, m_i = 1837 A_i m_e and q_i = s_i |q_e|.
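A minimal sketch of integrating Eq. (7) with a generic ODE solver; e_field(r, t) and b_field(r, t) are placeholders for the field models described above:

```python
import numpy as np
from scipy.integrate import solve_ivp

R_E, g_E = 6378e3, 9.81
m_e, q_e = 9.11e-31, -1.6e-19

def lorentz_rhs(t, y, q, m, e_field, b_field):
    """Right-hand side of Eq. (7); y = [x, y, z, vx, vy, vz] in SI units."""
    r, v = y[:3], y[3:]
    g = -g_E * R_E**2 * r / np.linalg.norm(r) ** 3          # gravity, directed towards Earth
    a = (q / m) * (e_field(r, t) + np.cross(v, b_field(r, t))) + g
    return np.concatenate([v, a])

# example set-up for a singly charged O+ ion (A_i = 16, s_i = 1)
m_i, q_i = 16 * 1837 * m_e, abs(q_e)
# sol = solve_ivp(lorentz_rhs, (0.0, 3600.0), y0,
#                 args=(q_i, m_i, e_field, b_field), rtol=1e-8, max_step=1e-2)
```

The time step has to resolve the gyration, which for heavy ions near Earth is of the order of seconds or less, hence the small max_step in the commented call.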
The particle motions may also be evaluated in terms of a reduction of the full model, depending on the validity of the guiding-center (GC) approximation over the simulated region. The GC trajectory describes the overall motion well in cases where the electric/magnetic field variations remain sufficiently small over each revolution (Parks, 1991). This translates to relations (Eq. 8) between the Larmor radius ρ_L and the rotation frequency f_L and the spatiotemporal scales of E and B. Equation (8) suggests that the GC approach is invalid when the field-line curvature is comparable to the Larmor radius, as well as for heavy ions that exhibit large periods of gyration.
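The conditions of Eq. (8) can be checked numerically along an orbit; a rough sketch (assuming a b_field(r, t) callable) compares the Larmor radius with the local gradient scale of B and the gyro-period with the field's timescale:

```python
import numpy as np

def gc_valid(r_vec, v_perp, q, m, b_field, t=0.0, dx=1e5, dt=60.0, eps=0.1):
    """True if the guiding-centre conditions are (roughly) satisfied at r_vec."""
    B0 = np.linalg.norm(b_field(r_vec, t))
    rho_L = m * v_perp / (abs(q) * B0)                        # Larmor radius
    T_L = 2.0 * np.pi * m / (abs(q) * B0)                     # gyro-period
    gradB = np.array([(np.linalg.norm(b_field(r_vec + dx * e, t)) - B0) / dx
                      for e in np.eye(3)])
    L_B = B0 / (np.linalg.norm(gradB) + 1e-30)                # gradient scale length of B
    tau_B = B0 * dt / (abs(np.linalg.norm(b_field(r_vec, t + dt)) - B0) + 1e-30)
    return (rho_L < eps * L_B) and (T_L < eps * tau_B)
```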
If the approximation is valid, the GC equation is employed in the form of Northrop (1961), Eq. (9), with v_gc the velocity of the GC (the symbols || and ⊥ refer to the parallel and perpendicular components) and µ_gc = v²_gc,⊥/(2B) the particle's magnetic moment, which here is an adiabatic invariant. In Eq. (9), the terms on the right-hand side refer to the effect of the electric field, the gravitational force, the magnetic field gradient and the magnetic curvature on the GC drift motion.
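For completeness, the perpendicular drift terms entering Eq. (9) can be sketched as below (the parallel dynamics and the mirror force are omitted; the field, its gradient and the curvature vector are assumed to be supplied by the models above):

```python
import numpy as np

def gc_perp_drift(E, B, gradB, curvature, v_par, v_perp, q, m, g):
    """Perpendicular guiding-centre drift velocity: E x B, gradient, curvature
    and gravitational drifts, all evaluated at the GC position.

    gradB     : gradient of |B|
    curvature : field-line curvature vector (b . grad) b
    g         : gravitational acceleration vector
    """
    Bmag = np.linalg.norm(B)
    b_hat = B / Bmag
    mu = m * v_perp**2 / (2.0 * Bmag)                         # magnetic moment (with mass)
    v_exb = np.cross(E, B) / Bmag**2
    v_grad = mu / (q * Bmag) * np.cross(b_hat, gradB)
    v_curv = m * v_par**2 / (q * Bmag) * np.cross(b_hat, curvature)
    v_grav = m / (q * Bmag) * np.cross(g, b_hat)
    return v_exb + v_grad + v_curv + v_grav
```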
The particle-tracing scheme combines the models presented so far: T89 is employed for the static part of B and the function of Eq. (4) for its time variation, VSMC is used for the electric field due to convection, the induced part is computed along the lines described around Eq. (6), and the particle motion is followed by solving the Lorentz equation or by adopting the GC model, with the ability to switch between the two orbit solvers. The computation is interrupted if the particle leaves the inner magnetosphere, either by crashing onto Earth (r ≤ R_E), crossing the magnetopause or reaching a tailward distance of more than 70 R_E, with different stop codes so that each case is distinguished.
The orbits may be traced with the Lorentz equation, with no simplification adopted at any stage of the computation. The GC model, in the regions where it is valid according to the conditions of Eq. (8), is an efficient method to provide a simpler trajectory calculation. In such a scheme, in principle, the GC conditions of validity should be checked at every time step and, depending on the outcome, the physics model to be applied should be chosen. However, since this tactic radically decreases the code speed, in practice the orbit solver is interchanged whenever the radial position of the particle becomes less than an empirically set limit R_fm. In this frame, an issue which should be investigated is the difference of the resulting orbits with respect to the computation using the full model, particularly in conditions of amplified disturbances.
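The driver loop with the stop codes of the previous paragraph and the R_fm-based interchange can be sketched schematically (step_lorentz, step_gc and inside_magnetopause are placeholders; with R_fm = R_E the loop reduces to the pure Lorentz tracing ultimately adopted in the paper):

```python
import numpy as np

R_E = 6378e3
RUNNING, CRASH, MAGNETOPAUSE, TAIL_ESCAPE = 0, 1, 2, 3   # stop codes

def stop_code(r_vec, inside_magnetopause):
    r = np.linalg.norm(r_vec)
    if r <= R_E:
        return CRASH                       # crashes onto Earth
    if not inside_magnetopause(r_vec):
        return MAGNETOPAUSE                # crosses the magnetopause
    if r_vec[0] < -70 * R_E:
        return TAIL_ESCAPE                 # leaves tailward beyond 70 R_E
    return RUNNING

def trace(y, t, t_end, dt, step_lorentz, step_gc, inside_magnetopause, R_fm=R_E):
    """Advance one particle, using the GC solver only where r < R_fm."""
    while t < t_end:
        code = stop_code(y[:3], inside_magnetopause)
        if code != RUNNING:
            return y, t, code
        step = step_gc if np.linalg.norm(y[:3]) < R_fm else step_lorentz
        y = step(y, t, dt)
        t += dt
    return y, t, RUNNING
```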
In the literature, the benchmarking between the different magnetospheric particle solvers in the presence of intense electric and magnetic fields is not sufficiently extensive and the results are contradictory. Cladis and Francis (1989) performed a comparison of the GC and the Lorentz solutions for heavy ions under the effect of a geomagnetic field given by the T89 model, and the results showed acceptable deviations in the particle flux rates and the drift paths. However, Shibahara and Nose (2009) computed energetic ion motions using the TS05 and VSMC models for the fields, and found measurable differences where large ion gyroradii occur and pitch angle values are close to π/2. In such cases, some of the adiabatic invariants are broken and the validity of the GC approximation becomes questionable.
In order to clarify this issue, we compute a specific trajectory for several values of the threshold distance R_fm, and compare the results in Fig. 5. We evaluate the orbit with R_fm ranging from R_E (full orbit) up to 60 R_E, and for a plain GC simulation we set R_fm = 100 R_E. In Fig. 5a and 5b, the position r and the kinetic energy E_k are plotted vs. t for several values of R_fm. One observes that the deviations between the Lorentz, GC and mixed computations appear after the event has ended and are measurable for E_k and less important for r. The deviation of the results for E_k for intermediate values of R_fm exhibits an irregular behavior; indicatively, using R_fm = 18 R_E one is still close to the full model, whereas for R_fm = 10 R_E the deviation is larger and for R_fm = 8 R_E the particle follows a completely different orbit. This picture is verified by Fig. 5c, where the maximum relative error over all orbit quantities is computed as a function of the threshold radius. Due to the sensitivity of the results on the interchanging procedure, one should be cautious with the choice of R_fm; for this reason, we choose to use only the Lorentz model in order to provide the most reliable approach.
Numerical results
In this section, the results from test-particle simulations are shown and analyzed.The space weather scenario under study involves the occurrence of a single magnetospheric disturbance.The growth phase of the event starts at t i = 0 with a quiet magnetosphere, indexed with Kp (t i ) = 1, and completes after t g − t i = 30 min by reaching a disturbed state with index Kp (t g ) = 5.Then, the relaxation phase follows immediately and completes after t r − t g = 5 min, during which Kp returns to its initial value, i.e.Kp (t r ) = Kp(t i ).The particle starts its flight at the time stamp t 0 , which may be before (t 0 − t i < 0) or after (t 0 − t i > 0) the onset of the growth phase, interacts with the disturbance until t r and continues moving under the effect of the restored fields until the time stamp t 1 .
In the disturbed magnetosphere, three primary types of ion trajectories are met: (i) orbits which become trapped inside the ring current, (ii) orbits that precipitate into Earth's atmosphere, and (iii) orbits escaping tailward or by crossing the magnetopause.We have computed these types by sampling various initial conditions for the ion position and energy, and the results are shown in GSM coordinates in Figs. 6 and 7.In Fig. 6, we have the planar projections of the orbit of an O + ion that eventually integrates to the ring current.The motion initiates at t 0 = −8 min, before the event, with initial radius r(t 0 ) = 20R E , magnetic local time (MLT) φ(t 0 ) = 24 h, latitude θ (t 0 ) = 25 • , pitch angle α(t 0 ) = π/2 and kinetic energy E k (t 0 ) = 4 keV, and is followed until t 1 = 180 min (∼ 2.5 h after the event).In Fig. 7, we represent the other types of motion in 3-D space for different ion species with the same input as before except for E k : the precipitating orbit is of an H + ion with E k (t 0 ) = 0.5 keV, whereas the escaping orbit is of an O + ion with E k (t 0 ) = 7 keV.
In Fig. 6, the O+ ion is launched from the plasma sheet, driven towards Earth by the disturbed electric fields, and finally gets trapped in the ring current. A careful examination of the numerical data shows that, in its course to the ring current region, the ion is considerably accelerated by the energy exchange with the electric field, whereas its pitch angle has a random behavior before the entrance to the ring current and afterwards varies periodically. In the case of the H+ ion that crashes onto Earth, the orbit of which is shown in Fig. 7a, the particle begins with a low initial energy and is intensely accelerated, and, probably due to the relation of its pitch angle with the loss cone, it ends up in the terrestrial atmosphere at t = 33 min (well before t_1), with a final energy as large as in the previous case. Finally, in Fig. 7b, the O+ ion starts with a relatively high value of energy and escapes from the inner magnetosphere, along the meridian at 22:00 MLT, before t_1 (at t = 40 min) with a velocity gain. The different behavior of the oxygen ions for different initial energies is an indicator of the sensitivity of the ion dynamics to the initial conditions.
Regarding the particle acceleration, in Fig. 8 we examine the kinetic energy and the pitch angle for the motions in Figs. 6 and 7. In Fig. 8a, the energy of the trapped O+ ion appears to have a gain of about 1.3 orders of magnitude. The largest part of the increase occurs during the relaxation phase (t_g < t < t_r), where the magnetic field exhibits a steep decrease and, consequently, the induced electric field attains large values and accelerates the ions. Another incidence of energy gain occurs a little after t = 2 h, well inside the ring current region. This is connected to an intense pitch angle variation, as seen in Fig. 8b, where α(t) is displayed, which is induced by the structure (i.e. the gradients over time and space) of the local fields at the specific time. The kinetic energy of the H+ ion that crashes onto Earth also appears to have a sizeable gain (almost 2.6 orders of magnitude) at the time of reaching the atmosphere, after nearly 45 min of flight, whereas the energy of the escaping O+ ion shows a gain of nearly 1.6 orders of magnitude at the time it crosses the magnetopause, close to the end of the relaxation phase.
For the statistical analysis of the motions, numerical data have been produced over the trajectories of the ion species relevant to each territory of the magnetosphere, in loops over different initial conditions for r, φ, θ, α and E_k where, each time, only one or more of these quantities were varied, with the particles followed until t_1 = 3 h during the disturbance defined above (t_g = 30 min, t_r = 35 min, Kp(t_i) = 1, and Kp(t_g) = 5). In Fig. 9 we present results for the final kinetic energy and the pitch angle from different simulations with an ensemble of N_ens = 1000 O+ ions, where the initial conditions varied are r(t_0) and E_k(t_0). In the first computation, r(t_0) took values in a loop from 2 R_E to 30 R_E, whereas, in the other case, E_k(t_0) ranged from 0.5 to 20 keV, and all the rest of the input quantities were equal to the values already defined: t_0 = −8 min, φ(t_0) = 24 h, θ(t_0) = 25°, and α(t_0) = π/2. The general picture (also implied from the above results) is that the dependence of the particle dynamics on the initial conditions is very sensitive, which is imprinted in the wide regions where E_k(t_1) and α(t_1) exhibit an irregular, non-smooth variation as a function of the values at t = t_0 (see especially Fig. 9a). However, in all cases one identifies consecutive regions where the ions either get accelerated or remain at low energy. In Fig. 9a, there is a spatial region from 14 R_E to almost 17 R_E where all injected ions gain significant amounts of energy, as well as one within 21 R_E and 25 R_E where nearly all particles do not exhibit a net energization. In Fig. 9b the probability of acceleration appears to be larger for O+ ions with low energy at the event start (E_k(t_0) < 6 keV) than for initially energetic ions (having, for example, E_k(t_0) > 15 keV). The specific result reveals the role of the plasma sheet as a reservoir of oxygen ions which, in the course of storms/substorms, get accelerated and enhance the ring current (Metallinou, 2008, and references therein). Finally, in Fig. 9c, there are regions where α(t_0) varies rapidly, suggesting rotational behavior, as well as regions where the variation is slow, denoting motions close to ballistic. Most of the former ions, as implied by the inbound direction of motion driven by the large parallel velocities at the specific pitch angles, are candidates for joining the ring current.
We also analyze the kinematics of H+ ions using an ensemble of 1000 particles with varying r(t_0) and E_k(t_0) for the same input as above. In Fig. 10, we plot the final kinetic energy and pitch angle as a function of the initial values. The overall behavior resembles that of the (heavier) O+ ions; notice, for example, the regions of quasiperiodic and quasiballistic motion in Fig. 10c, similar to the ones in Fig. 9c. Nevertheless, the effect of acceleration, as imprinted in Fig. 10a, is found to be much weaker. This is connected to the difference in the charge-to-mass ratio of the different ion species, and verifies the known storm-time composition for the energy density of the ring current, which is dominated by the O+ ions coming from the plasma sheet, in contrast to the situation in quiet time where H+ is the majority species (see, for example, Korth et al., 2002). In Fig. 10c one observes that the regions of periodic-like pitch angle behavior are now narrower, which is in accordance with the fact that hydrogen ions launched from the plasma sheet are not effective in assimilating into the ring current region. Going one step further, we estimate the statistical weight of each of the populations formed by distributing the ions launched from the plasma sheet among the types of orbits described above (ring current, near-Earth tail, precipitating and escaping), and the outcome is shown in Table 2; a sketch of this bookkeeping is given below. The simulations involved two different ensembles of oxygen and hydrogen ions with N_ens = 10 000 particles each, which were injected from the plasma sheet with random initial conditions for r(t_0) within [18 R_E, 22 R_E], for E_k(t_0) within [2, 6] keV and for θ(t_0) within [20°, 30°], and all the other input being the same as above. One should notice that the conclusions drawn from Figs. 9 and 10, on the basis of single-particle dynamics, are verified. We highlight that, according to the computations, 46 % of the O+ ions of the ensemble are incorporated into the ring current, in contrast to nearly 4 % of the H+ ions, whereas a little more than 20 % in both species occupy the near-tail region; however, 25 % of the O+ particles and 39 % of the H+ ones escape the inner geospace.
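The bookkeeping behind Table 2 can be sketched as a loop over the ensemble, counting the outcome of each traced orbit (classify_orbit below is a dummy stand-in for the full tracing plus the final-region test):

```python
import numpy as np

rng = np.random.default_rng(0)
N_ens, R_E = 10_000, 6378e3

# random plasma-sheet initial conditions in the ranges quoted in the text
r0 = rng.uniform(18 * R_E, 22 * R_E, N_ens)
Ek0 = rng.uniform(2.0, 6.0, N_ens)            # keV
lat0 = rng.uniform(20.0, 30.0, N_ens)         # degrees

def classify_orbit(r, Ek, lat):
    """Placeholder: the real code traces the orbit and returns its final region."""
    return "ring current" if Ek < 4.0 else "escaping"     # dummy rule, illustration only

counts = {"ring current": 0, "near-Earth tail": 0, "precipitating": 0, "escaping": 0}
for k in range(N_ens):
    counts[classify_orbit(r0[k], Ek0[k], lat0[k])] += 1

fractions = {key: 100.0 * n / N_ens for key, n in counts.items()}   # percentages as in Table 2
```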
The magnetic perturbation and the connection of Dst to the ring current, as well as the contribution of each current source during the event phases, are under debate. In many cases, Dst is assumed to be correlated with the ring current energy from storm maximum well into recovery, on the basis that ring current ions provide the primary contribution to the storm-time Dst depression (Greenspan and Hamilton, 2000). However, it is suggested that Dst is also related to other sources, the effect of which may become important during disturbances. Based on ground measurements, Arykov and Maltsev (1993) indicate circumstances where the tail currents dominate the Dst development during storms. Turner et al. (2000) assess the effect of the tail currents on Dst by introducing a correction to the total current density, in terms of subtracting the magnetic curl in the tail regions as calculated by T89 and T96. The tail current was found to be most dominant at the end of the growth phase, and its contribution to Dst was estimated to reach up to 25 %.
A straightforward approach to calculate the Dst index from test particles involves the computation of the electric current densities from the particle velocities and the derivation of the generated magnetic fields (according to Ampere's law). However, the increased difficulty in the computation of surface current densities from particle orbits and the complexity of calculating the magnetic field from the electric currents, as well as the requirement of including the real positions of the ground-based sensors, imply a poor modeling performance. For studies related to the inner magnetosphere, the connection of Dst with the energy of the ring current has been described in terms of the Dessler-Parker-Sckopke (DPS) relation (Dessler and Parker, 1959; Sckopke, 1966). The advantage here is that, at input, the kinetic energy of the local plasma is required, which is a scalar quantity and simple to deduce from the test-particle results.
The original DPS relation, which connects the energy E_pp stored in a specific plasma population of the magnetosphere with the associated magnetic field perturbation, takes into account only those energetic particles that gyrate around the magnetic field lines and drift longitudinally due to the field gradient. In this context, the DPS formalism provides a sufficient estimation of the perturbations due to the ring current dynamics, as well as a well-balanced one of the near-Earth tail current contribution. With the introduction of a correction term for the magnetopause current into the original relation (O'Brien and McPherron, 2000), one obtains a modified equation for the Dst index, Eq. (10), where b_dps is associated with the magnetopause correction and c_dps with the quiet-time energy level. Equation (10) implies that, in order to estimate Dst with the values of P_dyn, b_dps and c_dps for a specific event or scenario, one only requires the computation of E_pp for the ring and tail plasma populations.
In the simulations, the ring current particles are assumed to be confined inside a torus with radii R_rc = 6 R_E and r_rc = 3 R_E (i.e. extending from 3 R_E to 9 R_E), whereas the near-Earth tail region is defined as the remaining area in the simulation box ranging within [r_rc, r_rc + R_ntl] along the Sun-Earth axis and [−R_ntl, R_ntl] in the other two directions, with R_ntl = 20 R_E. The energies E_rc and E_ntl of the ring and near-tail current particles are estimated from the average energy of the H+ and O+ test ions that belong to these currents. This is quantified by E_pp = V_pp Σ_i n_pp,i ⟨E_k⟩_pp,i, where n_pp,i is the plasma density of each ion species in each population and V_pp is the volume of the region occupied by the corresponding plasma population, given by V_rc = 2π² R_rc r²_rc and V_ntl = R³_ntl − V_rc. In the formula for E_pp, the (ensemble) average value of the kinetic energy for the O+ and H+ ions in each current is computed, at each time step, over the particles contained inside the specific region at that time.
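A sketch of the resulting Dst estimate, using the textbook Dessler-Parker-Sckopke coefficient together with an O'Brien-McPherron-type pressure correction consistent with the quoted b_dps and c_dps; the precise arrangement of the paper's Eq. (10) is not reproduced, so the formula below should be treated as an assumption:

```python
import numpy as np

mu0, R_E, B_E = 4e-7 * np.pi, 6378e3, 31000e-9
E_mag = 4.0 * np.pi * B_E**2 * R_E**3 / (3.0 * mu0)   # dipole-field energy above the surface [J]

def dst_estimate(E_rc, E_ntl, P_dyn, b_dps=7.26, c_dps=-11.0):
    """Dst [nT] from ring-current and near-tail particle energies [J] and the
    solar-wind dynamic pressure P_dyn [nPa]."""
    dps = -2.0 / 3.0 * (E_rc + E_ntl) / E_mag * B_E * 1e9   # DPS perturbation [nT]
    return dps + b_dps * np.sqrt(P_dyn) + c_dps

# order-of-magnitude example (the energies are illustrative, not the paper's values)
print(dst_estimate(E_rc=5e14, E_ntl=1e14, P_dyn=4.0))
```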
The results of the Dst computation using the scheme described above are presented in Fig. 11. The event scenario explored is again the one introduced in Sect. 3, i.e. a single disturbance that begins at t = 0 from a quiet state with Kp = 1, reaches its maximum level Kp = 5 at t = 30 min and returns to the quiet state by t = 35 min. Two thousand test particles were used for the computation, divided into two different ensembles: one of 1000 oxygen ions launched from the plasma sheet, and one of 1000 hydrogen ions started in the ring current. The initial conditions for the MLT and the pitch angle were the same for both species, φ(t_0) = 24 h and α(t_0) = 90°, whereas the initial radii, latitudes and kinetic energies were assigned randomly within different ranges for each species: for O+, r(t_0) ranged in [18 R_E, 22 R_E], θ(t_0) in [20°, 30°] and E_k(t_0) in [2, 6] keV, while for H+ the corresponding intervals were [5 R_E, 7 R_E], [0°, 5°] and [1, 3] keV. The test ions were traced from 8 min before the beginning of the event until t = 120 min, and at each time step the Dst index was computed from Eq. (10) and the associated relations, where it was assumed that P_dyn = 4 nPa, b_dps = 7.26 nT nPa^−1/2, c_dps = −11 nT, n_rc,O = n_rc,H = 10 cm^−3 and n_ntl,O = n_ntl,H = 1 cm^−3.
In Fig. 11a we plot the dynamic evolution of Dst, and the contributions from the ring current and the near-Earth tail plasma distributions to its value are given for comparison. A qualitative agreement with the usual evolution of Dst during a substorm is seen: during the growth phase, Dst decreases continuously, with the most rapid variation occurring around the interchange from the growth to the relaxation phase, and values of Dst indicating magnetic perturbation persist for some time after the event termination. Overall, the contribution of the ring current to Dst is larger than the one coming from the energetic particles in the near-Earth tail region. The contribution of the tail current is measurable up to t = 1.2 h, with a maximum near the end of the event growth (t = 0.55 h), and from there on the Dst is essentially determined only by the ring current; this is in agreement with the behavior stated in Arykov and Maltsev (1993) and Greenspan and Hamilton (2000). The contribution to Dst by the tail current is found to be 30 % on average. This is a little larger than the reported figure of 25 % in the literature; however, such deviations are justified considering the assumptions adopted in these models.
A comparison of Kp and Dst during geomagnetic events is necessary for assessing their differences in response to different storm-time current systems.In cases where the dynamic pressure and the IMF both refer to the same category in storm magnitude, the minimum Dst is expected to decrease as a function of Kp.This has been verified in terms of an additional computation, where the maximum value of Kp during the event, attained right at the end of the growth phase, was modified from 1 to 7 (these are the minimum and maximum disturbance levels allowed by the T89 model) and, in each case, the minimum Dst value was recorded.
The correlation of the maximum values of |Dst| and Kp is shown in Fig. 11b and, as expected, is monotonically increasing. In the same figure, our result is compared to the linear regression curves derived from the statistical evaluation of data from substorm events during 1987-1996 (Rostoker, 2000) and storms in the period 1996-1999 (Huttunen et al., 2002). The comparison with the results of Rostoker shows an agreement only in the range of values 4 < Kp < 5, and with Huttunen et al. only for Kp > 5+, which correspond to moderate and intense events. The main sources of disagreement in the other ranges are probably connected to the difference of the reference values of the geomagnetic disturbance (dynamic pressure, IMF, plasma density) in the analyzed data with respect to the input given to the code, as well as to the differences with the corresponding values in the data set employed by the T89 model.
Discussion and conclusions
In this paper, we employ a collection of models for the electric and magnetic field in the inner magnetosphere for the investigation of the dynamic evolution of the ring current and the near-Earth tail ion population during the occurrence of magnetospheric disturbances.Within this research framework, we have developed an orbit-solving code which computes the test-particle motion due to convection, corotation and Faraday induction in the dynamic magnetic and electric fields of the magnetosphere.We have used the code to study the ion dynamics, and in particular the dependence of ion acceleration on the initial conditions.Furthermore, we performed a numerical estimation of the Dst index based on the test-particle energies.The results of all computations have been found to be in qualitative agreement with previous studies on the ring current evolution during magnetospheric activity.
The ion motions have been traced by solving the nonrelativistic Lorentz equation, without adopting simplifications at any stage of the computation.In practice, one usually shifts to the GC equations when the particle reaches a distance smaller than a threshold radius, from where on the GC approximation is empirically assumed to be valid.In this respect, the choice of retaining the full-orbit description prevents inaccuracies from occurring in cases where some of the adiabatic invariants are broken.During intense disturbances, such cases have the potential to occur locally in space/time, and we have verified this situation by finding major deviations in the computation of a specific trajectory for several values of the threshold distance.
The analysis of test-particle orbits reveals fragments of the ion dynamics during the disturbance. We have identified three main types of ion orbits: orbits getting trapped around Earth, orbits precipitating in the Earth's atmosphere, and others escaping from the inner geospace. During the event, a percentage of oxygen ions launched from the plasma sheet are found to be accelerated and become trapped in the ring current. However, hydrogen ions (which are known to populate the ring current during quiet times) mainly escape from the inner geospace when launched from the plasma sheet. The largest part of the O+ acceleration occurs during the relaxation phase, where the magnetic field exhibits a steep decrease and, consequently, the induced electric field attains large values. The addition of this component to the convection field provides a mechanism for the observed energization levels of ions which drift towards the ring current region, contrary to an electric field purely due to plasma convection (a similar result was found in Fok et al., 1999).
Further analysis of the ion motions reveals a sensitive dependence of the particle dynamics on the initial conditions. We have found regions in geospace, including the plasma sheet, from where injected oxygen ions get preferentially accelerated, while ions starting from other regions may or may not exhibit a net energization, depending on the initial energy. For O + launched from the plasma sheet, the possibility for acceleration is found to be larger for ions having low energy at the beginning of the event. Consequently, the composition of the ring current may be modified by oxygen ions, the majority of which are initially in specific phase-space regions, which get accelerated and drift towards Earth. These findings are consistent with the results of previous studies on the role of substorms in the ring current dynamics, and have been verified here by an additional simulation.
Regarding the effect of each current source on the Dst index during the event phases, we have concluded that one should, in principle, also account for the dependence of Dst on other effective sources apart from the ring current energy. Our computation of the Dst, in terms of the Dessler-Parker-Sckopke relation and test-particle results, indicates a measurable contribution from the near-Earth tail current of 30 % on average, and yields a fair agreement with other estimations indicated in the literature (∼ 25 %). In the course of the event, the largest contribution of the tail current occurs during the growth phase, and persists for some time past its maximum. Thereafter, the effect of the tail currents gradually fades away, and the value of Dst is driven only by the ring current. Dst retains small values (related to meaningful disturbances) for long times after the event termination. A more accurate estimation of Dst may be achieved with the inclusion of the physics of loss mechanisms (collisions, cyclotron emission) and wave-particle interactions.
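As an illustration of this last step, here is a minimal sketch of how Dst can be estimated from test-particle energies through the standard Dessler-Parker-Sckopke relation, ΔB/B0 = -2E/(3E_mag). The weighting that maps a finite test ensemble onto the real ring-current and tail populations is an assumed input; the paper's actual weighting scheme is not reproduced here.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi
B0 = 3.1e-5                 # equatorial surface field (T)
RE = 6.371e6                # Earth radius (m)
E_MAG = 4.0 * np.pi * B0**2 * RE**3 / (3.0 * MU0)   # dipole field energy above the surface (~8e17 J)

def dst_dps(kinetic_energy_J, weights):
    """Dst (nT) from the Dessler-Parker-Sckopke relation; weights map test particles
    onto the real trapped population (assumed, not specified in the text)."""
    e_total = np.sum(weights * kinetic_energy_J)
    return -2.0 * e_total / (3.0 * E_MAG) * B0 * 1e9

# A trapped population carrying ~4e15 J in total gives Dst of roughly -100 nT:
print(dst_dps(np.full(1000, 4.0e12), np.ones(1000)))
```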
The present work may serve as the final link in a Sun-to-Earth modeling chain of major solar eruptions, providing an estimation of the inner geospace response once the solar burst reaches Earth. In this frame, a comparison of our results with other available models (e.g., Fok et al., 2001; Jordanova et al., 2010), as well as with data from observations, may act as further validation. In a relevant work by Ganushkina et al. (2012), a benchmark of models was conducted, and the results showed that the computed ring current, for moderate and intense disturbances, depends on the field models. In our work, benchmarking of the T89 model has shown few differences within the simulated region in comparison to the later models, whose use may, however, increase the accuracy of the magnetic field. Also, the Kp index is used as a parameter to describe the geomagnetic field during the disturbances, which is partially in contrast to the global character of this index (see, for example, Rostoker, 1972). In order to assess our results, the correlation of Kp and Dst was examined and found to be consistent with previous studies.
Figure 1. Geomagnetic field map on the GSM x−z plane, as calculated with the T89 model, for θ t = 11.5°, in a case where the magnetosphere is (a) quiet (Kp = 1) and (b) disturbed (Kp = 5).
Table 1. Input to the Tsyganenko models T89, T96 and TS05 for the benchmark case computations presented in Fig. 2. Columns: model, Kp, Dst, P dyn, (B y, B z), S 1, S 2, ...
Figure 2. Relative deviation between the computations of the geomagnetic field, for zero tilt angle in a disturbed magnetosphere (Kp = 5 and Dst = −70 nT), using the models (a) T89 and T96 and (b) T89 and TS05.
Figure 5. Application of the model interchanging technique in test-particle computations for different values of the threshold radius: (a) r vs. t, (b) E k vs. t, and (c) maximum relative error vs. R fm.
Figure 7. Three-dimensional plot of ion orbits which end up outside the inner magnetosphere, with initial conditions the same as in Fig. 6 apart from (a) E k (t 0) = 0.5 keV for H + and (b) E k (t 0) = 7 keV for O +.
Figure 8. Dynamic evolution of the (a) kinetic energy for the ion orbits analyzed in Figs. 6 and 7 and (b) pitch angle for the trapped O + orbit of Fig. 6.
Figure 10. Final kinetic energy of N ens = 1000 H + ions as a function of the initial (a) radial coordinate, (b) kinetic energy, and (c) final pitch angle as a function of initial kinetic energy, for varying initial conditions and all the remaining quantities the same as in Fig. 9.
Figure 11. Analysis of Dst based on an ensemble of 2000 test ions, 1000 O + launched from the plasma sheet and 1000 H + in the ring current, during the event introduced in Sect. 2: (a) dynamic evolution of Dst and of its ring/tail current contributions and (b) correlation of the maxima of |Dst| and Kp, computed by varying only Kp in (a), as compared to formerly derived results.
Table 2. Distribution into the ring current (N rc), near-tail (N ntl), precipitating (N pr) and escaping (N esc) O + and H + populations of test ions injected from the plasma sheet, with initial conditions r in [18 R E, 22 R E], E k in [2, 6] keV and θ in [20, 30]°.
"Physics"
] |
The effect of echoes interference on phonon attenuation in a nanophononic membrane
Nanophononic materials are characterized by a periodic nanostructuration, which may lead to coherent scattering of phonons, enabling interference and resulting in modified phonon dispersions. We have used the extreme ultraviolet transient grating technique to measure phonon frequencies and lifetimes in a low-roughness nanoporous phononic membrane of SiN at wavelengths between 50 and 100 nm, comparable to the nanostructure length scale. Surprisingly, phonon frequencies are only slightly modified upon nanostructuration, while the phonon lifetime is strongly reduced. Finite element calculations indicate that this is due to coherent phonon interference, which becomes dominant for wavelengths between about half and twice the inter-pore distance. Despite this, vibrational energy transport is ensured through an energy flow among the coherent modes created by reflections. This interference of phonon echoes from periodic interfaces is likely another aspect of the mutual coherence effects recently highlighted in amorphous and complex crystalline materials and, in this context, could be used to tailor the transport properties of nanostructured materials.
angle by three focusing toroidal mirrors; the three beams lie in the same (horizontal) plane.
The crossing angle of the two pump beams was 2θ = 27.6°, with the sample surface orthogonal to their bisector, while the angle of incidence of the probe beam was 4.6° with respect to the normal of the sample surface. This geometry corresponds to the Bragg angle for the probe's transient diffraction when λ ex /λ pr = 3, which corresponds to one of the used experimental conditions. When λ ex /λ pr ≠ 3 the Bragg condition is not satisfied and, consequently, the efficiency of the EUV TG process decreases. However, in light of the short absorption length of the pump (L abs ≤ 54 nm) and the employed range of L TG ≥ 56 nm, such a decrease in efficiency was within an acceptable level (> 10 %) [1].
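For reference, the following sketch uses the standard transient-grating relations implied by this geometry: the grating period L_TG = λ_ex/(2 sin θ), with 2θ the pump crossing angle, and the probe Bragg condition sin θ_pr = λ_pr/(2 L_TG). The numbers reproduce only the quoted 2θ = 27.6° case; the exact angles of other configurations may differ slightly.

```python
import numpy as np

theta = np.radians(27.6) / 2.0                    # half of the pump crossing angle

for lam_ex in (26.6, 39.9, 53.2):                 # pump wavelengths (nm)
    L_tg = lam_ex / (2.0 * np.sin(theta))         # transient-grating period (nm)
    print(f"lam_ex = {lam_ex:5.1f} nm  ->  L_TG = {L_tg:6.1f} nm")

# Probe Bragg angle when lam_ex / lam_pr = 3:
lam_ex, lam_pr = 26.6, 26.6 / 3.0
L_tg = lam_ex / (2.0 * np.sin(theta))
theta_pr = np.degrees(np.arcsin(lam_pr / (2.0 * L_tg)))
print(f"Bragg angle of the probe: {theta_pr:.1f} deg")   # ~4.6 deg, as in the text
```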
The spot size on the sample was 300×280 µm² FWHM and 280×230 µm² FWHM for the two pumps and 450×330 µm² FWHM for the probe. The mismatch between spot sizes is not critical as long as the fluence level required by the experiment can be achieved, since the EUV TG signal arises from the overlap region and propagates in a background-free direction. The transmission of each pump branchline was 0.04, 0.015 and 0.006 (including the aluminum filter) at λ ex = 26.6, 39.9 and 53.2 nm, respectively, resulting in a fluence range F = 0.12 − 0.34 mJ/cm². We did not observe any appreciable sample damage, even after prolonged illumination (several hours). The EUV TG signal was finally detected by a CCD camera (PI-MTE), equipped with a light-tight zirconium filter to reject room light that may leak into the experimental chamber and spurious EUV light at λ ex, which may arise from diffuse scattering from the sample and stray light from the beamline. To further reduce the background around the signal, we introduced a beamstop to create a shadow on that region of the CCD. The FEL photon transport, sample environment and detection were in high vacuum in order to allow propagation of the EUV beams; further details on the setup can be found elsewhere [2].
The main experimental parameters are summarized in Supplementary Table I.
II. TG SIGNAL FIT
As described in the main text, the signal is the superposition of a relaxing thermal signal and coherent phonon oscillations. The number of phonon modes is not fixed, as the EUV TG can in principle excite all phonons that can couple with its wavevector. In order to identify case by case the number of excited modes, we have performed the Fourier transform prior to any fit. Supplementary Fig. 1 reports the FFT for our experimental TG periods in both samples. At λ ph = 106.7 nm, in both samples a single peak at around 100 GHz was identified, together with its overtone at about 200 GHz. Two peaks can be identified at λ ph = 83.7 nm in both samples and at λ ph = 55.8 nm in the uniform membrane, while in the waveform of the nanostructured sample at λ ph = 55.8 nm, one can identify six peaks (see vertical segments in the figure). However, a good fit can be obtained using five out of these six peaks.
Once the possible phonon modes are identified, the fit is performed with the function reported in the main text (Eq. 1). Supplementary Table II summarizes all fitting parameters for all EUV TG waveforms. Thermal parameters (A th and τ th) are reported only once for a given sample and value of L TG, since they do not depend on the phonon branch.
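A minimal sketch of such a fit is given below, assuming the generic functional form of a decaying thermal background plus exponentially damped sinusoids; Eq. (1) of the main text is the authoritative definition, and the initial frequency guesses come from the FFT step described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def tg_waveform(t, a_th, tau_th, a_ph, nu_ph, tau_ph, phi):
    """Thermal relaxation plus one damped phonon oscillation (add one damped
    sinusoid per peak found in the FFT). Assumed form, not the paper's Eq. (1)."""
    return (a_th * np.exp(-t / tau_th)
            + a_ph * np.exp(-t / tau_ph) * np.cos(2.0 * np.pi * nu_ph * t + phi))

# Synthetic example with a 100 GHz phonon (t in ns, nu in GHz).
t = np.linspace(0.0, 2.0, 2000)
data = tg_waveform(t, 1.0, 1.5, 0.4, 100.0, 0.6, 0.3)
p0 = [1.0, 1.0, 0.5, 100.0, 0.5, 0.0]     # frequency guess taken from the FFT
popt, _ = curve_fit(tg_waveform, t, data, p0=p0)
```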
Supplementary Fig. 2 reports the values of τ, as obtained from the fitting procedure, for phonons belonging to the S 3 branch in the two membranes, not shown in the main article. A decrease as large as an order of magnitude is found in the NS sample. While in both samples τ decreases with ν, the slope of this decrease is reduced in the NS sample.
However, since we have only two points we cannot further comment on the dependence on ν. In Table II, the mean free path ℓ is also calculated for the phonons belonging to branches S 2 and S 3, using the velocities obtained from the analytically calculated Lamb dispersions. Within error bars, the velocity at these frequencies does not change with nanostructuration, thus the mean free path reduction reflects that of the lifetime. The smallest mean free path is found at λ ph = 55.8 nm, for which ℓ = 0.08(1) µm.
Supplementary Figure 3. The thermal relaxation rate as a function of q. The thermal relaxation rate, as obtained from the fit of our experimental spectra, is reported as a function of k in logarithmic scale for the uniform (blue circles) and nanostructured (red squares) membranes. Error bars come from the uncertainty of the fit of the experimental spectra. Blue solid and red dotted lines are the fits giving, respectively, D th = (0.5 ± 0.05) × 10⁻⁶ m² s⁻¹ (uniform) and D th = (0.55 ± 0.05) × 10⁻⁶ m² s⁻¹ (nanoporous). Dashed black and dot-dashed green lines are the expected trends using the thermal diffusivity measured in 1.4 µm and 600 nm thick membranes, respectively [4].
III. THERMAL RELAXATION
The thermal relaxation rate, as extracted from the best fit of EUV TG waveforms with Eq. (1) in the main text, is reported in Supplementary Fig. 3 as a function of k for both the uniform and nanostructured membranes. Given the pump absorption lengths reported in Table I of the Methods section in the main text, thermal transport takes place both cross-plane, from the front surface illuminated by the laser towards the back one, and in-plane, from hot to cold interference fringes of the TG. In our experimental geometry we are mostly sensitive to the in-plane direction, where the equivalent heat transport distance is L th = L TG /π. The heat diffusion theory, which is valid at the macroscopic length scale (i.e. for values of L th significantly larger than the average phonon mean free path), predicts a k² dependence of the thermal relaxation rate, τ th⁻¹ = D th k², with D th the thermal diffusivity, which can be calculated from the thermal conductivity k T, the specific heat C and the density ρ: D th = k T /(ρC).
For bulk SiN, using the literature values and our nominal density, one would get D th = 1.48 × 10⁻⁶ m² s⁻¹, in fair agreement with values reported in thick membranes [4]. Although in our case L th is expected to be comparable to the phonon mean free paths [6,7], we clearly observe a diffusive thermal relaxation, differently from what is reported in other (crystalline) materials [8,9]. However, our estimated value of D th ∼ (0.5 ± 0.05) × 10⁻⁶ m² s⁻¹ is significantly smaller than the bulk one, confirming previous reports of D th in amorphous SiN membranes of 50 and 100 nm thickness at comparable values of L th [10,11]. It is worth noticing that in these latter works the fluence, and thus the sample temperature, was much higher: this points to a weak temperature dependence of D th in this material. Surprisingly, nanostructuration does not change significantly τ th⁻¹ in the probed L th range (i.e. 18-35 nm): we still observe a diffusive behavior, with only a 10% increase in D th. The lack of substantial change is in agreement with previous reports on the macroscopic thermal conductivity of similar membranes, for which a sizable reduction was observed only for small neck values [7]. It is worth noticing that, in Supplementary Fig. 3, the smallest k point is clearly out of the trend, indicating a larger value of D th at L TG = 109.6 nm. As this data point corresponds to the shortest absorption length for the pump laser, the effect of the finite penetration depth of the EUV pulses on cross-plane heat transport is expectedly more sizable. Even if we are mostly probing in-plane thermal transport, we cannot exclude cross-talk between heat fluxes along these two directions. However, with the data in hand, we cannot draw any conclusion on this aspect; further studies are needed to assess the nanoscale thermal relaxation dynamics in these membranes.
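A sketch of the corresponding analysis step: with k = 2π/L_TG, the diffusive law τ_th⁻¹ = D_th k² gives D_th as the slope of 1/τ_th versus k². The relaxation times below are hypothetical values chosen only to be consistent with D_th ≈ 0.5 × 10⁻⁶ m² s⁻¹; the measured ones are those of Supplementary Table II.

```python
import numpy as np

L_tg = np.array([55.8e-9, 83.7e-9, 109.6e-9])         # grating periods (m)
tau_th = np.array([0.16e-9, 0.36e-9, 0.61e-9])        # hypothetical relaxation times (s)

k = 2.0 * np.pi / L_tg
slope, intercept = np.polyfit(k**2, 1.0 / tau_th, 1)  # 1/tau_th = D_th * k^2
print(f"D_th = {slope:.2e} m^2/s")                    # ~0.5e-6 m^2/s for these inputs
```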
IV. THEORETICAL RESULTS AND ANALYSIS
To analyse the wavepacket propagation in our sample, we have measured the envelope of the kinetic energy induced in the system by the propagation of the wavepacket in the x direction (see Supplementary Fig. 5 in the main article), parallel to the EUV TG wavevector, averaged over both y and z directions. The energy envelope is defined for each excitation frequency ν as the maximum over all times, P ν (x) = max t E k (x, t), where E k (x, t) is the instantaneous kinetic energy supported by the frame located at x with width ∆x = 2 Å.
Since the simulations are performed at constant energy (no damping term in the numerical scheme), no attenuation would be observed in a uniform membrane, and the one arising in the nanostructured material is due to a redistribution of the kinetic energy in directions different from the one (x-axis) of the initially excited wavepacket, which we call the propagation direction, so that an effective reduction of the kinetic energy E kin (x, t) along x appears. It has been shown [12,13] that, depending on the geometrical and elastic parameters of the nanostructure, and on the wavepacket wavelength, different scenarios can arise. If the wavepacket keeps its propagative nature, a global exponential attenuation of P ν (x) along the propagation direction x, similar to a Beer-Lambert law, is observed, P ν (x) ∝ exp(−x/ℓ env), with ℓ env the envelope mean free path. However, if scattering is predominantly diffusive, a different attenuation law is observed [14]. Finally, for materials very different on the two sides of an interface, the attenuation can be extremely efficient, leading to energy localization.
Supplementary Fig. 4 reports the envelopes for all the simulated wavepackets, divided into two panels: from 58 to 102 nm wavelengths on the left-hand side and from 102 to 502 nm on the right-hand side. The representation is in a semi-logarithmic scale, so that a propagative behavior can be recognized by a linear dependence of E kin on the distance x (see Eq. 3), with a slope which is directly the inverse of ℓ env for the envelope of the considered wavepacket [12]. It is evident how the data are smoother at short λ ph and become noisier at larger λ ph. In addition, for λ ph < 102 nm, periodic peaks can be identified, which progressively disappear on increasing λ ph before reappearing again above λ ph = 402 nm. They may have two origins: i) the mass difference between regions with and without holes (which leads to a periodic mass profile along x) and ii) constructive interference between the main peak and the backreflected one. To account for this trend, we fit the curves in Supplementary Fig. 4 with an exponential decay plus a sinusoidal oscillation wherever peaks are well distinguishable, and only the exponential decay elsewhere (Eq. 5), with a, b and ϕ the amplitude, periodicity and phase of the oscillations. The slope progressively increases on increasing λ ph up to 102 nm, and then becomes almost constant with λ ph, starting to decrease again at λ ph = 402 nm.
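A minimal sketch of this envelope fit is given below; since Eq. (5) is not reproduced here, the exponential decay modulated by a sinusoid of amplitude a, period b and phase φ is an assumed form.

```python
import numpy as np
from scipy.optimize import curve_fit

def envelope(x, p0, l_env, a, b, phi):
    """Assumed fitting form: Beer-Lambert-like decay times a sinusoidal modulation
    (set a = 0 where no periodic peaks are distinguishable)."""
    return p0 * np.exp(-x / l_env) * (1.0 + a * np.sin(2.0 * np.pi * x / b + phi))

x = np.linspace(0.0, 2500.0, 500)                       # position along x (nm)
synthetic = envelope(x, 1.0, 900.0, 0.2, 377.0, 0.0)    # oscillation at the lattice pitch
guess = [1.0, 800.0, 0.1, 377.0, 0.0]                   # period guess set to the expected pitch
popt, _ = curve_fit(envelope, x, synthetic, p0=guess)
```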
As such, λ ph ≈ 100 nm marks the transition between different regimes: above ≈ 100 nm and below 402 nm the slope does not change significantly, but the signal also appears noisy and the periodic peaks are definitely disrupted. Such disruption is likely related to the longer spatial extension of the wavepacket. The temporal coherence length of the wavepacket is t 0 = 3/(2ν) = (3/2)T, with T the period (see Methods section in the main article), which gives wave-packets with 9 periods, with a total time extension of about 6t 0. If we only take the half-maximum width of the wave-packet, such length is about 3.5 periods, thus ∆t FWHM = 3.5 · (2/3) · t 0. Using this temporal extension and the velocity (whose derivation is described later in this section), we have calculated the spatial extension and reported it in Table III. It may be seen that this latter, which represents an estimation of the coherence length, becomes definitely larger than the neck for λ ph > 102 nm; as such, the overlap between the main and backreflected peaks spreads all over the neck and no well-defined maximum can be identified. The situation for λ ph > 402 nm is different: while the coherence length is much larger than the neck, we can still clearly identify peaks. These peaks are most likely a combined effect of the mass difference between holey and uniform regions and the oscillations within the wavepacket, which has now a very large spatial extent.
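As a worked example of this estimate (illustration only: both the frequency and the wavepacket velocity below are hypothetical placeholders for the simulated values):

```python
nu = 5.0e10                          # hypothetical phonon frequency (Hz)
v = 5.0e3                            # hypothetical wavepacket velocity (m/s)

T = 1.0 / nu                         # period
t0 = 1.5 * T                         # t0 = 3/(2*nu)
dt_fwhm = 3.5 * (2.0 / 3.0) * t0     # FWHM duration, i.e. 3.5 periods
L_c = v * dt_fwhm                    # coherence length
print(L_c * 1e9, "nm")               # 350 nm for these inputs
```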
In Supplementary Fig. 5 we report the mean free path ℓ env obtained by fitting the curves with Eq. 5, as well as the periodicity b for the wavelengths where oscillations are distinguishable. We first comment briefly on this latter parameter: one would expect b ∼ 377 nm, i.e. a periodicity reflecting that of the pore lattice, as due to the mass difference between regions with and without holes. This is true for 83 ≤ λ ph ≤ 100 nm. Surprisingly, for λ ph = 58 nm, we find b = 251.1(6) nm, pointing to an energy periodically localized at the borders of the holes, where the peak and its backreflection can constructively interfere within the short coherence length of the wavepacket. Wavelengths λ ph ≥ 402 nm have a periodicity half that of the lattice. As the kinetic energy corresponds to the squared intensity of the generated wavepacket, the wavelength of the wavepacket oscillations here will be half the phonon wavelength, i.e. 201-251 nm.
The oscillating signal is thus most likely the envelope of the wavepacket oscillations modulated by the function representing the mass profile along x.
Coming now to the mean free path, a steep reduction sets in just above 84 nm, pointing to a sudden effect of the nanostructure for wavelengths between 84 and 102 nm, at which point the mean free path reaches a minimum, before starting to increase again above 201 nm. In order to understand what is happening, we look at the time dependence of the wavepacket, following the time evolution of the kinetic energy at 18 positions in the sample: 9 at x positions corresponding to the center of the holes and 9 at the center of the neck. We report it in Supplementary Fig. 6, where the wavepacket can be followed in time at the 18 positions through the different coloured curves, at all simulated wavelengths, in a time window longer than the time needed for the first pulse to travel through the whole sample.
From a global look, it is evident that the only wavelength which propagates really well, with a single peak gradually losing intensity because of attenuation, is λ = 84 nm. At all other wavelengths, at each position in the sample, we can observe secondary peaks, due to interference, appearing later in time and which, at longer travelled distances, may become even more intense than the first pulse. This can be better seen in Supplementary Fig. 7 and Supplementary Fig. 8, where the temporal dependence is reported for all wavelengths at the same two central positions about 1300 nm from the beginning of the sample: at the center of the neck in the first case, and of the hole in the second.
Here we can see that even at 84 nm there are actually interference peaks, which, however, remain well below the intensity of the primary pulse. This is not the case for the other wavelengths, except for λ ph = 502 nm, where, although the interference peaks are intense, they remain slightly weaker than the main one. It is worth noticing that the presence of clearly defined interference peaks is only possible for multiple scattering events which keep the phase of the wavepacket; otherwise we should have a non-structured background, such as the one present at 58 nm after the second peak in Supplementary Figs. 7 and 8. As such, we are definitely in the presence of coherent interference.
Looking at these figures, we can better understand the envelopes of Supplementary Fig. 4, which show the maximum of the kinetic energy over all times (see Eq. 2): at long λ, high-intensity interference peaks appear, which may become more intense than the first peak, modifying the global slope of P ν (x). As such, at small wavelengths, the envelope attenuation reflects the attenuation of the main wavepacket, but this is no longer the case at large wavelengths, as for a given position x the maximum over all times can belong to a peak coming later in time and not to the first one.
For the wavelength range experimentally investigated, the propagation remains quite smooth, so we have fitted the amplitude of the first peak as a function of time to get the wavepacket lifetime (τ coh) and compare it with our experimental data. From the position of the first peak as a function of time we can also get the wavepacket velocity, and then calculate the mean free path for the first peak, ℓ coh. We have compared it with the one obtained from the envelope analysis, ℓ env, in Supplementary Fig. 6 of the main article, showing that at our experimental wavelengths the values are very close. Increasing the wavelength above 102 nm and below 502 nm, the values of the mean free path obtained using the two approaches significantly differ: ℓ env is systematically larger than ℓ coh, due to the interference peaks which are, over all times, more intense than the first one and thus dominate the slope of P ν (x). This means that there is an important constructive interference, which contributes to reducing the wavepacket lifetime, while increasing the energy which remains in the sample and propagates through the reflections. At λ ph = 502 nm, this is not true anymore: we start to be less sensitive to the effects of the nanostructure, as the wavelength becomes larger than both neck and pitch. In these conditions, the values of the mean free path obtained in both ways are again consistent.
Results are reported in Supplementary Table III for all wavelengths. In the main work, we have used the lifetime obtained from the temporal analysis of the first peak. If instead we use the ones obtained from the envelope mean free path and the theoretical velocity, we find that the result does not significantly change: our experimental data are still well reproduced using directly these theoretical lifetimes together with the experimental ones from the uniform membrane.
Supplementary Figure 6. Time dependence of the kinetic energy at the 18 positions in the sample. The kinetic energy is followed as a function of time at 18 positions in the sample, corresponding to the centers of the necks and of the holes. Different colours identify the different positions. The first pulse can be identified and followed in its propagation through the whole sample at λ = 58 and 84 nm; above these wavelengths, the backreflected peaks start to become as important as the first peak in the center of the sample.
One point of interest is the behavior observed for λ ph = 58 nm: here we have a single peak later in time, which is more intense than the first one. It is likely due to the backreflected peak from the first interface, which constructively interferes with the incoming first peak. Still, the absence of other peaks would indicate that interferences with successive echoes are destructive. Moreover, the arrival time of this peak is about 60 ps after the first one, which, at the velocity of this wavepacket, corresponds to a travelled distance of about 505 nm, i.e. twice the neck, which would be consistent with two backreflections between pores. The result is again a difference between ℓ env and ℓ coh. This could indicate the onset of a different phenomenology for wavelengths shorter than the ones probed here, with the loss of interference effects except for specific wavelengths. More simulations are needed to understand this behavior and the role of coherent and incoherent contributions at shorter wavelengths.
Supplementary Table III. Simulated wavepacket properties. τ coh is the lifetime of the main wavepacket obtained by fitting its amplitude as a function of time. The velocity has been obtained by fitting the position of the main wavepacket with respect to time. ℓ coh is the mean free path calculated as ℓ coh = v τ coh. ℓ env is the mean free path from the fit of the kinetic energy envelopes with Eq. 5, while L c is the coherence length calculated from the temporal extension t 0 of the wavepacket and its velocity: L c = ∆t FWHM · v, with ∆t FWHM the time extension of the wavepacket at half its maximum intensity.
As reported in the main text, we have fitted the experimental phonon lifetime in the uniform membrane using two contributions: the anharmonic phonon-phonon scattering, modelled using the Akhiezer approach [15], and the boundary scattering from the top and bottom surfaces, which depends on the roughness of these latter [16]. For the nanostructured membrane we have added to these two contributions the one from the nanostructure, as simulated, which we have called τ coh. This modelling could seem simplistic, since a more proper theoretical description of the experiment would imply considering roughness and anharmonicity in the calculations, as well as the finite penetration depth of the excitation. Still, its advantage is that, since the simulated material is ideal, with no roughness and no anharmonicity, the observed effect is clearly a coherent effect of the nanostructure geometry, leaving no ambiguity on its origin.
In Supplementary Table IV we report all the fitted contributions for the two samples. It may be seen that in the nanostructured one the coherent lifetime is longer than the total incoherent contribution at λ ph = 55.8 nm, by almost a factor of 2, but then it decreases, becoming smaller than the incoherent contribution already at λ ph = 83.7 nm (about 75% of it) and half the incoherent contribution at λ ph = 109.7 nm, clearly dominating the total phonon lifetime.
In order to quantify the coherent character, we calculate the relative coherent contribution to the total phonon decay rate, Γ coh /(Γ coh + Γ ph−ph + Γ b), where Γ coh = τ coh⁻¹, Γ ph−ph = τ ph−ph⁻¹ and Γ b = τ b⁻¹. We found that coherent mechanisms account for 36(2), 57.6(5) and 65(1)% of the total phonon attenuation for λ ph = 55.8, 83.7 and 109.7 nm, respectively, confirming their increasing relevance at our largest value of λ ph.
Supplementary Table IV. Contributions to phonon lifetime as obtained from the fit of the experimental data. τ ph−ph is the phonon lifetime due to phonon-phonon scattering, τ b is the phonon lifetime due to boundary scattering. τ inc is the total phonon lifetime for these two contributions. τ coh is the coherent contribution to the phonon lifetime as simulated by finite element calculations.
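A one-line sketch of the bookkeeping behind these percentages (the lifetimes below are hypothetical placeholders for the fitted values listed in Supplementary Table IV):

```python
def coherent_fraction(tau_coh, tau_ph_ph, tau_b):
    """Gamma_coh / (Gamma_coh + Gamma_ph-ph + Gamma_b), with Gamma = 1/tau."""
    g_coh, g_ph_ph, g_b = 1.0 / tau_coh, 1.0 / tau_ph_ph, 1.0 / tau_b
    return g_coh / (g_coh + g_ph_ph + g_b)

print(coherent_fraction(tau_coh=0.9, tau_ph_ph=2.0, tau_b=3.0))   # hypothetical values (ns)
```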
Supplementary Figure 4. Simulated kinetic energy envelopes. The envelope of E kin as a function of the x coordinate (i.e. parallel to the EUV TG wavevector) is reported for λ ph ranging from 58 to 102 nm (left-hand panel) and from 102 to 502 nm (right-hand panel).
Supplementary Figure 5. (a) Simulated wavepacket mean free path. The value of ℓ env, as obtained from fitting the E kin envelopes with Eq. 5, is reported as a function of λ ph. A steep decrease sets in above 84 nm (marked by an arrow) and leads to a shallow minimum around 150 nm. A second arrow marks the change of regime at ∼ 100 nm from a mean free path decreasing with λ ph to an almost constant mean free path (see text). (b) Periodicity of local maxima in the kinetic energy envelope: the parameter b from Eq. 5 is reported as a function of λ ph for the signals where oscillations are distinguishable: λ ph ≤ 102 nm and λ ph ≥ 402 nm. For this latter range, this parameter results from the combined effect of the pulse oscillations and the mass modulation. In both (a) and (b), error bars come from fit uncertainties.
Table I. Experimental parameters: U stands for the uniform sample, NS for the nanostructured one. The values of L TG, λ ex and L abs are given together with F and the estimated values of T s. Details and more parameters can be found in the Supplementary Material.
Table II. Fit parameters for the uniform and nanostructured membranes. U and NS stand for uniform and nanostructured. A ph, ν ph and τ ph are the amplitude, frequency and lifetime of the given phonon; A th and τ th are the thermal relaxation amplitude and characteristic time, reported only once for a given sample and value of L TG. Velocities are calculated as the tangent to the analytical Lamb dispersions, and the mean free path as ℓ = v τ ph.
"Materials Science",
"Physics"
] |
microRNAs Facilitate Comprehensive Responses of Bathymodiolinae Mussel Against Symbiotic and Nonsymbiotic Bacteria Stimulation
Background: As the dominant species inhabiting both cold seeps and hydrothermal vents, Bathymodiolinae mussels are one of the most successful megafauna in the deep sea. They thrive in dark and food-insufficient environments by harboring sulfur-oxidizing bacteria (SOB) and/or methane-oxidizing bacteria (MOB) in gill bacteriocytes and obtain the majority of their nutrition from them. Many attempts have been made to decode the mechanisms underlying their symbiosis, which yet remained largely undisclosed for years due to the lack of cultivable symbionts. In the present study, the global expression patterns of immune-related genes and miRNAs were surveyed in Gigantidas platifrons during bacterial challenges using enriched symbionts or nonsymbiotic Vibrio, in an attempt to reveal the molecular mechanisms underlying chemosynthetic symbiosis. Results: Multiple PRRs such as TLRs, LRRs and C1q were found vigorously modulated during the challenges while distinctly clustered between symbiotic and nonsymbiotic bacteria stimulation. As downstream of the immune response, dozens of immune effectors including HSP70, P450, CD82 and vacuolar protein sorting-associated protein were modulated simultaneously, contributing to the fine tuning of cellular homeostasis, lysosome activity and bacteria engulfment in either symbiotic or nonsymbiotic bacterial challenge. A total of 459 miRNAs were identified in the gill tissue of G. platifrons, while dozens of them were differentially expressed during the challenge. Among these miRNAs, some were also found in different expression patterns between symbiont and nonsymbiont challenges, targeting apoptosis- and phagosome maturation-related genes, including caspase8, inhibitor of apoptosis, cAMP-responsive element-binding protein, IκB, Rab and integrin. Conclusion: It was suggested that G. platifrons PRRs might function cooperatively to facilitate the specialized immune recognition of MOBs or nonsymbiotic bacteria. Meanwhile, a shared expression pattern of immune effectors was observed between bacterial challenges, indicating the conservative response of Bathymodiolinae mussels in promoting the adhesion and engulfment of symbionts and nonsymbionts. Nevertheless, the differentially expressed miRNAs were suggested to facilitate specialized modulation in symbiosis by repressing apoptosis- and phagosome maturation-related genes. With the orchestra of immune-related genes and miRNAs, G. platifrons mussels could therefore maintain a robust immune response against invading pathogens while establishing symbiosis with chemosynthetic bacteria.
Introduction
Symbiosis between microorganisms and animals or plants is considered to be an ingenious innovation of life [1]. Many multicellular organisms can live together with bacteria in symbiotic relationships, either loosely or tightly, and in epi-symbiosis or endosymbiosis [2]. It has been demonstrated that organisms can benefit significantly from symbiosis, gaining more metabolic potential, enlarging terrestrial habitats, and even receiving shielding from pathogens or predators [3]. In the wild, there is a diverse range of types of symbiotic relationships between host and bacteria (symbiosis in the present study is defined exclusively as mutualism rather than commensalism or parasitism). All symbionts evolved from free-living ancestors before coevolutionary processes occurred that resulted in a mutualistic relationship with a host [4]. The decoding of symbiosis, especially the biological processes underlying the establishment and maintenance of symbiosis, is therefore regarded as crucial for understanding the adaptation and evolution of life and has attracted much attention ever since [1,3].
As the front line in defending the self from non-self, the immune system of a host plays an indispensable role in controlling both the establishment and maintenance of symbiosis [5]. It has been demonstrated that symbionts can be acquired by hosts either horizontally from the environment or vertically from the germ cells of parents (mainly maternal) after immune recognition. Multiple molecular or cellular immune processes, including the excretion of antimicrobial peptides, phagocytosis, and apoptosis, can be vigorously modulated simultaneously, promoting colonization in symbiotic tissues or cells such as the light organ (squid), bacteriome (aphid), and trophosome (tubeworm) [6][7][8]. Once colonized, symbionts are further monitored and controlled by the host immune system, avoiding their overgrowth or drastic decline and maintaining the balance of symbiosis [5,9,10]. It is therefore interesting to know how symbionts are initially discriminated from nonsymbionts and how immune processes are further modulated during the establishment or maintenance of symbiosis [11]. With the help of state-of-the-art molecular tools such as genome and transcriptome sequencing, more molecules involved in immune recognition and signal transduction have now been identified and were found to be greatly diversified across species.
As a class of endogenously encoded small non-coding RNAs, microRNAs (miRNAs) are known to play indispensable roles in the post-transcriptional modulation of gene expression [12]. To date, more than 38,000 mature miRNAs have been identified across about 271 species (according to miRBase.org), and the majority of verified miRNAs can repress the translation of target genes after binding with the 3'-UTR region [12]. Accordingly, a diversity of biological processes including cell proliferation, growth, differentiation and the immune response can be further modulated by miRNAs [13]. Interestingly, some recent studies have also demonstrated the participation of miRNAs in host-symbiont interactions, especially in plants. For example, dozens of plant miRNAs including miR-171, miR-393 and miR-396 have been found to play crucial roles in the symbiosis between roots and fungi by targeting the nodule signaling pathway, the auxin signaling pathway, etc. [14]. In contrast with the thorough investigations in plants, few symbiosis-related miRNAs have been found in animals despite the large number of miRNAs identified to date.
Recently, some miRNAs in aphids or corals were found highly expressed in symbiont-housing tissue or in response to endosymbiont infection, which shed new light on their interaction with symbionts [15,16]. However, how exactly these miRNAs participate in symbiosis has remained largely uninvestigated.
As one of the dominant species in both cold seeps and hydrothermal vents, Bathymodiolinae mussels (Mytilidae: Bathymodiolinae) have been found in symbiosis with bacteria, bearing sulfur-oxidizing bacteria (SOB) and/or methane-oxidizing bacteria (MOB) in specialized epithelium cells of their gill tissue (bacteriocytes) [17]. It has been reported that Bathymodiolinae mussels can acquire symbionts horizontally from settlement onward and regain them throughout their life span, including adulthood [18]. Moreover, the symbionts were found to be first distributed in both the mantle and gills in juveniles before gradually being restricted to gill bacteriocytes [19]. Holobionts of Bathymodiolinae mussels and chemosynthetic bacteria were therefore regarded as an ideal model for investigating both symbiosis and deep sea adaptation against the extreme environment of seeps and vents (cold, dark, insufficient photosynthesis-based organic matter, but rich in methane or H 2 S) [20]. Many studies have thereafter been undertaken to determine the mechanisms underlying their symbiosis [20][21][22][23][24]. For instance, several reports have revealed the participation of PRRs in the innate response of Bathymodiolinae mussels against a Vibrio challenge or long-term acclimatization [25][26][27][28]. However, few studies were conducted with symbiont challenge due to the lack of cultivable symbiotic SOBs or MOBs, leaving the immune recognition and signal transduction underlying the onset and maintenance of symbiosis largely unknown.
Since it was first discovered in 1987 in Sagami Bay, Gigantidas platifrons (formerly named Bathymodiolus platifrons) has been found to be dominant in cold seeps and hydrothermal vents of the Okinawa Trough and the Formosa Ridge of the South China Sea [29][30][31]. It was found that G. platifrons only harbors MOBs in its bacteriocytes, making it an ideal model for investigating the immune response against symbionts. Recent studies found that multiple PRRs, including immunoglobulin domain-containing proteins, PGRPs, Toll-like receptors (TLRs), and C1qDC proteins, are present extensively in the genome of G. platifrons and might play a crucial role in symbiosis [20,32]. Besides, work by our lab has further surveyed the global immune response of G. platifrons after short-term decolonization (symbiont depletion) and found multiple PRRs (such as leucine-rich repeat proteins or LRRs) that respond simultaneously to MOB or nonsymbiotic bacterial challenge [21,23]. However, given that the regaining of symbionts in adult Bathymodiolinae is most likely accomplished in the symbiotic rather than the aposymbiotic state, it is still necessary to know whether host immune recognition could differ and whether symbionts could render the host a more robust immune response. Moreover, given the crucial role of miRNAs in host-symbiont interactions across plants and animals, it is also of interest whether Bathymodiolinae mussels encode miRNAs modulating symbiosis-related processes by targeting immune-related genes. In this study, the Bathymodiolinae mussel G. platifrons collected from the Formosa Ridge in the South China Sea was challenged with either symbiotic MOBs or nonsymbiotic Vibrio bacteria and subjected to both miRNA and transcriptome sequencing. The aims of the study were to (1) investigate the expression patterns of immune-related genes as well as miRNAs in G. platifrons holobionts against both symbiont and nonsymbiotic Vibrio bacterial challenges; (2) decode the subsequent immune effects mediated by genes that respond to symbiont challenge; and (3) survey the potential modulation of symbiosis-related processes by Bathymodiolinae miRNAs, hopefully providing more information on the interaction between Bathymodiolinae mussels and their chemosynthetic bacteria.
Results
Overview of G. platifrons gill tissue and transcriptome/miRNA sequencing
Some of the deep sea mussels G. platifrons were dissected immediately after retrieval by ROV to verify the fitness of the samples. As observed, the tissues of freshly collected mussels remained intact, while the gills were found to be composed of numerous homorhabdic filaments (Fig. S1 A). With 4',6-diamidino-2-phenylindole (DAPI) staining, we could see that gill filaments were made of a monolayer of epithelium cells overlying a central lumen containing haemocytes (Fig. S1 B, C). The symbiotic MOBs were distributed exclusively in the apical region of the bacteriocytes. Transmission electron micrographs further demonstrated that the majority of the gill cells were bacteriocytes, while other cells such as ciliated cells and mucous cells were also observed. Noticeably, though most symbionts were within membrane-delimited vacuoles, some were engulfed by lysosomes (Fig. S1 D, E).
After on-board acclimation, Gigantidas mussels were then challenged with sterilized seawater, enriched symbiotic MOBs, or nonsymbiotic V. alginolyticus (Table 1). After filtering out reads containing adapters, with over 10% unknown nucleotides, or with more than 50% low-quality bases, 118.40 Gbp of qualified data were retained and mapped against the Gigantidas genome by HISAT2. Consequently, over 69.17% of the sequencing reads were successfully aligned with the genome, while the mapping rate of each group ranged from 56.27-82.25% (Supplementary Table 1). Comparatively, a total of 331.79 M clean reads from eighteen libraries were obtained for miRNA sequencing, and 232.54 M reads were retained after quality control and used for genome mapping. As a result, about 189.73 M reads were mapped to the genome and were suitable for subsequent analyses such as miRNA identification and expression evaluation (Supplementary Table 1).
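For clarity, the filtering criteria stated above can be expressed as a small sketch; the per-base quality threshold is an assumption, since the text only specifies the adapter, unknown-base and low-quality fractions.

```python
def keep_read(seq, quals, has_adapter, q_min=20):
    """Drop reads with adapters, >10% unknown bases (N), or >50% low-quality bases."""
    if has_adapter:
        return False
    if seq.upper().count("N") / len(seq) > 0.10:
        return False
    if sum(q < q_min for q in quals) / len(quals) > 0.50:
        return False
    return True

# Example: a read with 40% unknown bases is discarded.
print(keep_read("ACGTNNNNNNNNACGTACGT", [30] * 20, has_adapter=False))   # False
```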
Differentially expressed genes and miRNAs during bacterial challenge
All the aligned reads were then processed for transcript assembly and expression evaluation. As a result, 24,595 genes out of 33,962 were found to be expressed among all groups (Fig. 1A, Supplementary Table 2). In comparison with the CT12 group, where mussels were injected with sterilized seawater for 12 h, a total of 95 genes were found significantly up-regulated in the EN12 group, where Gigantidas were challenged with enriched MOB symbionts for 12 h (Supplementary Table 3). Meanwhile, a total of 59 genes were also down-regulated in the EN12 group. When Bathymodiolinae mussels were challenged with nonsymbiotic V. alginolyticus, about 182 genes were found significantly increased at 12 h (VA12 group, in comparison with the CT12 group) while 61 genes decreased. When mussels were challenged with symbionts for 24 h, only 73 genes were found robustly up-regulated (EN24) while the transcripts of 65 genes were down-regulated (in comparison with the CT24 group). Comparatively, transcripts of 206 genes were promoted at 24 h post V. alginolyticus challenge while those of 82 genes were down-regulated (VA24 group, Supplementary Table 3). A Venn diagram of these DEGs was subsequently constructed (Fig. 1B). It transpired that about 39 genes were responsive in both the EN12 and VA12 groups, while 33 genes were regulated remarkably in both the EN24 and VA24 groups. Only 17 of 269 genes were found to have been vigorously modulated in both the EN12 and EN24 groups, while 51 of 471 genes were found to be responsive in both the VA12 and VA24 groups.
For miRNA sequencing, a total of 459 miRNAs were identified in the gill tissue of G. platifrons (Supplementary Table 4). Among these miRNAs, 386 were found to be conserved across species by sharing the same seed region and were therefore designated as known miRNAs. A total of 73 miRNAs were reported for the first time given their seed regions and suggested as novel ones. Moreover, about 105 miRNAs were found with two or more precursors in the genome (up to six for gpl-miR-544a, see Fig. S3). The overall expression levels of all miRNAs in each group were then compared using box plots of the log 10 (TPM + 1) values (Fig. 1C). The bottom and top of each box represent the first and third quartiles of the corresponding group, while the line inside the box stands for the median value. It transpired that the medians in the CT12, EN12 and VA24 groups were similar, as were those in the VA12, CT24 and EN24 groups.
Noticeably, the third quartile of the EN24 group was significantly lower than in the remaining groups, while that in the VA24 group was markedly higher.
The differentially expressed miRNAs (DE miRNAs) were then determined (Supplementary Table 5, Fig. S4). Consequently, the expression levels of 30 miRNAs were promoted in the EN12 group while those of 31 miRNAs were repressed when compared with the CT12 group. Comparatively, a total of 13 miRNAs were up-regulated in the VA12 group and 24 were down-regulated. When challenged for 24 h, only 21 miRNAs were differentially expressed in the EN24 group, including 11 increased ones and 10 decreased ones. Similarly, 20 miRNAs were vigorously modulated in the VA24 group, among which 14 miRNAs were promoted and six were repressed. Among these DE miRNAs, 19 were responsive in both the EN12 and VA12 groups, among which three miRNAs were found in opposite patterns (gpl-novel-47, gpl-miR-7538 and gpl-miR-4981, increased in the EN12 group yet decreased in the VA12 group). Of the remaining 16 DE miRNAs that shared a similar pattern, only four were up-regulated (gpl-novel-49, gpl-miR-479a, gpl-novel-72 and gpl-miR-3610). Meanwhile, about seven miRNAs were found differentially expressed in both the EN24 and VA24 groups, and none was in an opposite pattern. In detail, four miRNAs including gpl-miR-9570, gpl-miR-9272, gpl-miR-190 and gpl-miR-4981 were up-regulated while gpl-miR-D16, gpl-miR-5324 and gpl-miR-100 were down-regulated.
Functional annotation of DEGs and targets of DE miRNAs
GO annotation of all DEGs was subsequently conducted by Blast2GO and visualized by WEGO. As a result, immune-related functions and processes, such as signal transduction, the cellular response to stimuli, immune responses, and cell death, were found and were suggested to have been modulated during both symbiotic and nonsymbiotic bacterial challenges (Fig. S5). Moreover, genes involved in neurotransmitter binding, transcription regulation, cellular communication, and biological adhesion were also vigorously regulated by Bathymodiolinae mussels in the MOB challenge at 24 h (Fig. S5 B). It was also found that more immune-related processes, such as scavenger receptor activity, hormone metabolic processes, and cell killing, were vigorously modulated during the V. alginolyticus challenge (Fig. S5 C, D).
The target genes of all DE miRNAs were then predicted (Supplementary Table 6). Consequently, a total of 744 unique genes were predicted as putative targets of the DE miRNAs. GO distribution analysis further demonstrated that multiple immune-related processes, such as immune system process and response to stimulus, could be modulated by the above DE miRNAs (Fig. S6).
Distinct expression pattern of PRRs in symbiotic and nonsymbiotic bacterial challenges
As important molecules in immune recognition, 29 PRRs, including 17 C1q proteins, two IL17, three low-density lipoprotein receptor-related proteins (LRPs), PGRP_scaffold2290, LRR_Scaffold_175.36, LRR74_Scaffold_93.19, TLR2_scaffold1476, CD209_Scaffold_209.75, and low-affinity immunoglobulin epsilon Fc receptor (FCER_Scaffold_21.25), were also identified as immune-responsive genes in either MOB or V. alginolyticus challenges (Supplementary Table 7). The expression patterns of the above PRRs were therefore surveyed. It was found that immune-responsive PRRs clustered distinctly between the EN and VA groups given their expression levels (Fig. 2). The PRRs in the EN12 and EN24 groups were found to branch together first before clustering with the VA12 and VA24 groups.
Expression pattern of immune effectors responsive to bacterial challenges
As described previously, multiple immune-related processes could be modulated by MOB or V. alginolyticus challenges. The expression patterns of immune effectors during the challenges were surveyed for further confirmation. As a result, multiple genes involved in immune signal transduction, cytokine expression, cell migration and adhesion, and oxidation-redox homeostasis were found to be vigorously regulated (Fig. 3, Supplementary Table 8). In detail, three mammalian ependymin-related protein (EPDR) genes, two GTPase IMAP family member 4 (GIMA4) genes, two calmodulin (CaM) genes, and two cytochrome P450 genes, along with the caspase8 (Casp8), heat shock protein 70 (HSP70), and cathepsin L (catL) genes, were significantly modulated in the EN12 group. However, only three HSP70 genes and two E3 ubiquitin-protein ligase TRIM genes, along with the protein mab-21, neuronal acetylcholine receptor (nAChR), and baculoviral IAP repeat-containing protein (BIRC) genes, were found to be vigorously modulated at 24 h post MOB challenge. In addition to the genes mentioned above, immune genes such as the inhibitor of apoptosis (IAP), endoplasmic reticulum resident protein (ERP), G-protein coupled receptor (GPR), macrophage migration inhibitory factor (MIF), and lipopolysaccharide-induced TNF-alpha factor (LITAF) were also found responsive at 12 h post V. alginolyticus challenge. Only two P450 genes, three HSP70 genes, and four TRIM genes, along with the nAChR, BIRC, and superoxide dismutase (SOD) genes, were vigorously modulated when Gigantidas was stressed by V. alginolyticus for 24 h.
Diversity of immune-related signal transducers targeted by DE miRNAs responsive to bacterial challenges
Besides these differentially expressed PRRs and immune effectors, multiple immune-related genes were also targeted by DE miRNAs from either the MOB or the nonsymbiont challenge. In detail, four PRRs including TLR4 (TLR4_scaffold2249) and LRRs (LRR_Scaffold_832.4, LRR_Scaffold_405.8 and LRR74_Scaffold_342.14), together with two immune effectors including lysosomal protective protein (CSTA) and matrix metalloproteinase-2 (MMP), were found to be targeted by miRNAs differentially expressed in the EN12 group (Fig. 4A). Meanwhile, phagocytosis-related receptors or signal transducers such as CD82, vacuolar protein sorting-associated protein 33 (VSP33), Ras-related protein Rab-5C (Rab5C) and integrin beta (INTB), along with apoptosis modulators including IAP, Ras-responsive element-binding protein (RREB1) and cAMP-responsive element-binding protein 2 (CREB2), were also suggested as targets of DE miRNAs in the EN12 group. When the Bathymodiolinae mussels were challenged by MOB for 24 h, only one PRR (LRR74_Scaffold_342.14) was found to be continuously targeted (Fig. 4B). Notwithstanding, a diversity of immune-related transducers such as CaM, NF-kappa-B inhibitor alpha (IκB) and TNF receptor-associated factor 6 (TRAF6), along with caspase8 and VSP33, were now putatively being modulated.
Global immune response of Gigantidas against MOBs and nonsymbiotic bacteria
It has been demonstrated that all G. platifrons are in a tight association with type I methanotrophs in their bacteriocytes and can obtain nutrition directly from them [33]. This close relationship between the host and symbiont makes them an ideal model for understanding how organisms recognize their chemosynthetic symbionts [20]. However, the mechanisms controlling the symbiosis between G. platifrons and symbiotic MOBs still remain largely unknown due to the unavailability of cultivable symbionts and accessible mussels. Several methods have been used to harvest symbionts from Bathymodiolinae mussels to date, including enrichment by differential centrifugation and density gradient centrifugation [33][34][35][36]. It has been found that far fewer MOBs can be yielded from density gradient centrifugation compared to differential centrifugation, although their purity is better. In the present study, a modified method based on differential centrifugation was applied to obtain symbiotic MOBs, improving the purity with little loss of yield. Inspired by other immunological studies, an extra step of heating at 56 ℃ for 30 min was applied to the MOBs before they were used for the challenge [37,38]. This procedure deactivated the host proteins without denaturing their tertiary structure, which could minimize influences brought by byproducts of the MOB enrichment, such as cytokines and complement, while maximizing the immune response induced solely by MOBs. MOBs successively harvested as described above, along with heat-treated V. alginolyticus, were then quantified and subjected to injection (Fig. S1, S2).
Recent studies have investigated the expression patterns of the immune-related genes of Bathymodiolinae during bacterial challenge by qRT-PCR, demonstrating the robust response of the host immune system [25,27,28,39]. However, without state-of-the-art molecular tools, these studies failed to show the host response globally. The successful application of next generation sequencing in deep sea mussels now provides a better solution [26,32]. In the present study, expression alterations of Gigantidas genes during either symbiotic MOB or nonsymbiotic bacterial challenges were surveyed globally. Interestingly, it was found that the overall number of immune-responsive genes of Gigantidas mussels against symbiont challenge was far smaller than that during nonsymbiotic bacterial challenge, in symbiont-depleted Gigantidas mussels, or in the immune response of shallow-water mussels such as Mytilus coruscus [23,40]. Notwithstanding, similar phenomena were also observed in other holobionts such as coral, where only dozens of DEGs were annotated after symbiont challenge [41,42]. As suggested by Gross et al., the interaction between host and symbionts could undergo a pathogenic colonizing stage at first and then a beneficial stage [5]. Meanwhile, unlike for pathogens, the host immune response against symbionts could be highly adapted to protect symbionts rather than eliminating them, which might therefore result in the minimized immune response observed here. The mild response caused by symbionts could also be energy-saving, as the main purpose of symbiosis is to improve the nutritional state of the two partners. Interestingly, about 5%-15% of Gigantidas miRNAs were responsive to either MOB or V. alginolyticus challenges (Fig. 1D). miRNAs are known as crucial modulators of gene expression at the post-transcriptional level. These DE miRNAs could also strengthen the immune response of the host. Moreover, though only hundreds of genes or miRNAs were found responsive to bacteria, most of them displayed a spatiotemporally specific expression pattern between groups. For example, only 39 out of 358 DEGs and 19 out of 79 DE miRNAs were the same at 12 h post MOB and V. alginolyticus challenges (the numbers changed to 33 out of 393 genes and 7 out of 34 miRNAs at 24 h post challenge) (Fig. 1B, D). It was therefore concluded that Gigantidas might respond to MOBs and V. alginolyticus in two distinct ways. Nevertheless, GO analysis of DEGs and targets of DE miRNAs demonstrated that multiple immune-related processes, such as signal transduction, the cellular response to stimuli, immune responses, and cell death, were modulated in both the MOB and V. alginolyticus challenge groups. Noticeably, some immune-related processes and functions were only found in certain groups. For example, immune processes such as neurotransmitter binding, transcription regulation, cellular communication, and biological adhesion were only regulated after the MOB challenge, while scavenger receptor activity, hormone metabolic processes, and cell killing were vigorously modulated after the V. alginolyticus challenge. These findings confirmed the involvement of cell communication and cell adhesion in Bathymodiolinae symbiosis, which could also be observed in holobionts such as coral and squid [2,[43][44][45]. Comparatively, scavenger receptor activity and cell killing are also well known in pathogen-induced immune responses, with indispensable roles in the elimination of pathogens [44][45][46][47].
Distinct expression patterns of Gigantidas PRRs in response to symbiotic MOBs
While diverse immune processes were modulated after the MOB challenge, how they were triggered remained largely unknown. It is well known that the deep-sea mussel Gigantidas can only rely on innate immunity for either symbiosis or pathogen elimination, in which PRRs play an irreplaceable role by recognizing symbionts or pathogens and further activating the subsequent immune processes [48]. Without immunoglobulins or acquired immunity, how Gigantidas discriminates MOBs from non-symbionts remained largely unknown. The expression pattern of PRRs during both the MOB and V. alginolyticus challenges was then investigated. Consequently, it was found that Gigantidas PRRs were differentially modulated between challenges at either the transcriptional or the post-transcriptional level, while the PRRs in the EN12 and EN24 groups shared a more similar pattern than those in the VA12 and VA24 groups (Figs. 2, S7). It seems that different combinations of PRRs might function cooperatively as "immunoglobulins" to specifically recognize different bacteria. Similar results have been reported in other molluscs such as the oyster Crassostrea gigas, where some PRRs were found to be responsive against multiple PAMPs, while others were responsive only to certain PAMPs or pathogens [49][50][51]. More interestingly, 33 PRRs (four of which were potentially up-regulated by miRNAs) were found to be dramatically up-regulated during the MOB and V. alginolyticus challenges. Among these PRRs, multiple C1q proteins, TLR2_scaffold1476, along with LRR74_Scaffold_342.14, were found to be vigorously modulated after the MOB challenge, while PGRP_scaffold2290, LRR_Scaffold_175.36, LRR74_Scaffold_93.19, TLR2_scaffold1476, TLR4_scaffold2249, and VLR_Scaffold_1558.11, as well as the remaining C1q proteins, were only responsive to the V. alginolyticus challenge. It was suggested that these PRRs might facilitate the specialized immune recognition of MOBs or nonsymbiotic bacteria correspondingly. Though few of them have been verified in vivo or in vitro, our previous research has found the participation of C1q, TLR2 and LRR in decolonization of Gigantidas or bacterial challenge, reconfirming their role in symbiont recognition [23]. Comparatively, though previously found to be involved in symbiont recognition in B. septemdierum, B. azoricus, Hydra spp. and E. scolopes, LRRs and PGRPs were more likely involved in the recognition of nonsymbionts in G. platifrons given their expression pattern [22,24,52,53]. Moreover, C1q proteins were found to be ubiquitously modulated during the immune response. As reported, C1q proteins are widely expressed and massively expanded in molluscs including mussels [20,54,55]. Given that C1q proteins can bind a diversity of immune-related proteins and act in concert to trigger subsequent immune processes, it was suggested that C1q proteins could function as scaffolds of PRRs and contribute to the immune recognition of either MOBs or nonsymbiotic bacteria correspondingly [56,57].
It is well known that the interaction between PRRs and PAMPs relies greatly on their spatial structure and could therefore vary considerably [58,59]. All of the up-regulated PRRs were clustered according to their protein sequences. Interestingly, all C1q proteins were found to be divided into two clusters, while the majority of proteins in the upper cluster (six of eight proteins) were MOB-responsive. On the other hand, about five of eight C1q proteins in the other cluster were V. alginolyticus-responsive (Fig. 2). Given the aforementioned speculation that C1q proteins might act as scaffolds of other PRRs and contribute to the recognition of symbionts or nonsymbionts, it was therefore of interest to know how these PRRs were modulated and how the structural divergence influences the recognition of symbionts. However, the present study failed to answer these questions due to the limited investigation. Nevertheless, the differentially expressed PRRs would undoubtedly modulate the expression of immune effectors and modulators.
miRNAs facilitate the host with a comprehensive modulation network in symbiosis-related processes
As mentioned previously, multiple immune processes could be vigorously regulated by Gigantidas either during symbiosis or during pathogen elimination. The expression alterations of immune-related genes were surveyed manually to reconfirm the above conclusion. As a result, multiple immune effectors and modulators, including Casp8, IAP, BIRC, LITAF, MIF, CaM, nAChR, ERP, and catL, were found to be vigorously modulated at the transcriptional level after the challenge. Interestingly, though distinct expression patterns were found for PRR genes, multiple immune effectors with similar functions were found to be responsive to both the MOB and V. alginolyticus challenges. For example, several P450 and HSP70 genes, along with BIRC genes, were down-regulated in both challenges, while EPDR genes and nAChR genes were up-regulated during the stress period. Given their conserved function across species, it was therefore suggested that the robust modulation of P450 and HSP70 could contribute synergistically to the maintenance of homeostasis of Gigantidas during symbiosis or pathogen elimination [60][61][62]. Similarly, EPDR genes are known to play an indispensable role in promoting matrix-mediated cell adhesion, while nAChR has a crucial role in the ACh-mediated neuroendocrine-immune system of both vertebrates and invertebrates [63][64][65][66]. The shared expression pattern of the above genes indicates a conserved response of Bathymodiolinae mussels, which could promote adhesion to MOBs and nonsymbiotic V. alginolyticus and facilitate the subsequent symbiosis or elimination. Noticeably, some miRNAs were also responsive to both the MOB and V. alginolyticus challenges while putatively promoting the expression of HSPs (gpl-miR-8386, gpl-novel-28, gpl-miR-4045, gpl-miR-2320), CD82 (gpl-miR-4045), VSP33 (gpl-miR-8386), CDPKs (gpl-miR-2469, gpl-miR-4045, gpl-novel-49) and CREB2 (gpl-miR-4627). Among these genes, CD82 and VSP33 are known as crucial receptors or regulators in phagocytosis and are therefore suggested to promote the engulfment of either symbionts or nonsymbionts.
Besides these shared genes, expression differences in multiple positive regulators of cytokines, such as MIF and LITAF, and in some apoptosis-related genes such as caspase8, could also be observed between stimulus groups. More interestingly, it was found that multiple Gigantidas miRNAs targeting the above processes or genes demonstrated opposite expression patterns in the MOB and V. alginolyticus challenges, resulting in a more distinct immune response. For instance, apoptosis has been suggested as an effective way of eliminating pathogens during massive infection, sacrificing the minority and protecting the majority [67][68][69]. Comparatively, repression of apoptosis after recognizing symbionts would therefore be expected, as symbionts should be unthreatening, if not beneficial, to host cells. Consequently, it was found that Caspase8, an upstream protease that activates the cascade of caspases responsible for cell death, was repressed at the transcriptional level after the MOB challenge, while IAPs, which are crucial negative regulators of apoptosis, were repressed in the V. alginolyticus challenge [67,70,71]. Consistent with the transcriptome results, it was found that miRNAs targeting anti-apoptotic genes such as IAP (gpl-miR-5887) and CREB2 were repressed after the MOB challenge, while miRNAs targeting pro-apoptotic genes including caspase8 (gpl-miR-9272) and IκB (gpl-miR-27) were promoted [72]. Considering that most miRNAs are negative regulators of gene translation, apoptosis of host cells was therefore suggested to be repressed when stimulated by MOBs instead of nonsymbionts. In addition, genes involved in phagosome localization and lysosome-mediated degradation were also differentially modulated between challenges. For instance, Rab5C and INTB are important molecules in the maturation and translocation of the phagosome and were suggested to be repressed by Gigantidas miRNAs (gpl-novel-12, gpl-novel-49) that were up-regulated in the MOB challenge [73,74]. In comparison, INTB was suggested to be promoted in the nonsymbiont challenge. On the other hand, though phagosome maturation and translocation were speculated to be repressed by the MOB challenge, one catL gene was found to be significantly up-regulated simultaneously. As an important cysteine protease, catL has been found to have a crucial function in lysosome-mediated degradation [75,76]. It has been suggested that lysosome-mediated degradation could play an indispensable role in the nutrition acquisition of Bathymodiolinae mussels as well as in the population control of symbionts [20,77,78]. The drastic increase in catL transcripts could therefore enhance the bioactivity of lysosomes in host gills after contacting MOBs and promote the above processes directly. Actually, modulation of cell apoptosis and lysosome-mediated degradation was also observed in our previous study during decolonization [23]. It was found that massive IAPs were up-regulated while multiple catL genes were down-regulated. Their expression pattern indicated repressed cell apoptosis and lysosome activity during symbiont depletion and further confirmed our findings herein.
Conclusions
The present study has demonstrated how Gigantidas responds to symbionts or nonsymbionts by investigating the expression patterns of both protein-coding genes and miRNAs. It is worth noting that Gigantidas PRRs were differentially modulated in response to symbiotic MOBs, while multiple immune-related transducers or effectors could be recruited to promote the homeostasis and lysosome activity of the host and the engulfment of symbionts. Notwithstanding, a diversity of immune-related pathways was shared between symbiont-induced and nonsymbiont-induced responses. However, the distinct expression pattern of symbiont-induced miRNAs could further facilitate a more comprehensive modulation network for symbiosis by repressing apoptosis and phagosome maturation. Though the interactions between miRNAs and Gigantidas genes were insufficiently verified, the present results have demonstrated the complexity of the symbiosis between Gigantidas mussels and MOBs.
Materials And Methods
Animal collection, maintenance, and bacterial challenge
The G. platifrons specimens used in the study were collected from cold seeps on the Formosa Ridge in the South China Sea (22°06'N, 119°17'E) during an expedition by the R/V Kexue in 2017. After collection by the remotely operated vehicle (ROV) Faxian and retrieval on deck, mussels were transferred immediately into the onboard aquarium and maintained at atmospheric pressure in filtered circulating seawater (4℃) with a CH4 supplement [21]. After acclimation for 48 h, 54 similarly sized G. platifrons mussels (length ranging from 70 to 100 mm) were randomly collected and designated as the CT, EN, and VA groups for subsequent bacterial challenges. Some mussels were subjected to tissue dissection soon after retrieval, and gill tissue was collected for subsequent paraffin sections, transmission electron microscopy, and MOB purification.
Symbiotic MOBs were purified using a previously reported method with some modifications [33]. In short, gill tissue was homogenized on ice and filtered through sterilized gauze and nylon filters with successive pore sizes of 10, 5, and 3 µm. The flow-through was initially centrifuged at 300 g and 4℃ for 5 min to discard host cells and then at 4,000 g and 4℃ for 15 min to collect symbiotic MOBs. After washing three times with sterilized seawater, the enriched MOBs were suspended in sterilized seawater for use. The purity of the enriched MOBs was further analyzed by scanning electron microscopy. The genomic DNA of the MOBs was also extracted using a Mollusc DNA kit (Omega) and subjected to fragment cloning and quantitative real-time polymerase chain reaction (qRT-PCR) of the pmoA gene. A standard curve of the pmoA gene, relating copy number and Ct value, was generated simultaneously and used for the quantification of the enriched MOBs. The nonsymbiotic bacterium, Vibrio alginolyticus (isolated from the macrofauna of the Formosa Ridge cold seep and kindly provided by Dr. Li Sun from the Institute of Oceanology, Chinese Academy of Sciences), was cultured overnight in 2216E medium at 18℃ and collected by centrifugation. As demonstrated previously, the majority of proteins can be inactivated by heating at 56℃ for 30 min without collapsing their tertiary structure. Therefore, both MOBs and V. alginolyticus were further subjected to heat treatment to deactivate the host proteins or extracellular products produced during enrichment or culture. Then, the bacteria (MOB and V. alginolyticus) were diluted to a concentration of 1 × 10^6 copies/mL with filtered seawater before use. Mussels in the CT, EN or VA groups were then challenged with sterilized seawater, MOBs or V. alginolyticus (100 µL per individual), respectively, and maintained in independent tanks (10 L) before sampling. No mortality was observed during the experiment, and gill tissues from three random mussels in each group were collected at 12 and 24 h post injection. All samples employed for mRNA or small RNA extraction were stored in Trizol reagent (Invitrogen) or liquid nitrogen. Each trial was conducted with three replicates.
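The quantification step described above reduces to inverting a linear standard curve of Ct against log10 copy number and computing a dilution factor. The sketch below illustrates that arithmetic; all calibration points and the sample Ct are hypothetical values, not data from this study.

```python
import numpy as np

# Hypothetical calibration points for the pmoA standard curve: log10(copy number) vs. Ct.
log10_copies_std = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
ct_std = np.array([30.1, 26.8, 23.4, 20.1, 16.7, 13.4])

# Fit Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(log10_copies_std, ct_std, 1)

def copies_from_ct(ct):
    """Invert the standard curve to estimate pmoA copy number from a measured Ct."""
    return 10.0 ** ((ct - intercept) / slope)

# Estimate the concentration of an enriched MOB suspension and the dilution needed
# to reach the working concentration of 1e6 copies/mL used for injection.
sample_ct = 18.5                       # hypothetical measurement
copies_per_ml = copies_from_ct(sample_ct)
dilution_factor = copies_per_ml / 1e6
print(f"{copies_per_ml:.2e} copies/mL -> dilute {dilution_factor:.0f}-fold")
```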
RNA Extraction, Library Construction, And RNA-seq Of All Samples
Total RNA for transcriptome sequencing was extracted with Trizol reagent (Invitrogen). Meanwhile, small RNA extraction from gill tissue was conducted using the Purelink miRNA isolation kit (Invitrogen) according to the manual. The integrity of total RNA was first confirmed by both agarose gels and a Bioanalyzer 2100 (Agilent). The purity and concentration of total RNA were then determined using a NanoPhotometer spectrophotometer (Implen) and a Qubit Fluorometer (Invitrogen). Qualified RNA samples were subsequently used for library construction, following the instructions for the Illumina HiSeq 2500 platform. In brief, total RNA for transcriptome sequencing was initially enriched for mRNA with Oligo(dT) beads and subjected to fragmentation afterward. After synthesis of the first-strand cDNA and then of the second strand, the products were ligated with sequencing adapters. After PCR amplification, all cDNA libraries were finally sequenced on the Illumina HiSeq 2500 platform with paired-end reads. Comparatively, qualified small RNA was first subjected to 3' and 5' adapter ligation and amplified by PCR. After size selection, the purified products were also sequenced using the Illumina HiSeq 2500 platform. The resulting sequencing data were then uploaded and deposited at the National Center for Biotechnology Information (https://www.ncbi.nlm.nih.gov/, BioProject NO. PRJNA540074, PRJNA613553).
Genome mapping and identification of differentially expressed genes (DEGs) or miRNAs (DE miRNAs)
For transcriptome sequencing, quality control was first conducted on the raw data with FASTP (https://github.com/OpenGene/fastp). Reads with adapters, containing more than 10% unknown nucleotides, or with more than 50% of bases having a Q-value ≤ 20 were considered low quality and removed automatically. The remaining reads were then mapped to the G. platifrons genome using HISAT2 with default parameters. The genome was originally reported by Sun et al. and updated by our lab [20]. All mapped reads were subsequently assembled into transcripts by Cufflinks, and the expression levels of all Gigantidas genes were estimated after normalization as fragments per kilobase of transcript per million mapped fragments (FPKM). The differentially expressed genes (DEGs) between the stimulation and control groups were finally determined by Cuffdiff, with fold changes ≥ 2 and a false discovery rate-adjusted P < 0.05.
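The FPKM normalization and the DEG selection rule lend themselves to a compact numerical sketch. The snippet below is not a re-implementation of Cufflinks or Cuffdiff; it only illustrates the FPKM formula and the fold-change/FDR filter on hypothetical per-gene counts, lengths and P values.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def fpkm(counts, gene_length_bp, total_mapped_fragments):
    """Fragments per kilobase of transcript per million mapped fragments."""
    return counts * 1e9 / (gene_length_bp * total_mapped_fragments)

def select_degs(fpkm_control, fpkm_challenge, pvalues, min_fold=2.0, alpha=0.05, eps=1e-6):
    """Flag genes with |fold change| >= 2 and FDR-adjusted P < 0.05 (Benjamini-Hochberg)."""
    fold = (fpkm_challenge + eps) / (fpkm_control + eps)
    _, p_adj, _, _ = multipletests(pvalues, method="fdr_bh")
    return (np.maximum(fold, 1.0 / fold) >= min_fold) & (p_adj < alpha)

# Hypothetical three-gene example.
ctrl = fpkm(np.array([120.0, 40.0, 5.0]), np.array([1500, 900, 2100]), 2.1e7)
chal = fpkm(np.array([30.0, 95.0, 6.0]), np.array([1500, 900, 2100]), 1.9e7)
print(select_degs(ctrl, chal, pvalues=np.array([0.001, 0.004, 0.60])))
```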
For miRNA sequencing, the raw data were also filtered with the FASTP toolkit to remove low-quality reads (reads containing more than one base with a Q-value ≤ 20 or containing unknown nucleotides) as well as reads with a 5' adapter or polyA, without a 3' adapter, or shorter than 18 nt. The clean tags were then aligned against small RNAs deposited in the GenBank database along with the Rfam database to remove rRNA, scRNA, snoRNA, snRNA and tRNA. The remaining reads were further mapped to the reference genome of G. platifrons using bowtie-1.00 software to discard those located in exon or intron regions. The resulting reads were finally processed with miRDeep2 software for miRNA identification. Noticeably, mature miRNAs and precursor sequences from other species deposited in miRBase were employed as references for miRDeep2 to identify the known and novel miRNAs in G. platifrons. Meanwhile, miRNA candidates with a raw count number of less than 18 were regarded as low abundance and excluded from subsequent expression analysis. The expression levels of all miRNAs were calculated and normalized as transcripts per million (TPM). Differentially expressed miRNAs (DE miRNAs) between groups were further determined if the fold change was ≥ 2 and the false discovery rate-adjusted P value ≤ 0.05.
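On the miRNA side, the low-abundance filter and the TPM normalization reduce to a few lines. Again this is only an illustration of the stated cut-offs (raw count ≥ 18, fold change ≥ 2, adjusted P ≤ 0.05), not of the miRDeep2 pipeline itself.

```python
import numpy as np

def tpm(tag_counts, total_clean_tags):
    """Transcripts per million for miRNA tag counts."""
    return tag_counts * 1e6 / total_clean_tags

def de_mirnas(counts_ctrl, counts_chal, total_ctrl, total_chal, p_adj,
              min_raw=18, min_fold=2.0, alpha=0.05, eps=0.01):
    # Exclude low-abundance candidates, then apply the fold-change and FDR thresholds.
    abundant = (counts_ctrl >= min_raw) | (counts_chal >= min_raw)
    fold = (tpm(counts_chal, total_chal) + eps) / (tpm(counts_ctrl, total_ctrl) + eps)
    return abundant & (np.maximum(fold, 1.0 / fold) >= min_fold) & (p_adj <= alpha)
```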
Target prediction of DE miRNAs and Gene ontology (GO) analysis of miRNA targets or DEGs
The target genes of all DE miRNAs were predicted by miRanda using the 3'-UTR information of all G. platifrons genes given the genome annotation. Based on our experience, stringent parameters were set for the miRanda software when conducting the prediction, with the score threshold raised to 155 and the energy threshold adjusted to -23.
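Applied to a table of candidate miRNA-target pairs, the two thresholds act as a simple filter. The records below are invented for illustration (the miRNA and gene identifiers are not real predictions from the study), and parsing of the actual miRanda output is omitted.

```python
# (miRNA, target gene, alignment score, minimum free energy in kcal/mol); hypothetical records.
candidates = [
    ("gpl-miR-example-1", "target_gene_A", 162.0, -25.4),
    ("gpl-miR-example-2", "target_gene_B", 150.0, -21.0),
    ("gpl-miR-example-3", "target_gene_C", 158.0, -23.5),
]

SCORE_MIN, ENERGY_MAX = 155.0, -23.0   # thresholds used for miRanda in this study

retained = [c for c in candidates if c[2] >= SCORE_MIN and c[3] <= ENERGY_MAX]
print(retained)   # only the first and third records pass both thresholds
```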
A GO distribution analysis of all candidate targets or DEGs was conducted by Blast2GO (https://www.blast2go.com/) and further visualized by WEGO (http://wego.genomics.org.cn/). The annotation file of the Gigantidas genome was deposited in Figshare (https://figshare.com/) under the file name bapl_v4_annt_with_gene_ID_txt. Immune-related genes were selected manually and subjected to expression clustering by the pheatmap package (https://cran.r-project.org/web/packages/pheatmap/index.html). Full-length protein sequences of PRR genes were then retrieved based on the genome information and subjected to a phylogenetic analysis by Seaview.
Competing interests
… the study, and assisted with the analysis and interpretation of the results. ZSZ helped with sample collection and morphological analysis of both mussels and bacteria. CL and LC conducted the mussel sampling during the cruise. LCL conceived the study, coordinated the experiment, and helped draft the manuscript. All authors gave their final approval for publication.
Figure 1
Overview of differentially expressed genes (DEGs) or miRNAs across samples. (A) A total of 95 and 59 genes were found to be either up- or down-regulated in the EN12 group, where Gigantidas platifrons were challenged with enriched methane-oxidizing bacteria (MOB) symbionts for 12 h and compared to a control (CT12) group where mussels were injected with sterilized seawater for 12 h. A total of 182 and 61 genes were found to be significantly increased or decreased in the VA12 group, where Bathymodiolinae mussels were challenged with nonsymbiotic V. alginolyticus, when compared to the CT12 group. When mussels were challenged with symbionts for 24 h, the expression levels of 73 and 65 genes were found to be robustly up- or down-regulated. A total of 206 and 82 genes were significantly increased and decreased, respectively, in the VA24 group. (B) A Venn diagram of the above DEGs was subsequently constructed. A total of 36 genes were found to be responsive in both the EN12 and VA12 groups, while 33 genes were regulated vigorously in both the EN24 and VA24 groups. Sixteen of 269 genes were vigorously modulated in both the EN12 and EN24 groups, while 51 of 471 genes were found to be responsive in both the VA12 and VA24 groups.
Figure 2
Phylogenetic analysis of differentially expressed pattern recognition receptors (PRRs). All differentially expressed PRRs were subjected to a phylogenetic analysis using their protein sequences by Seaview (maximum likelihood method and LG model with 1000 bootstrap samples). PRRs with similar protein sequences cluster first. The expression pattern of the PRRs is also illustrated with colored markings where they were vigorously modulated in the corresponding group in comparison with the CT12/24 group. As found, many PRRs were exclusively responsive in the EN groups or the VA groups, while two PRRs were responsive in both the EN and VA challenges. Besides, some PRRs with similar protein sequences could respond synchronously to either the MOB or the nonsymbiotic bacterial challenge.
Figure 3
Heat map and hierarchical clustering of differentially expressed immune effectors. The differentially expressed immune effectors in each group were clustered according to their expression pattern. The alterations of transcripts in each group are also marked with different colors (green if decreased and red if increased).
Schematic diagram of the miRNA-mediated immunomodulation network of G. platifrons against Vibrio alginolyticus challenge. (A) When the Bathymodiolinae mussels were stimulated by V. alginolyticus for 12 h, more immune-related target genes were found. In detail, PRRs including TLR2, TLR4, LRR74 and VLR were suggested as putative targets of miRNAs decreased in the VA12 group. Meanwhile, phagocytosis-related genes and apoptosis-related genes, along with some immune-related transducers or effectors, were also putatively modulated by DE miRNAs in the VA12 group. (B) When the stimulus continued to 24 h, only TLR4, TBK1, caspase8, RREB1, CRCT1, MMP and BPI were found to be targeted by DE miRNAs in the VA24 group. Consistently, target genes are marked with a red arrow if miRNAs were down-regulated in the VA12 or VA24 group and with green when miRNAs were up-regulated.
Supplementary Files
This is a list of supplementary files associated with this preprint.
Fatigue of short fibre reinforced polymers: from material process to fatigue life of industrial components
For many years, SFRPs (short fibre reinforced polyamides) have been used in the automotive industry as a means to reduce vehicle weight. However, their complex anisotropic and heterogeneous microstructure requires sophisticated material characterisation and simulation. This study presents the simulation strategy adopted by an automotive company to address these challenges. The manufacturing process is first simulated and correlated with tomography analysis. Then, based on the numerical microstructure, integrative simulation is used to analyse and predict the mechanical behaviour of fatigue coupons and industrial parts. Lastly, two fatigue criteria based on strain energy density are presented and the fatigue lives of coupons and industrial parts are assessed.
Introduction
Faced with the environmental challenges of the 21st century, the automotive industry has had to adapt its material strategy. Indeed, current and future regulations are ever more demanding. To minimize the environmental footprint, vehicle weight has to be kept to a minimum to lower gas consumption, while engine performance has to be maximized to increase vehicle efficiency.
Plastics in general, and specifically short fibre reinforced plastics (SFRP) are often a good choice for more mechanically demanding applications: they combine a good specific mechanical strength, high production rate and low manufacturing costs, while providing the capability to be moulded in complex geometries.
However, designing such mechanical parts has generated new difficulties: the material microstructure caused by the fibre orientation produces material anisotropy. All other things being equal, the average fibre orientation can vastly modify mechanical characteristics [1].
In order to take into account the effect of the microstructure on the mechanical behaviour of SFRP, new lifetime assessment methods based on finite element analysis (FEA) have been developed. Firstly, the injection moulding is simulated and the fibre orientation in the part estimated. Secondly, integrative simulation is performed: this allows capture of the local anisotropy of the material and provides reliable mechanical fields (stress and strain). Lastly, since mechanical behaviour is here considered uncoupled with damage, the results can be post-processed and, using an appropriate fatigue criterion, the fatigue life assessed. In this study, the different steps of this fatigue life assessment are presented on fatigue coupons and tested on industrial prototyped parts.
Material and testing
For confidentiality reasons, the results have been normalized. In order to remain intelligible, stress values have all been normalized by the same value (ultimate tensile stress at 23°C, 50% relative humidity of the 0° specimen).
Coupon material and preparation
The material used in this study is a polyamide reinforced with 50% weight glass fibres (hereafter referred to as PA66GF50). Further details on the material are not disclosed for confidentiality reasons.
The fatigue test coupon geometry is presented in Figure 1, in accordance with the ISO 527B standard.
These are machined from injected plates (see Figure 2). By using this method, different specimen "orientations" can be used. The "0°" orientation hence corresponds to the specimen machined following the injection direction, while the "90°" one is taken perpendicular to it. The "45°" specimen is cut at a 45 degree angle from the injection direction.
Considering the sensitivity of the material to humidity (see [2]), the material is first conditioned at 50% relative humidity using an accelerated conditioning method described in the ISO 1110 standard.
Industrial parts
The part used in this study is a prototype engine mount. As for the fatigue test coupons, the parts are conditioned using the ISO 1110 standard.
Testing
Fatigue coupon testing was carried out at HBM Prenscia's testing facilities using a tension-compression testing machine.
To ensure test repeatability and representativeness, the tests were done in a climatic chamber at 23°C and 50% relative humidity. The tests were all conducted in load control using a 1 to 3 Hz test frequency (see part 3.1 for more details). For such long fatigue specimens, undesirable buckling in compression can often occur, and a R=0.1 load ratio was therefore used. Further tests, using shorter specimens, could be used to investigate compressive loadings. For all tests, the fatigue life is defined by specimen failure. In order to test the engine mount, a specific testing rig was developed (see Figure 5). It allows testing of the part in different orientations. The tests were performed in load control using a load ratio of R=0.1 and a test frequency of 3 Hz. Tests were conducted until part failure (a severe drop in part rigidity). It must be noted that the applied load shown here does not represent loadings occurring in real part situations and was mainly designed to investigate the accuracy of the fatigue criterion. Most of the tests were also monitored using a thermographic camera in order to detect crack initiation (see section 4.2 for more details).
Injection moulding process
3.1 Tomography Observations
In order to better understand the mechanical tests, tomography observation and simulation of the moulding process are of assistance. Samples of material were taken from the central zone of the fatigue specimen (see Figure 1) and analysed by X-Ray tomography. The resulting scan has a 4 µm resolution and covers a volume of approximately 3x3x3 mm. A slice of the sample analysed can be seen in Figure 6. It shows the typical structure of fibre reinforced thermoplastics:
• Surface: while difficult to see on the figure, the surface effect can be seen in 3D; close to the plate surface, the fibres are randomly oriented.
• Skin: this section constitutes the main part of our volume. The fibres are mainly oriented in the injection direction.
• Core: this section, approximately 500 µm thick, has fibres perpendicular to the injection direction.
Based on this tomographic data, the fibre tracing algorithm of FEI® Avizo® software and a specific data processing routine were used to study quantitatively the fibre orientation in the specimen. The xx component of the experimentally determined fibre orientation tensor (corresponding to the degree of fibre alignment in the flow direction) is plotted against depth in Figure 7. The 0 mm depth corresponds to the plate top surface while 3 mm is the bottom surface. The different sections (surface, skin and core) can clearly be seen.
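The fibre orientation tensor plotted in Figure 7 can be computed directly from the traced fibre directions. The sketch below assumes the fibre-tracing step already produced one axis vector per fibre; the synthetic data are only meant to show that strong alignment with the flow direction drives the xx component towards 1.

```python
import numpy as np

def orientation_tensor(directions):
    """Second-order fibre orientation tensor a_ij = <p_i p_j> from fibre axis vectors."""
    p = directions / np.linalg.norm(directions, axis=1, keepdims=True)  # normalise to unit vectors
    return np.einsum("ni,nj->ij", p, p) / len(p)

# Synthetic skin-layer fibres: mostly aligned with the injection (x) direction.
rng = np.random.default_rng(0)
fibres = np.column_stack([np.ones(1000), 0.2 * rng.standard_normal((1000, 2))])
a = orientation_tensor(fibres)
print(a[0, 0])   # xx component, i.e. the degree of alignment with the flow direction
```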
Injection moulding simulation
While tomography analysis helps better understand the material microstructure, it remains technically difficult to analyse large volumes, and even more so for parts with complex geometry such as the engine mount studied here, where machining samples can be cumbersome. Therefore, the injection process of the plates and engine mounts was simulated using Autodesk® Moldflow®. The moulding simulation generates estimated fibre orientation tensors which, for the moulded plates, may be compared with the experimental data as shown in Figure 7. The agreement is quite good: the injection simulation displays the same sections as the observed ones, and orientation values appear close. However, the simulation appears to slightly underestimate the degree of fibre orientation in the different layers. Using the same method, the industrial part injection process was also simulated. The results, shown in Figure 8, can then be used as an input to the integrative mechanical simulation.
Specimen heating
Polyamide based materials are highly sensitive to loading frequency effects [3], even for the lower load levels (i.e. the higher fatigue lives), due at least partly to hysteretic heating. In order to maintain the heating level below a certain threshold, a series of incremental tests were first undertaken to define the testing frequency: increasing loads were applied to the specimen and, using thermal imaging, the average surface temperature was measured after temperature stabilisation (10000 cycles). These tests were done at 3 Hz and provide a design of experiment for the fatigue tests. For a given load, if the temperature rise is higher than 5°C at 3 Hz, the test frequency is lowered and set to 1 Hz. The results of these tests can be seen in Figure 9 for all three orientations considered.
Fatigue results
The results of the coupon fatigue tests are presented in Figure 10 for the three different specimen orientations. The 0° specimen, which is aligned with the injection direction, has the best fatigue resistance. Using the microstructure analysis presented in section 3, the influence of fibre orientation appears clearly: when the skin fibres are mainly aligned with the loading direction, the material exhibits the best fatigue resistance. For 45° specimens, the fatigue resistance lies between that of the 0° and the 90° specimens, although closer to the 90° ones. The results of the fatigue tests on engine mounts are shown in Figure 11, while thermoelasticimetry of the crack initiation location is shown in Figure 12. Rather than standard thermal imaging, thermoelasticimetry allows detection of small variations in stress (see [4] [5]) by synchronizing the measurement signal (here the thermal imaging) with the applied loading signal. Indeed, for a homogenous elastic solid under adiabatic conditions, the temperature variation ΔT can be linked to the stress tensor σ:

ΔT = −K.T.Tr(σ)    (1)

where T is the temperature, Tr the trace operator and K the thermoelastic constant. (This last constant was not measured, so the numerical values of the stress change in Figure 12 are not meaningful.) The interest of this measurement is twofold. Firstly, it allows us to detect hot spots in the structure, which may be compared to the simulation results in section 5.3. Secondly, it allows for a precise detection of crack initiation, which is key to fatigue life assessment. It must be noted that Figure 11 is in logarithmic scale. The crack propagation phase, for this specific configuration, represents between 33 and 54% (average 44%) of the total fatigue life, part failure being considered as a severe loss of part rigidity. This shows that, for SFRP parts and long fatigue lives, the crack propagation phase cannot be neglected and can represent half of the total fatigue life.
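Equation (1) is easy to evaluate numerically. The values below (thermoelastic constant, temperature and stress state) are purely illustrative, since the constant was not measured in the study.

```python
import numpy as np

K = 3.0e-6                         # thermoelastic constant in 1/MPa (hypothetical value)
T = 296.15                         # absolute temperature in K (23 degC test condition)
sigma = np.diag([40.0, 5.0, 0.0])  # example stress tensor in MPa

delta_T = -K * T * np.trace(sigma)   # adiabatic temperature variation, Eq. (1)
print(f"temperature variation: {delta_T * 1000:.1f} mK")
```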
Simulation model
In order to simulate the mechanical tests of fatigue coupons and parts, an integrative simulation method provided by Digimat® and coupled with Abaqus® was used. Based on Mori-Tanaka homogenization models, it allows us to take into account the local microstructure of the material for each layer. All simulations are done under an elastic material hypothesis.
The fatigue coupons and the engine mounts were meshed using 2D shell elements. Each element contains 12 layers (where layer 1 is the top surface layer, 6-7 the central layers of the plate, and 12 the bottom surface layer), each defined using the local fibre orientation tensor. Additional microstructural and material data are required: fibre length and aspect ratio, fibre density, and the Young modulus and Poisson ratio of the fibre. The Young modulus of the matrix was not used directly, but rather as a parameter optimised to provide the best fit between test and simulation for the 0° specimens. Results are shown in Table 1 and show that the model is able to correctly simulate the different specimen orientations. The optimised material data were then used to simulate the engine mount test. The part stiffness for this loading configuration was found to be very close to the experimental results (less than 10% error in the part stiffness for the maximum load used in the fatigue tests).
Fatigue criteria
As seen in Figure 10, using an equivalent stress (i.e. load divided by specimen section) does not yield satisfactory results. Therefore, a local strain energy density approach is used. The elastic strain energy density ΔW is defined by

ΔW = (1/2) σ : ε_e

where σ is the stress tensor and ε_e the elastic strain tensor. The fatigue life N is then estimated from ΔW through a power-law relation involving two material parameters, α and β. Even though one of these parameters can be modified to take into account the stress ratio, it is not detailed in this study, since all tests are done using identical stress ratios. Using HBM Prenscia's nCode DesignLife® software, the elastic strain energy quantity was calculated for the 0, 45 and 90° specimens. Two approaches were then used to calculate the critical value: 1. The "W max" approach: in this case, the critical value for a specimen is taken as the maximum value over all the elements and all the layers. 2. The "W mean" approach: in this non-local approach, the results are averaged over the specimen thickness (i.e. across all layers). Indeed, each element contains 12 different stress/strain tensor values corresponding to the different layers. For this method, the elastic strain energy density of an element is first calculated by averaging over the 12 layers. The critical value of the specimen is then chosen as the maximum value over the specimen.
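Both criteria reduce to simple reductions over the per-layer strain energy densities. The sketch below assumes the integrative simulation already provides stress and elastic strain tensors for the 12 layers of every shell element; the power-law life estimate at the end is an assumed form with hypothetical parameter values.

```python
import numpy as np

def criteria(sigma, eps):
    """Return (W_max, W_mean) for one specimen.

    sigma, eps: arrays of shape (n_elements, 12, 3, 3) holding the stress and elastic
    strain tensors of the 12 shell layers of every element.
    """
    w = 0.5 * np.einsum("elij,elij->el", sigma, eps)  # strain energy density per element and layer
    w_max = w.max()                                   # "W max": maximum over all elements and layers
    w_mean = w.mean(axis=1).max()                     # "W mean": through-thickness average, then maximum
    return w_max, w_mean

def fatigue_life(delta_w, alpha=1.0, beta=-0.3):
    """Assumed power-law life estimate N = (delta_w / alpha) ** (1 / beta); alpha, beta fitted on coupons."""
    return (delta_w / alpha) ** (1.0 / beta)
```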
Experimental vs simulation results
The abilities of the two methods to correlate the coupon test results are shown in Figure 13 and Figure 14. These must first be compared to the stress-life results (shown in Figure 10): the elastic strain energy density methods minimize differences between the different specimen orientations, making them suitable for complex simulations of industrial parts. Indeed, for these more complex simulations, it is impossible to define an orientation, and any fatigue criterion needs to unify the test results. For fatigue coupon specimens, no significant differences between the "W max" and the "W mean" approaches appear. The two different approaches were then applied to the engine mount parts; see Figure 15 and Figure 16. While the results of the "W max" approach appear overconservative, the "W mean" approach shows promising results. Especially for the longer fatigue lives, which are of interest, the results using this last criterion are close to the ÷10/×10 scatter band. An explanation of the different results between coupon and part simulation could be attributed to the different mechanical loadings: while coupons are loaded in uniaxial tension, the engine mount undergoes multiaxial and bending loads. Other effects that might be considered are the notch effect (local plasticity and size effect), and mean stress relaxation at stress concentrations when 0 < R < 1 due to accumulating creep strain.
Figure 13: Fatigue results of coupons using the "W max" approach
Figure 14: Fatigue results of coupons using the "W mean" approach
Figure 15: Fatigue results of parts using the "W max" approach
Figure 16: Fatigue results of parts using the "W mean" approach
Conclusions
An integrative simulation method is used to simulate the fatigue life of a short fibre reinforced plastic (PA66GF50). First, by using X-Ray laboratory tomography and specific image analysis software, the microstructure of the material is investigated and compared to the results of the injection moulding simulation. Secondly, based on the simulated microstructure, the anisotropic and heterogeneous mechanical behaviour is identified. This shows that integrative simulation methods are able to capture and simulate the different fibre orientations of the specimens. Lastly, using a strain energy based fatigue criterion, the mechanical simulations are used to calibrate the fatigue model based on coupon tests, and to assess the life of structural parts. Both local and non-local criteria show promising results.
Further investigation should focus on a detailed description of the material mechanical behaviour. Indeed, the tests conducted in this study show the material has a strong creep behaviour, especially for the less oriented specimens (45 and 90° specimens), for which the matrix behaviour prevails. Recent studies (see for example [6]) have shown that a precise study of this creep-fatigue behaviour could help better understand fatigue life assessment.
Figure 2: Plate injection and fatigue coupon machining
Figure 5: Engine mount testing rig showing loading direction.
Figure 6: Slice of sample analysed by tomography
Figure 7: Experimentally observed fibre orientation tensor. The X direction matches the injection direction.
Figure 8: Injection moulding simulation of engine mount showing fibre alignment proportion with local principal direction. A higher value (red) means the fibres are locally aligned, whereas for the lower values (blue), the fibres are locally more randomly oriented.
Figure 11: Fatigue test results of engine mount.
Table 1 :
Comparison of experimental and simulated equivalent Young Modulus of tensile tests. The results are normalized by the value of the 0° specimen.
Haemin pre‐treatment augments the cardiac protection of mesenchymal stem cells by inhibiting mitochondrial fission and improving survival
Abstract The cardiac protection of mesenchymal stem cell (MSC) transplantation for myocardial infarction (MI) is largely hampered by low cell survival. Haem oxygenase 1 (HO‐1) plays a critical role in regulation of cell survival under many stress conditions. This study aimed to investigate whether pre‐treatment with haemin, a potent HO‐1 inducer, would promote the survival of MSCs under serum deprivation and hypoxia (SD/H) and enhance the cardioprotective effects of MSCs in MI. Bone marrow (BM)‐MSCs were pretreated with or without haemin and then exposed to SD/H. The mitochondrial morphology of MSCs was determined by MitoTracker staining. BM‐MSCs and haemin‐pretreated BM‐MSCs were transplanted into the peri‐infarct region in MI mice. SD/H induced mitochondrial fragmentation, as shown by increased mitochondrial fission and apoptosis of BM‐MSCs. Pre‐treatment with haemin greatly inhibited SD/H‐induced mitochondrial fragmentation and apoptosis of BM‐MSCs. These effects were partially abrogated by knocking down HO‐1. At 4 weeks after transplantation, compared with BM‐MSCs, haemin‐pretreated BM‐MSCs had greatly improved the heart function of mice with MI. These cardioprotective effects were associated with increased cell survival, decreased cardiomyocytes apoptosis and enhanced angiogenesis. Collectively, our study identifies haemin as a regulator of MSC survival and suggests a novel strategy for improving MSC‐based therapy for MI.
| INTRODUCTION
Despite the advanced developments in surgical treatment and pharmacological therapy, myocardial infarction (MI) is still a major cause of morbidity and mortality worldwide. 1 Mesenchymal stem cell (MSC)-based therapy has shown promising results in MI treatment because of the capacity of MSCs to differentiate into cardiomyocytes and confer paracrine effects. The efficacy of MSC-based therapy is nonetheless seriously restricted by poor cell survival in the hostile environment of the injured heart. [2][3][4] Oxidative stress in the ischaemic heart can quickly induce apoptosis of transplanted MSCs. 2 It has been reported that fewer than 1% of MSCs can survive in the ischaemic rat heart after MI at 24 hours after transplantation. 5 Therefore, exploring a novel strategy to enhance the retention and engraftment of MSCs in the ischaemic heart is urgently needed. Indeed, several pre-treatment strategies, including hypoxia and genetic modification, have been shown to increase the survival of MSCs under a hostile environment. 6,7 Cell death is mainly mediated by mitochondrial function, which is closely related to mitochondrial dynamics. 8 Mitochondria undergo fusion and fission to form a network for maintaining cell function. 9,10 Mitochondrial fusion is regulated by mitofusin 1 (Mfn1) and Mfn2, whereas mitochondrial fission is mainly regulated by mitochondrial fission protein dynamin-related protein 1 (Drp1) and mitochondrial fission 1 (Fis1). Converging evidence has shown that mitochondrial fission results in fragmented mitochondria and thus induces apoptosis. 11,12 Nevertheless, whether ischaemic conditions can induce mitochondrial fission and thus lead to apoptosis of transplanted MSCs has not been determined.
Haem oxygenase 1 (HO-1), an inducible stress protein, possesses cytoprotective defences including antioxidative stress, antiapoptosis and anti-inflammation functions during challenge by different stressors. 13,14 A previous study has shown that HO-1 up-regulation inhibits mitochondrial fission, thus attenuating apoptosis of cardiomyocytes induced by intermittent hypoxia. 15 Furthermore, cardiac-specific overexpression of HO-1 significantly reduces up-regulated mitochondrial fission and therefore protects against doxorubicin-induced dilated cardiomyopathy. 16 Given that HO-1 plays a critical role in regulating mitochondrial dynamics, we hypothesized that the ischaemic condition induces apoptosis of MSCs via up-regulation of mitochondrial fission, which is regulated by HO-1. Therefore, pre-treatment with haemin, an HO-1 inducer, may increase the capability of MSCs to tolerate ischaemic conditions via inhibition of mitochondrial fission and thus enhance the cardioprotective effects that ameliorate the damage from MI.
| Cell culture
Human bone marrow (BM)-MSCs were purchased from Cambrex BioScience (catalog no. PT-2501). BM-MSCs were routinely cultured as previously described. 17 Cells were passaged at a ratio of 1:3 when they reached confluence. The cells from passages 3-4 were used in the current study.
| Serum deprivation and hypoxia (SD/H)-exposed cell culture and haemin pre-treatment
To mimic the ischaemic conditions in vitro, BM-MSCs were cultured under SD/H challenge. 18 In brief, when BM-MSCs reached 70%-80% confluence, the complete culture medium was changed to medium without foetal bovine serum (FBS), and the cells were then cultured under hypoxia (1% oxygen, 5% carbon dioxide and 94% nitrogen) for 48 hours. For haemin pre-treatment, BM-MSCs were cultured in complete medium with 10 µM haemin under normoxia (95% air and 5% carbon dioxide) for 24 hours prior to SD/H challenge.
| siRNA transfection
Control siRNA or HO-1 siRNA was used to transfect BM-MSCs using Lipofectamine RNAiMAX (13778-075; Invitrogen). Briefly, control siRNA or HO-1 siRNA was diluted with OptiMEM and mixed with the transfection reagent. Each mixture was added to BM-MSCs at 70%-80% confluence and then incubated for 24-48 hours. Finally, the transfection efficiency was examined by Western blot analysis.
| MitoTracker staining
The morphology of mitochondria was examined by MitoTracker staining as previously reported. 15
| TUNEL staining
Apoptosis of BM-MSCs after different treatments was detected using a terminal deoxynucleotidyl transferase-mediated dUTP nick end labelling (TUNEL) staining kit (11684795910; Roche). Briefly, after different treatments, the cells were washed with PBS, fixed and incubated with 1 µg/mL of Proteinase K/10 mmol/L Tris solution for 15 minutes at room temperature. Following washing with PBS twice, the cells were incubated with the TUNEL reaction mixture for 1 hour at 37°C in a dark place. Finally, the cells were washed and mounted with DAPI to stain the nuclei. Images of five different view fields for each slide were randomly captured (magnification of 20x). The apoptosis of BM-MSCs was calculated as the proportion of TUNEL-positive cells to total DAPI-positive cells.
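The apoptosis rate is simply the pooled ratio of TUNEL-positive to DAPI-positive cells over the five captured fields; the counts below are invented for illustration.

```python
tunel_positive = [12, 9, 15, 11, 8]       # hypothetical counts per field
dapi_positive = [230, 215, 248, 221, 204]

apoptosis_rate = 100.0 * sum(tunel_positive) / sum(dapi_positive)
print(f"apoptotic cells: {apoptosis_rate:.1f}% of DAPI-positive cells")
```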
| Western blot analysis
The protein of each sample was extracted using RIPA buffer (9806, CST), and the protein concentration was then measured.
| Preparation of conditioned medium and HUVEC tube formation analysis
The conditioned medium (CdM) of MSCs was collected as previously described. 19 Briefly, BM-MSCs with or without haemin pretreatment were seeded in 6-well plates and cultured until 70%-80% confluence. Subsequently, the medium was replaced with 2 mL per well of serum-free medium. After 48 hours of culture, the CdM was collected, centrifuged and stored at −80°C until use. HUVECs (30 000 cells/well) were seeded in a 96-well plate coated with growth-factor-reduced matrigel (BD Biosciences, 356230). Next, HUVECs were treated with CdM derived from BM-MSCs and haemin-BM-MSCs.
After 6 hours of treatment, capillary-like tube formation was imaged (magnification of 10x). The endothelial tube length and branching points were analysed using ImageJ software. The experiments were repeated at least three times.
| Echocardiography assessment
The heart function of each mouse from the different groups was evaluated by transthoracic echocardiography (Ultramark 9; Soma TechnologyA) at 4 weeks after cell transplantation. The echocardiographic parameters were analysed using MATLAB R2011b software (MathWorks).
| Masson's trichrome staining
After echocardiography evaluation, all mice were killed, and the hearts were collected. The mouse hearts were fixed, embedded and sectioned into 5 μm sections. Fibrosis in the mouse hearts was detected by Masson's Trichrome Stain Kit (HT15; Sigma). Images of each slide were captured (magnification of 4x). The percentage of the infarct size was analysed as follows: (fibrosis area/total left ventricle area)×100%.
| Immunohistochemistry
Immunohistochemical staining was performed as previously described. 3 Briefly, the heart sections were hydrated, the antigen was retrieved, and the specimen was blocked with 5% bovine serum albumin for 30 minutes. Subsequently, heart sections were stained with the following primary antibodies, anti-HNA (ab191181, Abcam) and anti-CD31 (77699, CST), at a 1:100 dilution and then incubated overnight at 4°C. After washing, the slides were incubated for 30 minutes with streptavidin peroxidase-conjugated secondary antibody (ab64264, Abcam) at room temperature. After this incubation, the slides were washed three times in PBS, and the antibody complexes were coloured with diaminobenzidine and then counterstained with haematoxylin. Five sections were randomly collected from each mouse (six mice per group), and images were captured (magnification of 10x).
| Polymerase chain reaction
Human Alu-sx repeat sequences in the heart tissue from the different groups were evaluated by genomic polymerase chain reaction (PCR) as previously described. 3 The primers for human Alu-sx were F: 5'-GGCGCGGTGGCTCACG-3', R: 5'-TTTTTTGAGACGGAGTCTCGCTC-3'.
The product was detected by electrophoresis in 1.5% agarose gel supplemented with ethidium bromide.
| Statistical analysis
Values are shown as the mean ± SEM. Statistical analyses were performed using Prism 5.04 software (GraphPad Software Inc.).
Comparisons between two groups were analysed using unpaired Student's t tests, and comparisons between multiple groups using one-way ANOVA followed by the Bonferroni test. A P value <0.05 was considered statistically significant.
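Equivalent comparisons can be reproduced outside Prism, for instance with SciPy. The snippet below sketches the unpaired t test and the one-way ANOVA with a Bonferroni correction on simulated data (n = 6 per group, hypothetical means).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ctrl, mob, vibrio = (rng.normal(m, 1.0, 6) for m in (10.0, 11.5, 13.0))  # hypothetical groups

# Two-group comparison: unpaired Student's t test.
t, p_two = stats.ttest_ind(ctrl, mob)

# Multi-group comparison: one-way ANOVA followed by Bonferroni-corrected pairwise t tests.
f, p_anova = stats.f_oneway(ctrl, mob, vibrio)
pairs = [(ctrl, mob), (ctrl, vibrio), (mob, vibrio)]
p_bonferroni = [min(1.0, stats.ttest_ind(a, b).pvalue * len(pairs)) for a, b in pairs]
print(p_two, p_anova, p_bonferroni)
```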
| Haemin suppresses SD/H-induced mitochondrial fission and apoptosis of BM-MSCs
To test the protective effects of haemin on BM-MSCs, we pretreated BM-MSCs with different concentrations of haemin (1, 5, 10, 20 μmol/L) for 24 hours and then exposed them to SD/H. The CCK-8 assay showed that haemin pre-treatment greatly enhanced the viability of BM-MSCs under SD/H in a dose-dependent manner, and 10 μmol/L haemin pre-treatment exhibited the best protective effects (Figure 1A). Furthermore, we pretreated BM-MSCs with 10 μmol/L haemin for different times (6, 12, 24, 48 hours) and then exposed them to SD/H. The CCK-8 assay also showed that
| Haemin inhibits mitochondrial fragmentation and apoptosis of BM-MSCs by regulating HO-1
As haemin is an HO-1 inducer, we investigated whether the protec-
| Haemin-pretreated BM-MSCs improved cell survival in mouse hearts following MI
We first performed anti-HNA staining to detect cell survival at 4 weeks after transplantation. Both BM-MSCs and haemin-pretreated BM-MSCs were detected in ischaemic heart tissue, with a
| Haemin-pretreated BM-MSCs inhibited the apoptosis of cardiomyocytes and improved angiogenesis in mouse hearts following MI
The apoptosis of cardiomyocytes among the different groups was assessed by TUNEL staining. Compared with the sham group, the apoptosis of cardiomyocytes was dramatically increased in the MI group (Figure 5A,B). MSC transplantation greatly inhibited the apoptosis of cardiomyocytes, and haemin-BM-MSCs were superior to BM-MSCs in attenuating the apoptosis of cardiomyocytes in the ischaemic hearts of mice (Figure 5A,B). The capillary density of the ischaemic area among the different groups was detected by CD31 staining. The capillary density was decreased in the MI group compared with the sham group (Figure 5C,D). The capillary density of the ischaemic area increased following MSC treatment (Figure 5C,D).
Notably, the haemin-BM-MSC group had a much higher capillary density than the BM-MSC group (Figure 5C
| DISCUSSION
This study presents several major findings (Figure 6). MI is a major contributor to the morbidity and mortality of people with cardiovascular diseases, accounting for 11.2% of deaths worldwide. 21 The ischaemic condition caused by insufficient blood flow leads to a marked loss of cardiomyocytes in the heart. Furthermore, mitochondrial dynamics play an essential role in inducing cell death. 30 Mitochondrial fusion leads to elongated mitochondria, whereas mitochondrial fission produces small round mitochondria. 10 There is a balance of mitochondrial fusion and fission in a healthy cell. However, this balance is disrupted under stress conditions, resulting in apoptosis. 31 In the current study, we found that the mi-
This study has several limitations. First, in addition to Drp1 and Mfn2, whether haemin can affect other proteins related to mitochondrial dynamics has not been determined. Second, we only examined the survival of haemin-pretreated BM-MSCs at 4 weeks after transplantation; therefore, long-term cell survival needs to be examined in future studies. Third, the potential mechanisms behind HO-1 regulation of mitochondrial dynamics remain unclear. Haemin contains iron, which is released by HO activity, regulating the expression of various proteins. As mitochondria are the major iron handling organelles, whether haemin regulates mitochondrial dynamics via iron requires further investigation. Fourth, as SD/H enhances the endogenous HO-1 expression level, it would therefore make scientific sense to silence basal HO-1 levels to verify our study.
In summary, our results demonstrated that haemin pre-treatment, via up-regulation of HO-1 levels, significantly enhanced BM-MSC survival under ischaemic conditions by inhibiting mitochondrial fission, thus improving the therapeutic effects for treating MI. Our study identifies pharmacological pre-treatment modulating the HO-1 pathway as a novel approach for enhancing MSC-based therapy for cardiovascular diseases.
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
DATA AVAILABILITY STATEMENT
The data sets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Wavelets meet Burgulence: CVS-filtered Burgers equation
Numerical experiments with the one-dimensional inviscid Burgers equation show that filtering the solution at each time step in a way similar to CVS (Coherent Vortex Simulation) gives the solution of the viscous Burgers equation. The CVS filter used here is based on a complex-valued translation-invariant wavelet representation of the velocity, from which one selects the wavelet coefficients having modulus larger than a threshold whose value is iteratively estimated. The flow evolution is computed from either deterministic or random initial conditions, considering both white noise and Brownian motion.
Introduction
The fully-developed turbulent regime is described by solutions of the Navier-Stokes equations for two or threedimensional incompressible fluids, in the limit where the kinematic viscosity becomes very small. By analogy, Burgulence is described by the solutions of Burgers equations for a one-dimensional fluid in the same limit, as first proposed by Burgers [3] and advocated by von Neumann [19]. This toy model for turbulence has been extensively used since then [1,13,15,21,23]; Frisch and Bec have proposed to name it: Burgulence [11].
We consider the one-dimensional Burgers equation in a periodic domain of support x ∈ [−1, 1], which describes the space-time evolution of the velocity u(x, t) of a one-dimensional fluid flow:

∂u/∂t + u ∂u/∂x = ν ∂²u/∂x²,    (1)

supplemented with a suitable initial condition and where ν denotes the kinematic viscosity. The solutions of (1) can be computed analytically using the Cole-Hopf transformation [4,6,14]. When ν → 0 the solutions of the viscous Burgers equation approach weak solutions of the inviscid problem. The uniqueness of these solutions stems from the condition that shocks have negative jumps, which guarantees energy dissipation. For Burgers equation, this condition is equivalent to an entropy condition [12,17,18,20]. The wavelet representation has been proposed for studying turbulence [7], since it preserves both the spatial and spectral structure of the flow by realizing an optimal compromise in regard of the uncertainty principle. We have found that projecting the vorticity field onto a wavelet basis, and retaining only the strongest coefficients, extracts the coherent structures out of fully-developed turbulent flows [8,9]. We have then proposed a computational method for solving the Navier-Stokes equations in wavelet space [8]. We have shown that extracting the coherent contribution at each time step preserves the nonlinear dynamics, whatever its scale of activity, while discarding the incoherent contribution corresponds to turbulent dissipation [22]. This is the principle of the CVS (Coherent Vortex Simulation) method we have proposed [8,10].
The aim of the present paper is to apply the CVS filter to the inviscid Burgers equation and check if this is equivalent to solving the viscous Burgers equation. The outline is the following. First we recall the principle of CVS filtering and its extension using complex-valued translation-invariant wavelets. The numerical scheme is described briefly and the main part presents results of several numerical experiments, considering either deterministic or random initial conditions. Finally, we draw conclusions and propose some perspectives.
Fig. 1. Deterministic initial conditions. Left: Time evolution of energy. Right: Energy spectrum at t = 5. We compare the Galerkin-truncated inviscid (square), viscous (triangle) and CVS-filtered inviscid (circle) cases. We observe that for the inviscid case (right) the wavelet spectrum (white line) better exhibits the energy equipartition than the Fourier spectrum (black line).
Numerical method
The Burgers equation (1) is discretized on N grid points using a Fourier spectral collocation method, where U approximates (u(x_0, t), u(x_1, t), ..., u(x_{N−1}, t)), D_N stands for the Fourier collocation differentiation and · is the pointwise product of two vectors. The discretization of the nonlinear term in (2) is chosen in order to conserve the kinetic energy E = (1/2) ∫_{−1}^{1} u²(x, t) dx when ν = 0 [5]. For time integration a fourth-order Runge-Kutta scheme is used.
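A minimal pseudo-spectral implementation of such a scheme might look as follows. The skew-symmetric form of the nonlinear term is one common energy-conserving choice consistent with the conservation property stated above, though the paper's exact discretization (Eq. (2)) is not reproduced here; the time step is left to the user.

```python
import numpy as np

N, nu = 4096, 1e-4
x = -1.0 + 2.0 * np.arange(N) / N                 # collocation points on [-1, 1)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 / N)    # wavenumbers for a domain of length 2
u = -np.sin(np.pi * x)                            # deterministic initial condition

def dx(v):
    """Fourier collocation derivative D_N v."""
    return np.real(np.fft.ifft(1j * k * np.fft.fft(v)))

def rhs(v):
    nonlinear = -(v * dx(v) + dx(v * v)) / 3.0    # skew-symmetric splitting (energy-conserving)
    diffusion = nu * np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(v)))
    return nonlinear + diffusion

def rk4_step(v, dt):
    k1 = rhs(v)
    k2 = rhs(v + 0.5 * dt * k1)
    k3 = rhs(v + 0.5 * dt * k2)
    k4 = rhs(v + dt * k3)
    return v + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
```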
At each time step we filter the solution using the CVS method, which we now recall briefly. Given orthogonal wavelets (ψ_ji) and the associated scaling function at the largest scale ϕ, the velocity can be expanded into u = ⟨u | ϕ⟩ ϕ + Σ_j Σ_i ⟨u | ψ_ji⟩ ψ_ji (3), where j is the scale index, i is the position index and the inner product is ⟨a | b⟩ = ∫_{−1}^{1} a(x) b*(x) dx with b* denoting the complex conjugate of b. Since location in orthogonal wavelet space is sampled on a dyadic grid, this representation breaks the local translation invariance of (1), which may impair the stability of the numerical scheme. Therefore we prefer using, instead of real-valued wavelets, complex-valued wavelets [16] which very closely preserve translation invariance. In this case, (3) still holds as long as we replace the right-hand side by its real part.
The CVS filter then consists in discarding the wavelet coefficients whose modulus is below a threshold T. In addition, wavelet coefficients at the finest scale are systematically filtered out to avoid aliasing errors. The resulting velocity u_T is a nonlinear approximation of u.
Because the velocity field decays in time, the threshold has to be estimated at each time step in a self-consistent way. To do this, we follow the iterative method introduced in [2], which consists in imposing the ratio between the standard deviation of the discarded wavelet coefficients and the threshold itself; in the defining relation (4), H is the Heaviside step function and N_T is the number of wavelet coefficients below the threshold. The solution of (4) is determined numerically using a fixed-point iterative procedure [2], initialized with T_0 = 5E/N, where E is the total energy.
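The following sketch illustrates one way such a fixed-point threshold estimation could look, assuming a Donoho-type relation T = sqrt(2 σ²_< ln N) between the threshold and the variance σ²_< of the discarded coefficients; the paper does not reproduce the exact relation of [2], so this specific update rule, the function names and the tolerances are assumptions. The initialization T_0 = 5E/N is taken from the text.

```python
import numpy as np

def cvs_threshold(coeffs, n_total, t0, tol=1e-6, max_iter=100):
    """Fixed-point estimation of the CVS threshold.

    `coeffs` are the (possibly complex) wavelet coefficients of the velocity.
    The update T <- sqrt(2 * var_incoherent * ln N) is an assumed Donoho-type
    relation standing in for the paper's Eq. (4).
    """
    t = t0
    for _ in range(max_iter):
        below = np.abs(coeffs) < t
        if not below.any():
            break
        var_incoherent = np.mean(np.abs(coeffs[below]) ** 2)
        t_new = np.sqrt(2.0 * var_incoherent * np.log(n_total))
        if abs(t_new - t) < tol * max(t, 1e-30):
            t = t_new
            break
        t = t_new
    return t

def cvs_filter(coeffs, n_total, energy):
    """Discard all coefficients whose modulus falls below the estimated threshold."""
    t = cvs_threshold(coeffs, n_total, t0=5.0 * energy / n_total)  # T_0 = 5E/N
    kept = np.where(np.abs(coeffs) >= t, coeffs, 0.0)
    return kept, t
```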
Deterministic initial condition
We consider Burgers equation (1) with the deterministic initial condition u(t = 0, x) = − sin(π x). We begin by comparing three computations: a Galerkin-truncated inviscid case (ν = 0), a viscous case (ν = 10 −4 ), and an inviscid case with the CVS filter applied at each time step. The solutions are computed up to time t = 5, using N = 4096 grid points.
By computing in the Galerkin-truncated inviscid case (ν = 0), we check that our numerical scheme conserves energy (Fig. 1, left) as theoretically predicted. We observe that the final solution at t = 5 exhibits energy equipartition (Fig. 1, right) with a Gaussian velocity PDF, as expected. Note that the white line in Fig. 1 (right) corresponds to the wavelet energy spectrum, i.e., the squared modulus of the wavelet coefficients computed with a complex-valued Morlet wavelet. It better exhibits the k^0 scaling, characteristic of the energy equipartition, than the highly oscillatory Fourier energy spectrum (black line). This illustrates the fact that the wavelet energy spectrum is more stable than the Fourier energy spectrum when we analyse only one realization of a stochastic process [7].
For the viscous and CVS-filtered inviscid cases, the energy remains basically constant until the shock forms at t = 1/π, but then decays with a t^−2 law. In Fig. 1 (right) the energy spectra of the viscous and CVS-filtered inviscid cases exhibit a power law behaviour with slope −2. Fig. 2 shows the velocity at three time instants for the viscous and CVS-filtered inviscid cases. The CVS-filtered inviscid solution follows the same dynamics as the viscous one, except for the small overshoot we observe at x = 0 after the shock has formed. This Gibbs phenomenon is stronger but less oscillatory for the CVS-filtered inviscid case than for the viscous case (see the insets in Fig. 2).
The time evolution of the percentage of retained wavelet coefficients is presented in Fig. 3 (left). It shows that, with only relatively few coefficients (about 7% of N), we are able to track the nonlinear dynamics of the flow, and this number remains almost constant after the shock formation. At t = 5, the retained wavelet coefficients are located around x = 0, the position of the shock, and span all scales there, as illustrated in Fig. 3 (right).
We now show that, when N increases, the filtered solutions converge towards the entropy solution u_ref which solves the Burgers equation in the inviscid limit. For comparison, we also consider viscous solutions with viscosity depending on N (ν = 0.4096 N^−1), which are known to converge to u_ref everywhere, except at x = 0. The entropy solution u_ref is directly calculated using the method of characteristics.
First, we consider a global error estimate, the relative mean square error ε_N(t) = ‖u_N − u_ref‖² / ‖u_ref‖², where ‖·‖ denotes the L² norm and u_N the computed solution. In Fig. 4 (left) we plot ε_N(t) for N = 4096. The error for the CVS-filtered inviscid case is larger but saturates after t ≈ 2. In contrast, the error for the viscous case keeps increasing because the finite viscosity smooths the shock away. Considering now t = 5 and varying N, we find that for both the viscous and CVS-filtered inviscid cases ε_N decreases as N^−1 (Fig. 4, right).
We now study the behaviour of the oscillations in the neighbourhood of the shock when the resolution N is increased. The total variation of a function f on [−1, 1] is defined by TV(f) = sup Σ_i |f(x_{i+1}) − f(x_i)|, where the supremum is taken over all partitions of [−1, 1]. To detect the presence of spurious oscillations, we compute the relative error on the total variation, δ_N = (TV(u_N) − TV(u_ref)) / TV(u_ref), which is plotted as a function of N for t = 5 in Fig. 5 (left). For the viscous case, δ_N is negative and converges towards zero when N increases. For the CVS-filtered inviscid case, δ_N tends to a finite positive value close to 0.84. The overshoot that could be seen in Fig. 2 persists but becomes more and more localized around the singularity when N increases, thus ensuring mean square convergence. Let us end this section with a short discussion of the evolution of the compression rate when N increases. Fig. 5 (right) shows that the number of retained wavelet coefficients increases roughly logarithmically as a function of N. As a consequence, notice that for the filtered solution the relative mean square error ε_N(t), if it is considered as a function of the number of retained coefficients only, converges to zero exponentially fast. However, to experience this promising rate of convergence in practice, we should compute the evolution of u using only the wavelet coefficients whose modulus remains above the threshold.
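As a rough illustration of the two diagnostics used above, the following sketch computes a discrete relative mean square error and a discrete relative total-variation error against a reference solution; the exact definitions in the paper's equations are not reproduced here, so these finite-grid versions are assumptions.

```python
import numpy as np

def relative_mse(u, u_ref):
    """Relative mean square error with respect to the entropy solution."""
    return np.sum((u - u_ref) ** 2) / np.sum(u_ref ** 2)

def total_variation(u):
    """Discrete total variation on the periodic grid."""
    return np.sum(np.abs(np.diff(u))) + abs(u[0] - u[-1])

def relative_tv_error(u, u_ref):
    """Relative error on the total variation; negative values indicate
    smoothing of the shock, positive values indicate overshoots."""
    return (total_variation(u) - total_variation(u_ref)) / total_variation(u_ref)
```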
Random initial condition
In the previous section we demonstrated that the CVS-filtered inviscid Burgers equation exhibits an evolution similar to that of the viscous Burgers equation. We now would like to check if this is still verified in the context of Burgulence, for both white noise [1] and Brownian motion [21].
White-noise initial condition
We take as initial velocity one realization of a Gaussian white noise computed at resolution N = 4096, which corresponds to a random non-intermittent initial condition. Since the CVS filter removes the non-intermittent noisy contributions, if applied to a Gaussian white noise the latter would be completely filtered out. Therefore we first integrate the viscous equation with ν = 2 × 10^−5 without filtering, and wait until the flow intermittency has sufficiently developed before applying the filter. To check the flow intermittency we monitor the flatness of the velocity gradient until it reaches the value 20, which happens at t = 0.017 for the realization described here. Then, we reset t = 0 and integrate up to t = 5, both the viscous equation with ν = 2 × 10^−5, and the CVS-filtered inviscid equation.
Fig. 6. White noise initial conditions. Left: Time evolution of energy. The inset shows the t^−2/3 decay in log-log coordinates. Right: energy spectrum at t = 5. We compare the viscous (triangle) and CVS-filtered inviscid (circle) simulations. We observe that the wavelet spectrum (white lines) better exhibits the k^−2 scaling of energy than the Fourier spectrum (black lines).
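The flatness criterion used to decide when to switch on the CVS filter can be monitored as follows; the sketch assumes the usual definition of the flatness of the velocity gradient, ⟨(∂x u)⁴⟩/⟨(∂x u)²⟩², and reuses the rk4_step routine and wavenumber array from the earlier discretization sketch.

```python
import numpy as np

def gradient_flatness(u, k):
    """Flatness <(du/dx)^4> / <(du/dx)^2>^2 of the velocity gradient,
    computed spectrally; the value 3 corresponds to a Gaussian field."""
    dudx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    return np.mean(dudx ** 4) / np.mean(dudx ** 2) ** 2

# Example of the waiting strategy: integrate the viscous equation until the
# flatness exceeds 20, then start applying the CVS filter at each step.
# while gradient_flatness(u, k) < 20:
#     u = rk4_step(u, dt, k, nu)
```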
In Fig. 6 (left) we show that the energy, for both the CVS-filtered inviscid solution and the viscous solution, decays with a t^−2/3 law, as found by Burgers [4,21]. In Fig. 6 (right) we observe at t = 5 that both energy spectra present the same k^−2 scaling. Notice that the two white lines in Fig. 6 (right) correspond to the wavelet energy spectrum, which better exhibits the k^−2 scaling of the energy than the highly oscillatory Fourier energy spectrum (black lines).
Finally, we show in Fig. 7 that the viscous and CVS-filtered inviscid solutions are almost identical in physical space, presenting a typical sawtooth profile as first noticed by Burgers [4].
Brownian motion initial condition
We use the same resolution N = 4096 as above, but only the initial condition changes. Since we have chosen periodic boundary conditions we approximate the Brownian motion by the Fourier series u(x, t = 0) = Σ_k u_k e^{iπkx}, where k = −N/2 + 1, . . . , N/2 − 1. We set u_0 = 0 and, for k ≠ 0, we take for u_k a complex Gaussian random variable with standard deviation 1/|k|.
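A possible way to synthesize such an initial condition is sketched below; the Hermitian symmetry u_{−k} = u_k*, needed to obtain a real velocity field, is not stated explicitly in the text and is assumed here.

```python
import numpy as np

def brownian_initial_condition(n, rng=None):
    """Approximate Brownian motion on [-1, 1] by a random Fourier series.

    Each u_k (k != 0) is a complex Gaussian with standard deviation 1/|k|;
    u_0 = 0.  Hermitian symmetry u_{-k} = conj(u_k) is enforced (assumed)
    so that the synthesized velocity is real.
    """
    rng = np.random.default_rng() if rng is None else rng
    u_hat = np.zeros(n, dtype=complex)            # coefficients in np.fft ordering
    for k in range(1, n // 2):
        sigma = 1.0 / k
        a = rng.normal(scale=sigma / np.sqrt(2)) + 1j * rng.normal(scale=sigma / np.sqrt(2))
        u_hat[k] = a
        u_hat[-k] = np.conj(a)
    # synthesize on the collocation grid; the factor n converts the series
    # coefficients into np.fft.ifft conventions
    return np.real(np.fft.ifft(u_hat)) * n

u0 = brownian_initial_condition(4096)
```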
The solution for the viscous case is computed with ν = 1.2 × 10^−4. For the CVS-filtered inviscid case, as we did for the white noise initial condition, we do not filter before enough intermittency has developed. We thus integrate the viscous equation with ν = 1.2 × 10^−4 for 0.05 time units and then switch viscosity off. This procedure provides the initial velocity which, by construction, is the same for both methods (Fig. 8). The energy decay matches well between the CVS-filtered inviscid and the viscous solutions (Fig. 9, left). A k^−2 power spectrum is also obtained for both at t = 5 (Fig. 9, right).
At t = 0.1 numerous small shocks are present in the viscous solution (Fig. 10, top left). All of them are correctly reproduced by the CVS-filtered inviscid solution (Fig. 10, bottom left).
At t = 5 the single remaining shock, which is still resolved in the viscous solution (Fig. 10, top right), is correctly reproduced in the CVS-filtered inviscid solution (Fig. 10, bottom right).
Conclusion
We have shown that filtering the solution of the inviscid Burgers equation with CVS at each time step gives the same evolution as the viscous Burgers equation, for both deterministic and random initial conditions. As our contribution to the Euler equations' 250th anniversary and Euler's 300th birthday, we conjecture that CVS filtering the Euler equations may be equivalent to solving the Navier-Stokes equations in the fully-developed turbulent regime, i.e., when dissipation has become independent of viscosity. We predict that the retained wavelet coefficients would preserve Euler's nonlinear dynamics, while discarding the weaker wavelet coefficients would model turbulent dissipation and give Navier-Stokes solutions. Since in the fully-developed turbulent regime turbulent dissipation strongly dominates molecular dissipation, there is no reason to model turbulent dissipation by a Laplace operator anymore. Indeed, turbulent dissipation is a property of the flow, while molecular dissipation is a property of the fluid and may no longer play a role when turbulence is fully developed. We think that in this regime the CVS filter could be a better way to model dissipation, replacing global by local smoothing, while preserving nonlinear interactions. In this paper we have chosen the simplest toy model to test this conjecture, although the Burgers equation, in contrast to the Euler equations, is neither chaotic nor produces randomness. Therefore we conjecture that the CVS filter would work better for Euler/Navier-Stokes than for Burgers, since CVS is based on denoising, which is justified when there is chaos and randomness.
| 3,568.6 | 2008-08-15T00:00:00.000 | [ "Physics" ] |
Photon-counting optical coherence-domain reflectometry using superconducting single-photon detectors
We consider the use of single-photon counting detectors in coherence-domain imaging. Detectors operated in this mode exhibit reduced noise, which leads to increased sensitivity for weak light sources and weakly reflecting samples. In particular, we experimentally demonstrate the possibility of using superconducting single-photon detectors (SSPDs) for optical coherence-domain reflectometry (OCDR). These detectors are sensitive over the full spectral range that is useful for carrying out such imaging in biological samples. With counting rates as high as 100 MHz, SSPDs also offer a high rate of data acquisition if the light flux is sufficient.
Introduction
Over the past decade, optical coherence-domain techniques such as optical coherence-domain reflectometry (OCDR) and optical coherence tomography (OCT) have come into their own for use in biological imaging [1,2]. These techniques operate on interferometric principles and use heterodyne detection to achieve high detection sensitivity. In scattering tissue, they typically provide axial resolution of a few micrometers and imaging at depths of 2-3 millimeters.
The central wavelength of the light used in coherence-domain imaging is a key parameter of the system design. Optical scattering in biological tissue generally decreases with increasing wavelength. It is usually difficult to image deeply into tissue in the visible region, so that most coherence-domain imaging systems make use of light sources with wavelengths longer than 700 nm. The long-wavelength limitation is governed by the absorption of water, which becomes problematical at about 1500 nm. Since the axial resolution of a coherence-domain imaging system improves as the spectral bandwidth of the light source increases, use of the entire wavelength range from 700 to 1500 nm yields a desirable combination of deep penetration and ultra-high resolution for biological tissue. Thus, broadband operation at a center wavelength near 1100 nm is advantageous for ultra-high-resolution coherence-domain imaging, assuming that there is a suitable detector in this region [3].
A number of high-axial-resolution coherence-domain imaging experiments using ultrabroadband light sources have indeed been reported over the past few years. However, because of the ready availability of commercial semiconductor photodetectors that operate near 800 nm and 1300 nm, most of these systems have been operated near one of these two wavelengths [4,5,6].
In this paper, we report the development of a photon-counting optical coherence-domain imaging system that makes use of superconducting single-photon detectors (SSPDs). Such detectors are sensitive over a broad wavelength band, including the region of interest for biological imaging, thus allowing for flexibility in the choice of operating wavelength. At the same time, they operate in a single-photon counting mode, which offers low detector noise and thereby provides high sensitivity even at low source powers.
Conventional OCDR
As indicated above, the high detection sensitivity of coherence-domain imaging results from the use of heterodyne detection. As illustrated in Fig. 1, the interference signal that results from the mixing of light from the reference and sample arms carries the information of interest. The magnitude of the interference signal is proportional to the product of the optical fields reflected from the two arms of the interferometer, and thus to the square-root of the product of the intensities reflected from these arms. The strong reference beam provides conversion gain, which effectively boosts the weak signal reflected from the sample [7]. It has been shown that the heterodyne process can be understood in terms of the absorption of individual polychromatic photons [8].
Conventional optical sources used in coherence-domain imaging usually provide sufficient power in the reference beam to achieve shot-noise limited operation with ordinary photodiodes. However, some optical sources with large bandwidths and smooth spectra [9,10], which are particularly useful for coherence-domain techniques, do not provide sufficient power in a single spatial mode to allow shot-noise-limited operation. For the most part, OCDR and OCT experiments make use of commercially available Si or InGaAs semiconductor photodiodes (operated without gain), depending on the spectrum of the light source employed. Roughly speaking, Si photodiodes are used for wavelengths shorter than 1100 nm and are best in the vicinity of 800 nm, whereas InGaAs photodiodes are used for wavelengths longer than 1100 nm and are designed for operation in the vicinity of 1300 nm. Inasmuch as neither Si nor InGaAs is sensitive over the entire spectral range useful for the imaging of scattering biological samples, ultra-high-resolution OCDR and OCT are usually carried out at a central wavelength of either 800 nm or 1300 nm.
Comparing coherence-domain imaging at 800 nm and 1300 nm, we recognize that the latter wavelength offers superior penetration depth but inferior axial resolution. This is because the axial resolution, for a given spectral bandwidth specified in terms of wavelength, is inversely related to the square of the central wavelength. However, an ultra-broadband source of light centered at 1100 nm can provide the best of both worlds: deep penetration together with high resolution. This has indeed been demonstrated by Wang et al. [11], who achieved a resolution of 1.8 μm at a wavelength of 1100 nm. The performance of their system was limited, however, by the insensitivity of their detector to the shorter wavelength portion of their source spectrum.
As the use of ultra-broadband spectra in biological coherence-domain imaging becomes more widespread, there is a growing need for sensitive detectors that can operate over the entire wavelength range of interest to jointly optimize both axial resolution and penetration depth.
Photon-Counting OCDR
We have carried out a series of experiments to demonstrate the merits of using SSPDs in OCDR. These detectors are sensitive over a broad range of wavelengths, making them a good candidate for use in high-resolution coherence-domain techniques that require a broad spectrum of light. Moreover, since SSPDs operate in a photon-counting mode, they also offer enhanced sensitivity for low levels of light. We discuss the photon-counting OCDR system configuration, and the operational principles and properties of SSPDs, in turn.
Experimental arrangement for photon-counting-based OCDR
The photon-counting OCDR system illustrated in Fig. 2 makes use of the same interferometric arrangement as employed in standard coherence-domain imaging (Fig. 1). The reference arm of the interferometer has a mirror placed on a scanning delay stage, which is controlled by a Nanomotion-II micropositioning system (Applied Precision, LLC, Issaquah, WA). The sample arm contains the sample under investigation. The light exiting from the interferometer is coupled to a single-mode fiber that feeds the SSPD. An incident photon causes the detector to generate an electrical pulse; the probability of such an occurrence depends on the quantum efficiency of the detector. Once produced, the pulse is amplified and fed to a discriminator, which generates a standardized electrical pulse if the magnitude of the detector pulse lies above a prespecified threshold. The output of the discriminator is processed by a PC using National Instrument's Data-Acquisition Counter-Timer (Model PCI 6602).
To obtain the axial profile of the sample of interest, the discriminator output is recorded as the reference mirror is continuously scanned. The numbers of pulses obtained in a user-defined counting time are assigned to the corresponding position of the reference arm. An alternate way of obtaining the axial profile is to move the reference mirror in discrete steps and to integrate the pulse count from the discriminator for a finite amount of time at each location. In both cases the discrete signal is then bandpass filtered and demodulated to obtain its envelope. The scanning, data acquisition, and synchronization are all performed in an automated fashion using LabView.
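A minimal sketch of this processing chain (band-pass filtering of the count sequence followed by envelope demodulation) is given below; the filter order, the fringe frequency and the synthetic data are illustrative assumptions, not parameters of the actual system.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def axial_profile(counts, f_signal, f_sample, half_bandwidth):
    """Band-pass filter the photon-count sequence around the fringe frequency
    and demodulate it to obtain the envelope (axial profile).

    `counts` is the sequence of photon counts recorded at successive
    reference-mirror positions, `f_signal` the fringe frequency set by the
    mirror speed and centre wavelength, `f_sample` the sampling rate of the
    count sequence; all values here are placeholders.
    """
    nyq = 0.5 * f_sample
    low = (f_signal - half_bandwidth) / nyq
    high = (f_signal + half_bandwidth) / nyq
    b, a = butter(4, [low, high], btype="band")
    filtered = filtfilt(b, a, counts - np.mean(counts))
    return np.abs(hilbert(filtered))              # demodulated envelope

# Example with synthetic data: a fringe burst buried in Poisson counts.
rng = np.random.default_rng(0)
t = np.arange(5000) / 1000.0                      # positions (arbitrary units)
fringes = np.exp(-((t - 2.5) ** 2) / 0.01) * np.cos(2 * np.pi * 50 * t)
counts = rng.poisson(20 + 10 * fringes)
envelope = axial_profile(counts, f_signal=50, f_sample=1000, half_bandwidth=5)
```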
Superconducting single-photon detectors
The active element of the SSPD is a meander-shaped narrow stripe that covers the 10 μm x 10 μm area of the device. The stripe is fabricated from a 4-nm-thick superconducting niobium nitride (NbN) film that has been sputtered on a double-sided polished sapphire substrate, using direct electron-beam lithography and reactive ion etching [12]. The width of the stripe is 80-120 nm.
The SSPD operates by utilizing a resistive region that appears in the superconducting stripe following the absorption of a photon. This absorption creates a hotspot (a localized region with increased resistivity) that suppresses the superconductivity. The device is maintained at a temperature T that is substantially below the critical temperature T c . The device is electrically biased along its length by a current I b that is close to the critical current I c . During the thermalization stage, the hotspot grows in size as electrons diffuse out of the initial hotspot core. The supercurrent is expelled from the hotspot into the side regions where its density exceeds the critical current density, thereby initiating the appearance of a resistive barrier across the entire cross-section of the stripe. This gives rise to a voltage pulse with a magnitude proportional to the bias current. Superconducting devices are very attractive for single-photon-detection applications, especially in the infrared region, because of their small energy gap Δ (Δ ≈ 2 meV for NbN) and their low dark-count rate.
The quantum efficiency η, defined as the probability of obtaining a voltage pulse at the SSPD output in response to an input photon, as well as the dark-count rate, strongly depend on the bias current and on the temperature of operation, as illustrated in Fig. 3 for light at a wavelength of 1.3 μm (the quantum efficiency in the figure is indicated in %). It is apparent that higher sensitivity and lower dark-count rate are achievable as the temperature is decreased.
The quantum efficiency of SSPDs monotonically decreases with increasing wavelength of the incident light. Despite this, these detectors can be reliably used for single-photon-counting applications in a spectral region that stretches from 0.4 to 6 μm [13]. Some semiconductor-based photodetectors can also serve as single-photon detectors in the infrared, but they suffer from a more limited wavelength range and from far higher dark-count rates. Although SSPDs have attractive parameters for infrared single-photon counting, their use in practice is complicated by the need for low-temperature operation and by their small active area. To accommodate these requirements, we made use of a specially designed cryostat, outfitted with a superconducting detector fed by a single-mode (SM) fiber, as illustrated in Fig. 4. This allowed us to work efficiently with 10 μm x 10 μm detectors at selected temperatures ranging from 1.8 K to 4.2 K.
The input to the single-mode optical fiber is equipped with a standard FC connector, permitting use with various optical systems. The output of the detector is connected to a high-frequency coaxial cable through a coplanar RF transmission line. The apparatus is positioned inside a standard 60-liter liquid-helium transport dewar and the detectors can be cooled to 1.8 K by reducing the He vapor pressure. The room-temperature high-frequency amplifiers (Phillips Scientific 6954, 0.0001-1.5 GHz) boost the electrical signals before they are fed to the discrimination and counting circuitry.
Another advantage of the SSPD is its ability to carry out photon counting at repetition rates in excess of 100 MHz [14], which is large in comparison with many single-photon detectors. The oscilloscope-screen image portrayed in Fig. 5 shows that the SSPD response follows an incident train of light pulses presented at an 81.3-MHz repetition rate.
Axial resolution
The axial resolution in coherence-domain imaging systems is governed by the bandwidth of the source, as well as the frequency response of the optical components and the detector. High axial resolution is attained by making use of a broadband source together with optical components and a detector that exhibit flat responses over the spectral range of interest. The usual array of optical components in use in such systems do indeed have approximately flat responses. The overall spectral response of the system, S(ν), is therefore essentially the product of the source spectrum and the detector spectral response; the axial point-spread function is FT{S(ν)} evaluated at the delay 2z/c, where z is the reference-arm displacement in the interferometer, c is the speed of light in the medium under consideration, and FT indicates the Fourier transform. The width of the point-spread function is the axial resolution Δz.
Of principal interest in this paper is the effect of the spectral response of the detector on axial resolution. Since the point-spread function is the convolution of the temporal coherence function of the source with the Fourier transform of the detector spectral response, the axial point-spread function will, by necessity, be wider than the coherence function of the source. A detector with a relatively flat and smooth spectral response function over the bandwidth of interest is best suited for coherence-domain imaging because it offers the least amount of broadening of the point-spread function.
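The relation between the overall spectral response and the axial point-spread function can be illustrated numerically as follows; the Gaussian source spectrum, the flat detector response and the resampling details are assumptions made only for the sake of the example.

```python
import numpy as np

def axial_psf(wavelength_nm, source_spectrum, detector_response, n_pad=2 ** 14):
    """Axial point-spread function as the Fourier transform of the overall
    spectral response S(nu) = source spectrum x detector response.

    `wavelength_nm` must be increasing; returns the axial coordinate z (m)
    and the normalized PSF.
    """
    c = 2.998e8
    nu = c / (np.asarray(wavelength_nm) * 1e-9)          # optical frequency (decreasing)
    s = np.asarray(source_spectrum) * np.asarray(detector_response)
    # resample onto a uniform frequency grid before the FFT
    nu_uniform = np.linspace(nu.min(), nu.max(), n_pad)
    s_uniform = np.interp(nu_uniform, nu[::-1], s[::-1])
    psf = np.abs(np.fft.fft(s_uniform))
    psf /= psf.max()
    d_nu = nu_uniform[1] - nu_uniform[0]
    delay = np.fft.fftfreq(n_pad, d=d_nu)                # time delay tau
    z = c * delay / 2.0                                  # round trip: z = c*tau/2
    order = np.argsort(z)
    return z[order], psf[order]

wl = np.linspace(700, 1500, 801)                         # nm, increasing
source = np.exp(-0.5 * ((wl - 1100) / 150) ** 2)         # illustrative broadband source
z, psf = axial_psf(wl, source, np.ones_like(wl))         # flat detector response
```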
Sensitivity
An oft-used measure for characterizing sensitivity is the signal-to-noise ratio SNR, where the signal is proportional to the optical power from the sample arm and the noise is defined as the variance of the background. Three principal sources of noise are generally considered: thermal electrical noise in the detector and post-detection circuitry, electric-current shot noise, and intensity-fluctuation noise arising from the thermal character of the optical source [7]. Noise-in-signal contributions are ignored in this definition. In the standard expression for the SNR, the signal term is proportional to the product R²P_R P_S, where P_R and P_S are the optical powers in the reference and sample arms of the interferometer, respectively, and R is the responsivity of the detector (A/W), while the denominator is the sum of three noise terms. The first term in the denominator represents the thermal noise in the receiver, where T is the temperature, k is Boltzmann's constant, B is the effective electrical bandwidth of the detection system (which is principally determined by the bandpass filter following the detector), and R_f is the feedback resistance of the trans-impedance amplifier. The second term in the denominator represents the current shot noise, where e is the charge of an electron. The third term represents gamma-distributed intensity-fluctuation noise associated with the thermal nature of the light source; Π is the degree of polarization of the light, and Δν represents the spectral bandwidth of the light source [7]. The intensity-fluctuation noise term that depends on the square of the reference-beam optical power dominates at high values of P_R, whereas the detector thermal-noise term dominates at low values. Coherence-domain imaging systems typically operate at intermediate values of the reference-beam power, where shot noise is important [16,18], in which case the shot-noise term dominates the denominator. Operation in this domain is considered desirable since it offers the largest signal-to-noise ratio for a given optical power in the sample arm. The presence of detector thermal noise is sometimes unavoidable, however, if the light source cannot provide sufficient power to the reference arm. Taking the parameter values used by Sorin and Baney [16], for example, using standard photodiode-based detection, detector thermal noise becomes significant for reference powers below 10 nW. However, it is important to observe that there is a way of reducing the contribution of detector noise by several orders of magnitude, so that it becomes insignificant even for pW levels of reference-beam optical power: use single-photon counting.
In photon-counting-based coherence-domain imaging, we record the number of photons at the output of the discriminator (see Fig. 2) in a given counting time of duration T; the corresponding bandwidth at the output of the photon-counting detector is 1/2T [7]. A single interferometric scan comprises a sequence of these counts collected at different positions of the reference-arm mirror. This sequence can be digitally filtered by using a bandpass filter with the same bandwidth as the signal, thereby reducing the noise. The bandwidth B of the filtered interferometric scan is then that of the filter. It should be noted that digital bandpass filtering in photon counting plays the same role as such filtering in conventional OCT (OCDR).
The relevant signal-to-noise ratio in the shot-noise regime is [7] SNR = ηΦ_S/2B (4), where Φ_S is the photon flux from the sample arm (photons arriving at the detector per sec).
At an SNR of unity, it is apparent that the minimum-detectable photon flux is given by Φ_S,min = 2B/η. This signifies the detection of 1/η photons per resolution time of the receiver, which, for unity quantum efficiency, corresponds to the detection of one photon per resolution time, which is optimal [19,20]. In addition to the signal-to-noise ratio, we can also consider the statistical nature of the photon counts of the signal. These fluctuations can be evaluated by determining the ratio of count-variance to count-mean [21,22], F = var(n)/⟨n⟩.
This quantity is also known as the normalized variance or the Fano factor [20]. For independent measurements at a given mirror location, and a source that is devoid of intensity fluctuations, we expect the counts to follow Poisson statistics. The Poisson distribution has mean ⟨n⟩ and variance var(n) = ⟨n⟩, so that F = 1. In real measurements, however, we have a finite number of samples N, and can therefore only obtain an estimate of the normalized variance F. This estimate, which we denote F̂, is itself a random variable with a mean of unity and a standard deviation that turns out to be √(2/N) for Poisson statistics [23].
Data acquisition rate
The rate of acquiring data in conventional coherence-domain imaging is rarely limited by the response time of the photodiode detectors, which is typically sub-nsec. This is not always the case for photon-counting OCDR, however, since commercially available photon-counting modules typically have far longer response times (≈ several hundred nsec), and therefore saturate at low optical powers. Consequently, collecting an image of a given quality when detector saturation comes into play requires more time when using a photon-counting configuration than when using a conventional configuration. The performance of SSPDs in this respect is superior to that of commercially available single-photon-counting modules, however, as will be discussed in Sec. 5.3.
Enhancement of axial resolution
To compare the performance of SSPDs and standard silicon SPADs (single-photon avalanche detectors) in photon-counting coherence-domain reflectometry, an experiment was conducted using the arrangement shown in Fig. 6. A 532-nm (doubled Nd:YVO4) Verdi laser was used to pump a 1.5-mm BBO nonlinear crystal (NLC) cut for type-I phase matching. The crystal was aligned to obtain degenerate and collinear spontaneous parametric downconversion (SPDC). The downconverted light, which served as a convenient broadband optical source centered at 1064 nm, was introduced into a Michelson interferometer. Mirror 1 in the reference arm was placed on a nano-positioning stage to change its position, while mirror 2 was kept stationary. The dichroic components D1, D2, and D3 were used to reflect light at 532 nm and transmit light at 1064 nm; for D1 and D2 the infrared radiation comes from the laser whereas for D3 it comes from the downconversion, which is desired. The Glan-Taylor polarizers P1 and P2 were used to reflect light at 1064 and 532 nm, respectively. The light emerging from P2 was fed into the fiber-coupled detectors (SPAD and SSPD) via a lens.
Fig. 6. Photon-counting OCDR experimental arrangement using a Michelson interferometer comprising a beam-splitter (BS) and two mirrors. Mirror 1 is translated to change the length of the reference arm. Collinear spontaneous parametric downconversion generated in a 1.5-mm-thick BBO nonlinear-optical crystal (NLC), cut for type-I phase matching, serves as the optical source. D1 and D2 are dichroic components that direct the 532-nm output of the doubled Nd:YVO4 pump laser to the NLC. Dichroic D3 and Glan-Taylor polarizers P1 and P2 are used to remove unwanted wavelengths. Experiments were performed using both SPADs and SSPDs as photon-counting detectors.
The counts from the SPAD and SSPD were measured in a fixed time window as a function of the position of mirror 1. The resultant interferograms are illustrated in Fig. 7. It is clear from the data that the SSPD offers a narrower interferogram than the SPAD (3.3 vs. 5.4 μm). In accordance with the discussion in Sections 3.2 and 4.1, this is expected because the SSPD is sensitive over a broader spectral range than the SPAD. This observation, in turn, means that the SSPD offers better axial resolution than the SPAD.
Fig. 7. OCDR interferograms measured with SPAD and SSPD single-photon detectors using the apparatus depicted in Fig. 6. A reduction in the full-width at half maximum (FWHM), corresponding to an improvement in axial resolution, is observed with the SSPD. This is a result of its broader spectral sensitivity.
To better understand the improvement in axial resolution, we calculate the Fourier transforms of the interference signals shown in Fig. 7, and plot them as a function of wavelength. The results, shown in Fig. 8, reveal that the SPAD is not sensitive to wavelengths beyond 1100 nm, whereas the SSPD is sensitive in this region and therefore yields improved axial resolution. However, the resolution obtained in this experiment is limited by the bandwidth of our downconversion source. Far higher axial resolution could be obtained were we to use an SSPD in conjunction with broader sources that operate near 1100 nm, such as broadband continuum generation from photonic-crystal fibers [11] and fiber lasers [24], as the SSPD response extends over a far greater wavelength range.
Fig. 8. Fourier transforms of the interference signals of Fig. 7, plotted as a function of wavelength. It is evident that the SPAD is not sensitive to wavelengths beyond 1100 nm, whereas the SSPD is sensitive in this region.
Enhancement of sensitivity at low light levels
To demonstrate OCDR using single-photon counting with low levels of source power, we made use of the system depicted in Fig. 2. The source was a standard superluminescent diode (SLD) whose output was centered at a wavelength of 930 nm, with a spectral width of 70 nm. This source, which is often used in coherence-domain imaging, has an optical power that is sufficient so that it can be conveniently measured and attenuated to the level desired for the experiment at hand. The SLD was operated at an output power of ≈ 1 mW, but was attenuated to 10 nW by means of neutral-density (ND) filters placed directly at the output. In addition, to simulate a sample of low reflectance, ND filters were used to introduce an attenuation of 70 dB in the sample arm of the interferometer, which comprised a mirror.
We now forge a comparison with the theoretical results for the SNR provided in Sec. 4.2. The attenuation of 70 dB in the signal arm is expected to result in a signal optical power P_S ≈ 2.5 × 10^−16 W (half the power is lost in the interferometer), whereupon Φ_S = P_S/hν ≈ 1170 photons/sec. The quantum efficiency η is measured to be ≈ 0.05 pulses/photon, and the effective bandwidth B, which is determined by the bandwidth of the digital-filtering system, is ≈ 1/40 Hz (this is narrower than 1/2T, where T = 1 sec is the counting time per data point). In accordance with Eq. (4), we then expect an SNR ≈ 1170 (30.7 dB). Using the measured envelope of the signal, and the variance of the noise in the region outside the signal (i.e., at a reference-arm displacement greater than the coherence length of the source), we obtain an observed SNR = 562 (27.5 dB), which is within a factor of two of the theoretical prediction.
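The arithmetic of this estimate can be reproduced as follows, assuming the shot-noise-limited relation SNR = ηΦ_S/2B quoted in Sec. 4.2.

```python
import numpy as np

# Numerical check of the expected shot-noise-limited SNR (values from the text).
h = 6.626e-34                    # Planck constant, J*s
c = 2.998e8                      # speed of light, m/s
wavelength = 930e-9              # SLD centre wavelength, m

P_S = 2.5e-16                    # optical power reaching the detector, W
photon_energy = h * c / wavelength
phi_S = P_S / photon_energy      # photon flux, ~1.2e3 photons/s

eta = 0.05                       # measured quantum efficiency, pulses/photon
B = 1.0 / 40.0                   # effective bandwidth of the digital filter, Hz

snr = eta * phi_S / (2 * B)      # SNR = eta*Phi_S/(2B), cf. Eq. (4)
snr_db = 10 * np.log10(snr)
print(f"Phi_S ~ {phi_S:.0f} photons/s, SNR ~ {snr:.0f} ({snr_db:.1f} dB)")
```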
To examine the count-variance to count-mean ratio, we carried out a series of experiments in which the reference-arm mirror was translated in discrete steps while maintaining the path-length difference between the reference and sample arms within the coherence length of the source (l_c ≈ 6 μm). The number of pulses from the detector in 1 sec was measured at each particular location of the reference mirror. A total of N = 100 such measurements were made using the SSPD detection system shown in Fig. 2.
A plot of the mean count rate, i.e., the mean number of pulses in a 1-sec counting time, is displayed in Fig. 9(a) as a function of the reference-arm displacement. The error bars denote ±1 standard deviation of the count rate. To confirm whether our observations are in accord with the theory presented in Sec. 4.2 for Poisson statistics, we replot these data in Fig. 9(b) in the form of the observed normalized variance F̂. The mean of F̂ is indeed seen to be close to unity, and its standard deviation close to √(2/N) ≈ 0.14, for all reference-arm displacements. The observation of Poisson counting statistics at different signal magnitudes, corresponding to different reference-arm displacements, indicates that the photon statistics of our source are also Poisson [7]. This demonstrates that the particular SLD used in our experiments is devoid of intensity-fluctuation noise. This, together with the fact that photon counting eliminates thermal noise, is consistent with the use of Eq. (4) for the signal-to-noise ratio.
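A sketch of how such a Fano-factor check could be performed on the recorded counts is given below; the mean count rate used in the synthetic example is arbitrary.

```python
import numpy as np

def fano_estimate(counts):
    """Estimate of the normalized variance (Fano factor) F = var(n)/mean(n)."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# For N independent 1-s counting intervals at a fixed mirror position,
# Poisson statistics predict <F> = 1 with standard deviation sqrt(2/N).
rng = np.random.default_rng(1)
N = 100
samples = rng.poisson(lam=500, size=N)            # illustrative mean count rate
print(fano_estimate(samples), np.sqrt(2.0 / N))   # ~1.0 and ~0.14
```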
The results described in this section demonstrate that photon-counting OCDR allows us to achieve nearly shot-noise-limited performance even when using a very weak source of light; this cannot be achieved using conventional detection schemes. It is clear, therefore, that photon-counting coherence-domain imaging can be used to image low-reflectance specimens with a low-power light source.
Rate of data acquisition
As indicated in Sec. 4.3, the long response time of single-photon counting detectors limits the rate of data acquisition. However, SSPDs are generally superior to SPADs in this respect. As an example, our SSPDs have a response time of 10 nsec, as shown in Fig. 5.
An experiment was carried out to measure the time required to obtain an OCDR scan of a specified quality. The experimental arrangement is the same as that shown in Fig. 2, using the source described in Sec. 5.2. The SLD was again operated at an output power of ≈ 1 mW, but in this case ND filters were used to yield a prespecified counting rate. We operated our SSPD at an average rate of 5 MHz corresponding to 50 photons in a counting time of 10 μsec at the output.
Moving the reference mirror at a speed of 1 mm/sec, scanning for a distance of 1 mm, and using a counting time of 10 μsec per data point, we observed the two surfaces of a 90-μm thick silica window, as shown in Fig. 10. We measure a displacement of 134 μm between the peaks, corresponding to the optical pathlength of ≈ 135 μm, as expected (the refractive index of the silica window is 1.5).
The scan time of 1 sec for the image presented in Fig. 10 could be reduced by a factor of 10 (corresponding to ten times faster scanning of the reference mirror), while maintaining the same image quality, by operating the SSPD at 50 MHz rather than 5 MHz, and using a counting time of 1 μsec rather than 10 μsec. Although the SSPD is capable of operating at this rate, we did not use these parameters because of a technical limitation in the speed at which we could move our nanomotion-controlled scanning stage (the maximum speed available was 1 mm/sec). Thus, with a sufficiently fast scanning mechanism, it is evident that SSPDs permit conveniently rapid data acquisition in photon-counting coherence-domain imaging. Figure 10: Single-photon axial scan of a 90-μm-thick silica window obtained with a scanning speed of 1 mm/sec and a counting time of 10 μsec per data point. The distance between the peaks is 134 μm, corresponding to the optical pathlength.
Conclusion
Coherence-domain imaging using single-photon counting allows weak light sources to be used for imaging weakly reflecting samples. We have demonstrated the use of superconducting single-photon detectors (SSPDs) in such an imaging system. These detectors are sensitive over the entire spectral range useful for OCT in biological samples. Neither Si nor InGaAs detectors have comparable sensitivity over the entire spectrum of interest. In addition, SSPDs can also provide high-acquisition-rate imaging, with counting rates as high as 100 MHz, if a sufficient flux of light is available. Although these detectors provide greater flexibility in the choice of optical sources that can be used for coherence-domain imaging, they do require cryogenic cooling, and are more expensive than ordinary semiconductor photodetectors, at least in the current state of our technology.
| 6,445.6 | 2008-10-06T00:00:00.000 | [ "Physics" ] |
Systematic assessment of GFP tag position on protein localization and growth fitness in yeast
While protein tags are ubiquitously utilized in molecular biology, they harbor the potential to interfere with functional traits of their fusion counterparts. Systematic evaluation of the effect of protein tags on localization and function would promote accurate use of tags in experimental setups. Here we examine the effect of Green Fluorescent Protein (GFP) tagging at either the N or C terminus of budding yeast proteins on localization and functionality. We use a competition-based approach to decipher the relative fitness of two strains tagged on the same protein but on opposite termini and from that infer the correct, physiological localization for each protein and the optimal position for tagging. Our study provides a first of a kind systematic assessment of the effect of tags on the functionality of proteins and provides a step towards broad investigation of protein fusion libraries.
Highlights
• Protein tags are widely used in molecular biology although they may interfere with protein function.
• The subcellular localization of hundreds of proteins in yeast is different when tagged at the N or the C terminus.
• A competition-based assay enables systematic deciphering of the correct tagging terminus for essential proteins.
• The presented approach can be used to derive physiologically relevant tagged libraries.
Report
Protein tags are essential for a variety of assays in biology - from affinity tags for protein purification to fluorescence tags for visualization. However, tagging proteins comes at a price: fusion proteins are different from their native form and may suffer from impaired activity, reduced stability, loss of binding partners, wrong targeting, etc. [1-4]. Often, the same tag may induce different phenotypes depending on where it appears on the protein. Most protein tags are added to one of the two termini of the polypeptide (carboxy terminus (C') or amino terminus (N')). However, with no a priori knowledge, choosing the appropriate tagging terminus for a protein of interest requires trial and error.
Here we report a systematic approach suited for gauging the effect of a tag on global protein functionality. We use a Green Fluorescent Protein (GFP) tag as a test case and rely on a recent comparison made between two whole-genome libraries of strains, each encoding one protein fused to GFP at either the N' [5] or the C' terminus [6,7]. In this comparison it was shown that 515 proteins in yeast are differentially localized when tagged at the opposing termini (Fig. 1A). While protein function can be impaired without displaying a mis-localization, it is clear that a difference in localization affects the capacity of a protein to function properly in a cellular context. Hence, we chose these proteins to test our method: which tagged terminus represents the physiologically relevant localization of these proteins?
To systematically address whether an N' or C' tag better represents the correct cellular localization of a given protein, we established a pairwise competition approach that relies on the assumption that there would be a growth advantage to the strain carrying the correctly localized protein form (Fig. 1B). While it may theoretically be the case that mis-localization can give rise to a growth advantage, here we assume that this is not the norm. We hypothesized that such a difference could be easily monitored in essential proteins where even partial loss of the protein's function inherently leads to a growth deficiency. Thus, to test this approach we focused on all the proteins in yeast that are both essential [8] and differentially localized (57 proteins, out of which 46 were successfully tested here; see Methods and Table S1 for further details). Flow cytometry was then used to infer the relative growth fitness difference (Δµ) for each pair of strains (N' vs. C' form) with identification of the fittest strain (as illustrated in Fig. 1B; for a full description of the assay see the Methods section).
Fig. 1. (A) Comparison of localization assignments between the C' tagged (y-axis; [6,7]) and N' tagged (x-axis; [5]) GFP genome-scale libraries. Altogether 515 proteins are differentially localized, representing about 10% of the entire collection of yeast proteins. Grayscale goes from white (least) to black (most) strains with altered localization (data from [5]). (B) Schematic representation of the pairwise competition approach. The C' tagged library was genetically modified to express cytosolic mCherry, giving rise to the "red" phenotype, which in turn allows the quantitative measurement of population sizes of the two variants separately using flow cytometry on pooled mixed samples.
A total of 21 proteins (out of the 46) showed a significant fitness difference between the two tagged forms (|Δµ|>1.5%; Fig. 1C); 14 cases where the C' tagged form was superior and 7 cases where the N' had an advantage. For example, Apc11, a catalytic core subunit of the Anaphase-Promoting Complex/Cyclosome (APC/C), showed an ER localization when C' tagged as opposed to a punctate localization with the N' tag (Fig. 1E), and had a growth advantage when C' tagged, suggesting that this protein may serve as a connection between the ER and cell cycle regulation. Another example is Rsp5, which has an advantage when it is localized to the nucleus with the C' tag while being in a punctate form when N' tagged. This may suggest that either its SUMO ligase activity [9] is its essential function or that the control of multivesicular body (MVB) sorting [10] is also achieved through nuclear control (Table S1).
An example of an N' tag winner is Hrt1, a RING-H2 domain core subunit of multiple ubiquitin ligase complexes, which showed an advantage when localized to the nucleus with an N' tag and was mis-localized to the cytosol when C' tagged (Fig. 1F). Rpn12 is shown as a representative control, where both tagged forms are localized to the same organelle and the fitness of both strains is similar (Fig. 1D).
Notably, essential proteins that had a different localization but showed identical growth rate at our resolution level (the remaining 25 proteins; Table S1) may indicate that both localizations are tolerated (for example in dual localized proteins) or neither (both tags may cause mis-targeting of the protein).
Comparison of each of the tagged variants to the wild type can help distinguish between the two cases, since, if both variants suffer from protein mis-localization, we expect the wild type to be fitter than either. Here we used colony-size quantification to compare the fitness of N' tagged variants to the wild type (Table S1). We found that out of the above 25 cases, where no significant fitness difference was found between the two forms, only 5 proteins had a significant reduction in colony size relative to the wild type (mean = 0.96, S.D. = 0.37, with a normal distribution according to the Shapiro-Wilk normality test), implying that in most cases where no superiority was observed, both localizations are tolerated. We are also aware that the presented approach may be more relevant for essential proteins, since, for non-essential proteins, the fitness difference between the mis- and well-localized variants may be too small to detect.
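A possible sketch of this colony-size analysis is shown below; the example sizes and the two-sigma cutoff used to flag a "significant reduction" are assumptions, since the exact significance criterion is not stated in the text, and scipy's Shapiro-Wilk test stands in for the normality test mentioned above.

```python
import numpy as np
from scipy import stats

def colony_size_scores(strain_sizes, wildtype_size):
    """Colony-size score: colony size of a strain divided by the wild-type
    colony size measured on the same plate."""
    return np.asarray(strain_sizes, dtype=float) / wildtype_size

# Illustrative check that the scores are normally distributed and flagging of
# strains whose score falls well below the wild type (assumed 2-sigma cutoff).
scores = colony_size_scores([0.95, 1.02, 0.88, 0.60, 1.05], wildtype_size=1.0)
w_stat, p_value = stats.shapiro(scores)               # Shapiro-Wilk normality test
threshold = scores.mean() - 2 * scores.std(ddof=1)
reduced = scores < threshold
```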
However, many "non-essential" proteins become essential under specific conditions (different media and/or genetic backgrounds), and hence they could be included in tailored analyses. For example, peroxisomal biogenesis proteins become essential when cells are grown in fatty acids as a sole carbon source and mutants lacking mitochondrial genome become essential when yeast are forced to respire.
Our work suggests a systematic methodology to evaluate the effects of protein tags. The presented approach can readily be extended to study the effect of additional tags and therefore can be used to derive multiple physiologically relevant tagged libraries. In a similar manner, one can also test the effect of a given tag on the cellular function of a protein, by comparing the fitness of two variants that are localized to the same place. To conclude, we believe that our approach provides a useful tool to study the relationship between protein function and cellular fitness. Accounting for potential caveats of protein tags is essential for accurate understanding of cell biology. Such data are hence valuable for systematic, as well as for detailed, investigation of many questions in molecular biology.
Methods
A total of 77 proteins were analyzed here (Table S1): 46 essential and differentially localized proteins (the main study subset); 12 essential and similarly localized proteins; 9 non-essential and differentially localized proteins; and 10 non-essential and similarly localized proteins. For each protein, two strains were mixed in SD media such that one strain was tagged with GFP at the C' of the protein of interest (taken from the genome-wide C' GFP yeast collection [7]) and the second strain was tagged with GFP at the N' (taken from the N' genome-wide yeast collection NATIVEpr-GFP [5]). To allow optical separation between the strains, we included endogenous soluble mCherry in the C' library strain (a TEF2pr-mCherry tag was introduced into the URA3 locus; for more details see [7]). Cells were grown together for 24 hours, diluted 32-fold, and then flow cytometry was used to monitor the population sizes of the two tagged variants. Gating of the GFP-positive population and the GFP- and mCherry-positive population was done using a custom Matlab script; all measurements were done in triplicates. Downstream computational data processing was done using a custom Python script. We imaged the C' and N' GFP tagged strain arrays using a ScanR system (Olympus) as previously described [7]. Images were acquired using a 60× air lens for GFP (excitation, 490/20 nm; emission, 535/50 nm), mCherry (excitation, 572/35 nm; emission, 632/60 nm), and brightfield channels. Images were transferred to ImageJ (1.51p Java1.8.0_144 (64-bit)) for slight, linear adjustments to contrast and brightness. Colony size quantification was done by plating yeast strains in 1536 format using a RoToR benchtop colony arrayer (Singer Instruments) [PMID: 21877281].
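The fitness inference itself is not spelled out in the Methods; the sketch below shows one common way the relative fitness difference Δµ could be estimated from the flow-cytometry counts, using a log-ratio estimator and taking log2(32) ≈ 5 generations per competition cycle, both of which are assumptions rather than the authors' actual procedure.

```python
import numpy as np

def relative_fitness_difference(n_count_initial, c_count_initial,
                                n_count_final, c_count_final,
                                generations=5.0):
    """Relative fitness difference (delta-mu) of the N'-tagged strain with
    respect to the C'-tagged (mCherry-marked) strain, from flow-cytometry
    population counts taken before and after a competition period.

    The log-ratio estimator and the number of generations per cycle are
    assumed; the text only states that delta-mu is inferred from the
    population sizes of the two variants.
    """
    ratio_initial = n_count_initial / c_count_initial
    ratio_final = n_count_final / c_count_final
    return np.log(ratio_final / ratio_initial) / generations

# Example: the N'-tagged strain drops from 50% to 45% of the mixed culture.
delta_mu = relative_fitness_difference(5000, 5000, 4500, 5500,
                                       generations=np.log2(32))
print(f"delta-mu ~ {100 * delta_mu:.1f}% per generation")
```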
Strains were grown overnight at 30 °C and photographs of the plates were analyzed for colony size using SGAtools [11]. The final colony-size score was calculated by dividing the colony size of a specific strain by the wild-type colony size from the same plate.
| 2,378.6 | 2018-07-02T00:00:00.000 | [ "Biology" ] |
Characterization of Linezolid-Analogue L3-Resistance Mutation in Staphylococcus aureus
In a previous study, a linezolid analogue, called 10f, was synthesized. The 10f molecule has an antimicrobial activity comparable to that of the parental compound. In this study, we isolated a Staphylococcus aureus (S. aureus) strain resistant to 10f. After sequencing the 23S rRNA and the ribosomal protein L3 (rplC) and L4 (rplD) genes, we found that the resistant phenotype was associated with a single mutation, G359U, in rplC, leading to the missense mutation G120V in the L3 protein. The identified mutation is far from the peptidyl transferase center, the binding site of oxazolidinone antibiotics, thus suggesting that we have identified a new and interesting example of a long-range effect in the ribosome structure.
Introduction
Multidrug resistance in Gram-positive pathogenic bacteria is one of the most significant challenges for the scientific community involved in the research and discovery of new and more effective antimicrobial agents active against these pathogens. Linezolid, an oxazolidinone antibiotic, is effective for the treatment of infections caused by Gram-positive pathogens resistant to other antibiotics, including methicillin-resistant S. aureus (MRSA), vancomycin-resistant enterococci (VRE), and penicillin-resistant Streptococcus pneumoniae [1]. Favorable pharmacokinetic and toxic-effect profiles, consistent with oral or intravenous administration in humans, represent significant features which make linezolid an antibiotic of great success [2], also showing several characteristics appropriate to reduce the occurrence of drug resistance.
Indeed, linezolid is a completely synthetic drug; thus, no natural and pre-existing pool of resistance genes would be expected to ease the appearance of resistance mechanisms. Furthermore, it has a unique mechanism of action which targets bacterial protein synthesis at an extremely early stage [3], and, consequently, cross-resistance between the drug and commercially available antimicrobials would be remote.
In any case, the identification of linezolid-resistant bacteria [4] has already underlined the need to find new oxazolidinone-type drugs with different targets that bypass resistance. Studies of new oxazolidinones with structural changes and improved features are underway and the research area is very active [5]. In a previous paper [6], we described the design, the synthesis and the preliminary anti-bacterial activity of unreported linezolid analogues bearing the urea and thiourea functionality at the C-5 position. In this paper, we describe the anti-microbial activity of one of these linezolid analogues, called 10f. To understand the mechanism of action of this analogue, resistant mutants of S. aureus were generated.
Isolation of Resistant and Revertant Mutant 10f
Five microliters of an overnight bacterial suspension of S. aureus ATCC 6538P were used for MIC determination and were inoculated into multiwell plates containing different concentrations (from 1 to 32 µg/mL of 10f). We found that bacterial cells were able to grow at a concentration of 10f of 16 µg/mL. This resistance was present and stable when the strain was taken under antibiotic selection.
To select for the revertant phenotype, a resistant strain was grown in the absence of 10f. Four independent resistant colonies were propagated in 96-well microtiter plates in an antibiotic-free medium for 50 days. The plates also contained control wells that were not inoculated with cells. A volume of 1.2 µL of each stationary-phase culture was transferred daily to 100 µL of fresh medium using a manual-held 96-pin replicator.
Bacterial DNA Extraction
Genomic DNA was extracted from single S. aureus isolate colonies that were inoculated in 5 mL of Luria Bertani broth and incubated overnight at 37 °C, using a guanidinium-thiocyanate-based method [9].
Polymerase Chain Reaction (PCR) Amplification of Individual 23S rRNA, rplD and rplC Genes
Primer couples were designed based on the published S. aureus genome N315 (GenBank accession no. NC_002745). For S. aureus isolates with 6 copies of the 23S rRNA operon, we used the primers listed in Table 1 (rrn1-rrn6) [10]. PCR conditions were 1 min at 94 °C and 30 cycles of denaturing, annealing, and extension at 94 °C (30 s), 55 °C (30 s), and 72 °C (5 min). PCR products were separated by agarose (1.5%) gel electrophoresis. The 6 individual bands were then gel-extracted and purified (Qiagen). For each purified rRNA gene fragment, the domain V region spanning 2280-2699 bp (Escherichia coli numbering) was amplified. The primers used were 5′-GCGGTCGCCTCCTAAAAG-3′ (upper primer, corresponding to bases 2280-2297 of the S. aureus 23S rRNA gene; GenBank accession no. X68425) and 5′-ATCCCGGTCCTCTCGTACTA-3′ (lower primer, complementary strand corresponding to bases 2680-2699 of the S. aureus 23S rRNA gene, GenBank accession no. X68425). PCR conditions were 5 min of lysis and denaturation at 94 °C; 30 cycles of denaturing, annealing, and extension at 94 °C (30 s), 55 °C (30 s), and 72 °C (1 min), respectively; and a final 10 min extension at 72 °C. The products were ∼390 bp in size and were separated by agarose (1.5%) gel electrophoresis. PCR conditions for rplD gene amplification were 5 min of lysis and denaturation at 94 °C; 30 cycles of denaturing, annealing, and extension at 94 °C (30 s), 44 °C (30 s), and 72 °C (1 min), respectively; and a final 10 min extension at 72 °C. The products were ∼624 bp in size and were separated by agarose (1.5%) gel electrophoresis. PCR conditions for rplC gene amplification were 5 min of lysis and denaturation at 94 °C; 30 cycles of denaturing, annealing, and extension at 94 °C (30 s), 50 °C (30 s), and 72 °C (1 min), respectively; and a final 10 min extension at 72 °C. The products were ∼663 bp in size and were separated by agarose (1.5%) gel electrophoresis. The PCR products were gel-extracted and purified (Qiagen). They were then sequenced by use of the standard dideoxynucleotide method (Molecular Biology Core Facility, Dana-Farber Cancer Institute; Boston, MA). Sequence data were analyzed by the use of MEGALIGN (DNASTAR) and CHROMAS (version 1.45; Conor McCarthy, School of Health Sciences, Griffith University, Gold Coast Campus; Southport, Queensland, Australia).
Table 1. Primers used for amplification of the individual 23S rRNA operons (For-rrn-all; rrn1-rrn6).
Modelling
Docking of 10f in the structure of the large ribosomal subunit from S. aureus bound to linezolid (PDB code 4WFA) was performed as previously described [6]. For comparison, the numbering of the rRNA from H. marismortui has been used throughout the text.
10f Antimicrobial Activity
In order to find antibiotics active against linezolid-resistant strains, a new series of 5-substituted oxazolidinones derived from linezolid, bearing urea and thiourea moieties at the C-5 side chain of the oxazolidinone ring, was tested in a previous paper [6]. The 10f compound (Figure 1B) demonstrated antimicrobial activity comparable to that of linezolid (Figure 1A) against S. aureus, with an MIC value of 1 µg/mL. In the present study, we tested 10f against different Gram-positive strains. Table 2 lists the minimum inhibitory concentrations (MIC) of linezolid and 10f against different ATCC strains of S. aureus, Enterococcus faecalis, Enterococcus faecium, S. epidermidis and one methicillin-resistant S. aureus strain (WKZ-2). These results showed that the activity of 10f was comparable to that of linezolid against the tested strains. Compound 10f, like linezolid, did not induce significant changes in cell viability for two different eukaryotic cell lines (HeLa and H1299) in the concentration range tested and for the exposure times described in the Materials and Methods section.
Isolation of S. aureus Mutant Resistant to 10f
To characterize the mechanism of action of 10f, we tried to isolate S. aureus mutants showing resistance to this compound. The S. aureus ATCC 6538P 10f-susceptible strain (MIC 1 µg/mL) was grown at 37 °C in Mueller-Hinton broth and then serially passaged in medium containing increasing concentrations (1 to 32 mg/L) of 10f. During these passages, one S. aureus descendant was isolated, which showed a 10f MIC increase to 16 µg/mL. The 23S rRNA genes of this resistant mutant were amplified and sequenced as previously described [10,11]. None of the six 23S genes were mutated. As a number of 50S large-subunit ribosomal proteins have regions which interact closely with the oxazolidinone binding site in the peptidyl transferase center (PTC), to identify the mutated locus responsible for 10f resistance we sequenced two genes coding for ribosomal proteins already described to be involved in linezolid resistance: L4 and L3. L4 belongs to a conserved family of r-proteins with mixed α-helices and β-strands [12]; it is essential for the early steps of ribosome assembly in both bacteria and eukaryotic cells [13,14]. In mature ribosome structures, its globular body domain overlaps the external moieties of domains I and II, while its internal loop region fits deep into the same domains, also reaching parts of the peptidyl transferase center (PTC) in domain V [15]. The resistant strain that we isolated did not show mutations in the gene coding for protein L4 [16]. Therefore, we decided to focus on the gene encoding the L3 ribosomal protein. Mutations in L3 have been associated with resistance against tiamulin (TIA) and retapamulin (whose binding site overlaps with that of oxazolidinones in the PTC) [17][18][19]. However, different researchers have described a variety of L3 mutations in S. aureus following in vitro selection with oxazolidinones [20]. In our case, we found the mutation G359U in the rplC gene, corresponding to a missense mutation at position 120 in the L3 protein that causes a change from glycine (wild type) to valine (resistant). Very interestingly, in three different experiments, we propagated (see Methods) the S. aureus-resistant populations in the absence of 10f for 50 transfers (approximately 400 generations) by diluting 1% of the saturated cultures into fresh medium every 24 h, and we selected a representative clone for further analysis. The 10f resistance disappeared together with the mutation in the L3 gene (see Table 3 for MIC values). This finding is a strong indication that the mutation G120V could be the major factor responsible for the resistant phenotype.
Interaction Model between 10f and the Bacterial Ribosome
We have previously reported a docking analysis of 10f in the structure of the Haloarcula marismortui ribosome (PDB code 3CPW) [6]. We suggested that the peculiar syn-anti conformation of the nitrophenyl-thiourea moiety in 10f fits very well with the linezolid pocket, allowing several additional van der Waals and polar interactions, which could counterbalance the loosening effects of mutations conferring resistance to linezolid (e.g., G2447U, U2500A and G2576U). Recently, Eyal and co-workers published the crystal structure of the large ribosomal subunit from S. aureus bound to linezolid [21]; therefore, we repeated the docking analysis using this structure as reference (PDB code 4WFA). The structures of the complexes between linezolid and the large ribosomal subunits from H. marismortui and S. aureus are very similar but not identical. The main difference is in the orientation of the acetamide moiety. In the ribosome from S. aureus, it points toward a small pocket defined by G2447 and C2501. In the ribosome from H. marismortui, it is folded onto the oxazolidinone ring, pointing toward the formyl-Phe-CCA ligand (an analogue of the formyl-Met-tRNA), and the small pocket in this structure hosts a potassium ion. Whether or not the difference is caused by the presence of the formyl-Phe-CCA ligand, in the case of 10f the nitrophenyl-thiourea moiety is too large to adopt the same orientation of the acetamide group observed in the structure of the S. aureus ribosome (Figure 2). Indeed, the docking of 10f in the ribosome of S. aureus suggests that it should adopt an orientation very similar to that found in the case of the ribosome of H. marismortui (Figures 2, 3A and 4A).
Figure 2. (A) Linezolid bound to the large ribosomal subunit from S. aureus (PDB code 4WFA). (B) Model of the complex 10f/large ribosomal subunit. Linezolid, 10f, and the surrounding nucleotides are shown as sticks. Nitrogen atoms are shown in blue, oxygen in red, fluorine in pale cyan; carbon atoms of linezolid and 10f are in green, carbon atoms of the nucleotides of the linezolid binding pocket are in magenta, carbon atoms of the nucleotides which contact only 10f or make more extended contacts with 10f than with linezolid are in white, carbon atoms of G2576 are in yellow, and carbon atoms of its neighbors (C2575, G2578, C2579, and U2580) are in orange. The dashed white line in panel (B) indicates the possible H-bond between the nitro group of 10f and the NH2 group at position 6 of A2058.
In particular, the nitrophenyl-urea moiety makes van der Waals contacts with A2503, G2505, and A2059. Moreover, the nitro group is involved in a H-bond with the N6 of A2058 (Figures 2B and 4A). The additional and extended van der Waals contacts of the nitrophenyl-urea moiety with G2505 are particularly interesting, as the base of this nucleotide is stacked on the base of G2576, which is mutated to U in several linezolid-resistant strains [22] (Figures 2-4). Likely, the substitution of a purine with a pyrimidine at position 2576 makes the surroundings of G2505 more flexible, thus reducing the interaction with linezolid. In the case of 10f, the additional stacking interaction between the nitrophenyl ring and G2505 (Figures 2B, 3A and 4A) would counterbalance the increased mobility of this nucleotide, thus allowing a strong interaction of 10f with the mutated ribosome.
Figure 3. L3 is shown as sticks and cartoon, except for residue Gly120, shown as spheres. 10f and the surrounding nucleotides are shown as sticks. L3 is colored according to the secondary structure (helices, red; strands, yellow; loops, green), except Gly120, shown in cyan. For 10f and the surrounding nucleotides, the color code is the same as in Figure 2.
The model of the complex 10f/ribosome also provides a possible explanation for the effects of the mutation G120V in the L3 protein. As shown in Figures 3 and 4, L3 has an unusually long loop (spanning from T119 to V178) with an extended beta-structured stem that penetrates into the ribosome and approaches the region including G2576. In particular, C2575, G2577, C2578, and U2579 are in direct van der Waals contact with several residues in the middle part of the loop, namely G144, S145, H146, F147, G152, S153, G155, M156, and A157 (Figure 4B). Mutation G120V is at the base of the loop (Figures 3 and 5) and, due to the considerably higher volume of the valine side chain (Figure 5), it would likely cause a rearrangement of the structure that would propagate to the tip of the loop, thus indirectly influencing the mobility of the C2575-U2580 region and hence of the linezolid/10f binding pocket. In particular, one could speculate that the mutation G120V could indirectly cause a repositioning of G2576 and of the adjacent G2505, which in turn would occupy, at least in part, the cavity where the nitrophenyl moiety of 10f is hosted, thus decreasing its binding affinity. Therefore, G120V would be a very interesting example of long-range effects in the complex ribosome structure. It is worth noting that the suggested changes would influence only or mainly the nitrophenyl-thiourea binding pocket, thus selectively impairing the binding of 10f with respect to linezolid and explaining why the mutation G120V does not affect linezolid's activity.
Discussion
The spreading of strains resistant to linezolid proves that bacteria can develop resistance in a few years, even against completely artificial antimicrobials. This makes it essential not only to continue searching for new antibiotics but also to study the molecular strategies that bacterial cells adopt to develop resistance in order to design more lasting antibiotics.
In our case, by growing S. aureus in the presence of increasing concentrations of the linezolid derivative 10f, we were able to isolate a resistant strain which revealed a very interesting mutation in the ribosomal protein L3. The observed mutation, G120V, to the best of our knowledge, has never been reported before, neither in linezolid- nor other antibiotic-resistant S. aureus, nor in other resistant bacteria. As the site of the mutation is quite far from the linezolid and 10f binding site, i.e., the peptidyl transferase center, the observed mutation likely has long-range effects on the shape and/or dynamics of this essential site of the ribosome. As described in detail above, a close inspection of the structure of the ribosome suggests that the mutation G120V might cause a small rearrangement of a long loop which, protruding from the body of L3, penetrates into the ribosome, contacting some of the nucleotides that line the peptidyl transferase center. This could cause a reduction in the cavity necessary to host the bulky nitrophenyl-thiourea moiety which characterizes 10f. It is worth noting that most of the known mutations in L3 associated with linezolid resistance are located in the central part of the mentioned loop, hence considerably nearer to the peptidyl transferase center than the residue at position 120 [23,24]. Even more interestingly, the mutation G120V in the L3 gene reverted in three different replicated experiments when the mutated strain was grown in the absence of 10f, and the reversion of the mutation was accompanied by the disappearance of the resistant phenotype. These findings demonstrate that the observed mutation, even if advantageous in the presence of 10f, decreases the biological fitness of S. aureus, thus making its spreading among the population unlikely. Very interestingly, our findings suggest that targeting the cavity which hosts the nitrophenyl-thiourea moiety of 10f is a promising strategy to develop further linezolid derivatives with a lower potential to induce the emergence of resistant strains.
"Biology",
"Chemistry"
] |
Prebiotic supplementation modulates selective effects of stress on behavior and brain metabolome in aged mice
Aging has a significant impact on physiology, with implications for central nervous system function coincident with increased vulnerability to stress exposures. A number of stress-sensitive molecular mechanisms are hypothesized to underpin age-related changes in brain function. Recent cumulative evidence also suggests that aging impacts gut microbiota composition. However, the impact of such effects on the ability of mammals to respond to stress in aging is still relatively unexplored. Therefore, in this study we assessed the ability of a microbiota-targeted intervention (the prebiotic FOS-Inulin) to alleviate age-related responses to stress. Exposure of aged C57BL/6 mice to social defeat led to an altered social interaction phenotype in the social interaction test, which was reversed by FOS-Inulin supplementation. Interestingly, this occurred without affecting social defeat-induced elevations in the stress hormone corticosterone. Additionally, the behavioral modifications following FOS-Inulin supplementation were not coincident with improvements in pro-inflammatory markers. Metabolomics analysis was performed and, intriguingly, age-associated metabolites were shown to be reduced in the prefrontal cortex of stressed aged mice, a deficit that was recovered by FOS-Inulin supplementation. Taken together, these results suggest that the prebiotic dietary intervention rescued the behavioral response to stress in aged mice, not through amelioration of the inflammatory response, but by restoring the levels of key metabolites in the prefrontal cortex of aged animals. Therefore, dietary interventions could be a compelling avenue to improve the molecular and behavioral manifestations of chronic stress exposures in aging via targeting the microbiota-gut-brain axis.
Introduction
Aging is a complex multiorgan process that involves considerable molecular remodelling, characterized by defined hallmarks such as mitochondrial dysfunction, loss of proteostasis and altered intercellular communication, which largely shape the immune landscape due to time-dependent accumulation of cellular damage (López-Otín et al., 2013). Inflammaging, the accumulation of pro-inflammatory signals that follows aging in mammals, has been the subject of much investigation in the context of diverse age-related pathologies (Franceschi et al., 2018). Further, the allostatic load caused by the constant physiological adaptation in response to chronic stress (McEwen, 2007) is an important predictor of mortality (Robertson et al., 2017). Additionally, there are many neurobiological similarities between stress and aging that make it crucial to explore the effects of stress in the context of aging (Prenderville et al., 2015). These changes include hyperactivation of the hypothalamic-pituitary-adrenal (HPA) axis, deficiencies in neurotrophic factors such as brain-derived neurotrophic factor (BDNF), decreases in adult hippocampal neurogenesis, increased neuroinflammation, disruption of the blood-brain barrier, and changes in several neurotransmitters such as 5-hydroxytryptamine (5-HT or serotonin), noradrenaline, dopamine, glutamate, and γ-aminobutyric acid (GABA), all of which lead to neuronal dysfunction (Prenderville et al., 2015). Stress and HPA axis activation have been associated with a compromised profile of cognitive functions later in life (Qiu et al., 2021; Rimmele et al., 2022). Moreover, animal studies have highlighted a differential effect of chronic stress on dendritic spine remodelling in the prefrontal cortex (PFC) of aged compared to young rats (Bloss et al., 2011).
It is worth noting that the manifestation of anxiety and major depressive disorder is substantial in elderly adults, which significantly impacts their quality of life as a result of social isolation (Cacioppo and Hawkley, 2009;Prenderville et al., 2015). In humans, meta-analyses show that social networks tend to reduce with age and that novelty seeking, particularly relating to generating new social relationships, is reduced with age (Jopp et al., 2016;Prenderville et al., 2015). Even though social behavior has also been shown to be decreased in aged mice (Scott et al., 2017), this behavioral facet is still relatively unexplored in the context of aging (Prenderville et al., 2015). Taken together, the understanding of the impact of stress on/during aging is essential to provide solutions better suited for this particular demographic.
Throughout our lifespan, the gut microbiota (the millions of microbes that inhabit the gut) has been increasingly linked to the maintenance of homeostasis (Fung et al., 2017; Lynch and Pedersen, 2016; Miquel et al., 2018). Gut microbiome compositional changes have been reported with aging, namely a reduction of some commensal taxa (i.e., Roseburia, Bifidobacterium and Prevotella) and an increase in other commensals (such as Akkermansia and Christensenellaceae) (Claesson et al., 2011; Ghosh et al., 2022). Furthermore, gut microbiomes from healthy individuals, while showing diverging time-dependent changes with age, are not only accompanied by increased circulation of specific microbial metabolites in the plasma, but also predict extended survival later in life (Wilmanski et al., 2021). These age-dependent gut microbiome alterations can reflect age-associated decline in health, but also suggest that lifestyle factors, particularly diet, offer an opportunity to shape the gut microbiome and hence contribute to better health outcomes (Ghosh et al., 2022). Diet has been shown not only to impact gut microbiome composition, but also to predict and shape inflammation, mental health markers/status and cognitive function in aged human populations (Claesson et al., 2012; Ghosh et al., 2020).
The PFC is an important brain structure involved in emotional processing and social behavior (Franklin et al., 2017), also in response to social defeat (Challis et al., 2014;Covington et al., 2010), and is related to emotional regulation in aged humans (van Reekum et al., 2018;Winecoff et al., 2011). Further in aged rats, stress has been shown to impact neural oscillations in the PFC (Takillah et al., 2017). Interestingly, changes in the composition of the gut microbiome have been shown to drive microRNA expression in the PFC, along with transcriptional changes and regulation of the myelination process, thus implying a crucial role for the gut microbiome in the development and function of the PFC (Gacias et al., 2016;Alan E. Hoban et al., 2017;A. E. Hoban et al., 2016). Additionally, the gut microbiome has been implicated in the development, programming and expression of social behavior (Agranyoni et al., 2021;Desbonnet et al., 2014;Sherwin et al., 2019;Wu et al., 2021).
Diet plays a prominent role in shaping the gut microbiome, making it an accessible target for microbiota modulation (Audet, 2021;Sanders et al., 2019). Prebiotics, substrates that are used by gut microorganisms to confer health benefits to the host, are increasingly used as a dietary intervention to modulate the gut microbiome for health benefits (Sanders et al., 2019). Inulin, a polysaccharide dietary fiber, widely used as a prebiotic, has been shown to modulate the gut microbiome and thereby shape the immune system (Han et al., 2021;Zou et al., 2018). This prebiotic has been shown to modulate the gut microbiome in middle aged mice, reducing neuroinflammation and peripheral inflammation in response to stress (Boehme et al., 2020). We hypothesize that a prebiotic dietary intervention can modulate age-related changes in the gut microbiome to counter the effects of stress. We assess whether any potential effects are coincident with changes in the production of microbial and host metabolites in the prefrontal cortex and cecum.
Animals
Male aged C57BL/6 mice (n = 40; 18-19 months old; Charles River, Kent, UK) were used in this study. All experiments were conducted in accordance with European Directive 86/609/EEC, Recommendation 2007/526/65/EC, and approved by the Animal Experimentation Ethics Committee of University College Cork. Animals were kept under a 12-h light/dark cycle, with a temperature of 21 ± 1 °C and humidity of 55 ± 10%. Food and water were given ad libitum. Approximately one week before commencement of social defeat sessions, all mice were singly housed and weighed daily over the course of the experimental protocol. For the chronic social defeat stress procedure, non-experimental singly housed adult male CD1 mice (5 months old) were used as aggressors (Envigo, UK).
Study design
Aged C57BL/6 mice (19 months old) were fed either a FOS-Inulin-enriched diet or a chow diet for 19 days before the beginning of the stress protocol. For 6 days, all aged animals were exposed to social defeat stress (Fig. 1a). To understand if the prebiotic intervention acts on the gut-brain axis to shape the stress response in aged mice, we analysed social behavior using the social interaction test (Day 25) and the three-chamber test (Day 26). Tissues and biomarkers were harvested on day 27, including for measurement of circulating corticosterone levels, ileal pro-inflammatory cytokine levels, and prefrontal cortex and cecal metabolomics (Fig. 1b).
Stress protocol
Mice were randomly assigned to either the stress (n = 20) or control group (n = 20). Chronic social defeat stress was carried out daily for 6 consecutive days (see Fig. 1a for the experimental timeline) as previously described, but with slight modifications (Savignac et al., 2011). Prior to the defeat sessions, all CD1 aggressor mice were tested for aggressiveness over two separate days: a CD1 mouse was exposed to another CD1 mouse until the first attack. Mice with the shortest attack latencies were selected as aggressors to be used in subsequent social defeats. For each defeat session, experimental mice were exposed to a different aggressor CD1 mouse each day over the 6-day period. The session involved a single initial exposure of the test mouse to the aggressive CD1 in the home cage of the aggressor (33 × 15 × 13 cm) and lasted until the first attack with expression of submissive posturing, or until 5 min had passed. The latency to attack or display a submissive posture was recorded. The mice were then separated by a perforated Plexiglas® wall that allowed non-physical contact for 2 h. The separator was then removed and, after another defeat, mice were returned to their home cage. Control animals were left undisturbed in their home cages during the stress protocol.
Social interaction test
One day after the last social defeat session, the social interaction test was conducted as described before (A Gururajan et al., 2019). Briefly, it encompassed two trials of 150 s each. The test was performed in an open arena (40 × 32 × 24 cm, L × W × H) containing an empty wire-mesh cage (9.5 × 7.5 × 7.0 cm) placed in the middle of one of the walls of the arena. During the first trial, the chamber in the social exploration box was empty; in the second trial, an unfamiliar non-aggressive CD-1 male mouse was placed inside the exploration chamber in the arena. Both mice were returned to their home cages upon completion of the test, and the arena was wiped with 70% ethanol. All mice were habituated 45 min before testing, and testing was conducted under red light (5 lux) and recorded from the ceiling. The interaction zone is defined as a rectangular area (25 cm × 15 cm) around the wire-mesh cage where the CD1 target mouse is placed during the interaction phase. The corner zones are defined as square areas (9 cm × 9 cm) in both corners opposite the wire-mesh cage. Time spent in the interaction zone, time spent in the corner zones, movement and entries in the corner zones, and time spent facing the wire mesh were scored using deep-learning-informed software analysis coupled with SimBa 1.1.3, an open-source toolkit for computer classification of complex social behaviors in experimental animals (Nilsson et al., 2020). The time facing the CD1 ratio was calculated as the time spent facing the interaction zone during the second trial (CD1 target present) divided by the time spent facing the interaction zone during the first trial (CD1 target absent). DeepLabCut 2.2 with CUDA Toolkit 10.1 and Tensorflow 1.12.0 was used to perform the pose-estimation analysis (Mathis et al., 2018). We defined an 8-body-part pose configuration and labelled 200-250 frames from representative videos from each group. A deep neural network was trained for 155,000 iterations, by which point the loss had relatively flattened (Mathis et al., 2018; Nath et al., 2019). The trained network could accurately track the position of the mice in the full sets of video segments. The labelled x-axis (i.e., left-right) and y-axis (i.e., bottom-top) pixel positions in each frame were stored and exported in CSV format. Further analysis was performed using SimBa 1.1.3, where the width of the arena in centimeters was compared to the width in pixels to generate a pixel-to-centimeter ratio (Nilsson et al., 2020). Next, we defined the region-of-interest analysis, as described above, and extracted the metrics calculated from the X-Y coordinates of the center body part on a frame-by-frame basis. We followed the recommendations described at https://github.com/sgoldenlab/simba.
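As an illustration of this region-of-interest scoring step, the Python sketch below shows how zone-occupancy times can be derived from DeepLabCut-style frame-by-frame coordinates once a pixel-to-centimeter ratio is known. It is not the authors' script: the file name, column names, frame rate, pixel ratio and zone coordinates are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): zone occupancy from tracking output,
# assuming a CSV with per-frame x/y pixel coordinates of the animal's body centre.
import pandas as pd

FPS = 25            # assumed video frame rate
PX_PER_CM = 12.0    # assumed pixel-to-centimetre ratio derived from the arena width

def time_in_rect(df, x0, y0, x1, y1):
    """Seconds the tracked centre point spends inside a rectangular zone (cm)."""
    x_cm = df["centre_x"] / PX_PER_CM
    y_cm = df["centre_y"] / PX_PER_CM
    inside = x_cm.between(x0, x1) & y_cm.between(y0, y1)
    return inside.sum() / FPS

tracks = pd.read_csv("trial2_tracking.csv")                      # hypothetical file
interaction_time = time_in_rect(tracks, 3.5, 0.0, 28.5, 15.0)    # 25 x 15 cm zone
corner_time = (time_in_rect(tracks, 0.0, 23.0, 9.0, 32.0)        # two 9 x 9 cm corners
               + time_in_rect(tracks, 31.0, 23.0, 40.0, 32.0))
print(f"interaction zone: {interaction_time:.1f} s, corners: {corner_time:.1f} s")
```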
3-Chamber test
Social cognition was evaluated using the 3-chamber social interaction test, in which time spent interacting with a novel conspecific is compared to time spent with a novel object or familiar conspecific. It is based on the premise that mice will prefer to seek an animal over an inanimate object, and that they will prefer a novel conspecific to a familiar one. This test was performed 24 h after the social interaction test. Mice were habituated to the room for 1 h before testing.
The test arena consisted of 3 chambers; the left and right chambers measured 13.5 × 20 × 20 cm and the center chamber was 9 × 20 × 20 cm. A solid partition divided the chambers, with a small hole allowing access to the other chambers. There were 3 trials in this test: habituation, sociability, and social novelty preference. All phases of the test lasted 10 min, were performed sequentially, and were recorded from above for later analysis. During the habituation phase, the mouse was placed into the center chamber and then allowed access to the empty left and right chambers for 10 min. The mouse was then gently coaxed to the center chamber and, for the sociability trial, a novel mouse was placed in a mesh cage in one of the side chambers, whereas a novel object (a small rubber duck) was placed in a mesh cage in the other side chamber. Placement of the novel mouse and novel object was randomized between animals to eliminate side preferences. For the social novelty trial, an age-matched novel mouse was placed in the mesh cage that had previously housed the novel object. The 3-chamber apparatus was cleaned with 70% ethanol between animal trials. The animals were habituated to the room for 45 min before the test, and the test was conducted under dim light (60 lux). The time spent in each chamber was then scored using DeepLabCut and SimBa, as described in the previous section.
Tissue collection
One day after the conclusion of behavioral testing, the animals were sacrificed. Animals were killed by decapitation in a random fashion regarding testing groups between 09.00 h and 15.00 h. Trunk blood was collected in EDTA-containing tubes and centrifuged for 15 min at 10,000 g at 4 °C. Plasma was collected and stored at −80 °C for later analysis. Whole cecum and ileum were removed and snap-frozen on dry ice and stored at −80 °C. Brain tissue was rapidly hand-dissected and snap-frozen on dry ice and then stored at −80 °C until further tissue processing. Spleens and mesenteric lymph nodes (MLNs) were dissected out of the animals and were processed for flow cytometry as described in the corresponding section.
Fig. 1. Experimental design. a) After approximately three weeks of FOS-Inulin dietary supplementation, animals were exposed to 6 days of social defeat stress. Subsequently, mice underwent the social interaction test and the 3-chamber sociability test, followed by sacrifice. b) Experimental outputs measured in this experiment.
Prefrontal cortex and cecal metabolomics
The cecal and prefrontal cortex metabolomes were analysed by MS-Omics as follows. Prefrontal cortex and cecal content were acidified using hydrochloric acid, and deuterium-labelled internal standards were added. All samples were analysed in a randomized order. Analysis was performed using a high-polarity column (Zebron™ ZB-FFAP, GC Cap. Column 30 m × 0.25 mm x 0.25 μm) installed in a GC (7890 B, Agilent) coupled with a quadrupole detector (5977 B, Agilent). The system was controlled by ChemStation (Agilent). Raw data were converted to netCDF format using ChemStation (Agilent), before the data were imported and processed in Matlab R2014b (Mathworks, Inc.) using the PARADISe software described by Johnsen et al. (Johnsen et al., 2017).
Peaks were quantified using area under the curve (AUC). Biostatistics were run in R (version 4.1.2) with the Rstudio GUI (version 1.4.1717). Principal-component analysis was performed on CLR-transformed values (Aitchison et al., 2000). The PERMANOVA implementation from the vegan library was used to find structural differences between treatments on a compositional level. To find metabolites that were differentially abundant based on either stress or prebiotic supplementation, we fitted linear models using the CLR-transformed metabolite levels with both factors as explanatory variables. Linear models were also used to test for concordance or discordance of metabolite levels between the prefrontal cortex and cecum, again including stress and prebiotic as additional explanatory variables. In order to assess differences between singular pairs of groups we used Tukey's HSD procedure. To correct for multiple testing (FDR) in tests involving metabolomics features, Storey's q-value posthoc procedure was performed with a q-value of 0.2 as a cut-off (Storey, 2002). Custom scripts to analyze data can be found online at https://github.com/thomazbastiaanssen/Tjazi (Bastiaanssen et al., 2022). Metabolomics figures were generated using ggplot2.
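For readers who wish to reproduce the gist of this pipeline, the short Python sketch below mirrors the described workflow (CLR transformation followed by per-metabolite linear models with stress and diet as explanatory variables, plus FDR control). The original analysis was run in R with the vegan library and Storey's q-values, so this is only an approximation: the Benjamini-Hochberg procedure stands in for the q-value step, and the file and column names are placeholders.

```python
# Hedged Python sketch mirroring the described R workflow (CLR transform, then
# per-metabolite linear models with stress and diet as explanatory variables).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def clr(counts, pseudocount=0.5):
    """Centred log-ratio transform of a samples x metabolites matrix."""
    x = np.log(counts + pseudocount)
    return x - x.mean(axis=1, keepdims=True)

raw = pd.read_csv("pfc_metabolites.csv", index_col=0)    # hypothetical AUC table
meta = pd.read_csv("sample_metadata.csv", index_col=0)   # assumed stress / diet factors
clr_vals = pd.DataFrame(clr(raw.values), index=raw.index, columns=raw.columns)

pvals = {}
for metabolite in clr_vals.columns:
    df = meta.assign(y=clr_vals[metabolite])
    fit = smf.ols("y ~ C(stress) * C(diet)", data=df).fit()
    # in this 2 x 2 design the interaction term is the last coefficient
    pvals[metabolite] = fit.pvalues.iloc[-1]

reject, qvals, _, _ = multipletests(list(pvals.values()), alpha=0.2, method="fdr_bh")
hits = [m for m, keep in zip(pvals, reject) if keep]
print("metabolites with a stress x diet interaction (FDR 0.2):", hits)
```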
Plasma corticosterone quantification
Corticosterone quantification of plasma (15 μL) collected from trunk blood at sacrifice was performed using a corticosterone ELISA (Enzo Life Sciences), according to the manufacturer's instructions, and was analysed as previously described (Bastiaanssen et al., 2021; Anand Gururajan et al., 2022). A multi-mode plate reader (Synergy HT, BioTek Instruments) was used to quantify light absorbance in the assay at 405 nm. Only data derived from duplicates with <15% CV were included in the analysis. Concentrations of plasma corticosterone are expressed in ng/mL. The limit of detection is reported in Supplementary Table 1.
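A minimal, hypothetical helper illustrating the duplicate-quality rule (a sample is included only when the coefficient of variation of its duplicate wells is below 15%); the example readings are made up.

```python
# Small illustrative helper (not from the paper): keep an ELISA sample only if
# the coefficient of variation (CV) of its duplicate wells is below 15%.
import statistics

def keep_duplicate(rep1, rep2, max_cv=15.0):
    mean = statistics.mean([rep1, rep2])
    cv = statistics.stdev([rep1, rep2]) / mean * 100
    return cv < max_cv, cv

ok, cv = keep_duplicate(102.4, 95.8)   # made-up duplicate readings in ng/mL
print(f"CV = {cv:.1f}% -> {'keep' if ok else 'exclude'}")
```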
Cell isolation and flow cytometry
Spleens and mesenteric lymph nodes (MLNs) were dissected out of the animal, cleaned from fat tissue and stored in media (RPMI-1640 medium with L-glutamine and sodium bicarbonate -R8758, Sigma), supplemented with 10% FBS (F7524l, Sigma) and 1% Pen/strep (P4333, Sigma) on wet ice for flow cytometry the same day.
Flow cytometry was performed as previously described (Boehme et al., 2020; A Gururajan et al., 2019). Splenocytes were isolated by flushing the spleen with media using a syringe. The cell suspension was subsequently centrifuged, aspirated and incubated with 1 mL lysis buffer (Sigma, R7757) for 5 min. Then, 10 mL of media was added to dilute the lysis buffer and the cell suspension was poured over a 70 μm strainer, after which it was centrifuged and aspirated. 2 × 10⁶ cells were resuspended in 90 μL staining buffer and split into 2 aliquots for the staining procedure. MLNs were transferred onto a 70 μm strainer and dissociated using the plunger of a 1 mL syringe. The strainer was subsequently rinsed with 10 mL media, and the cell suspension was centrifuged and aspirated; 2 × 10⁶ cells were resuspended in 90 μL staining buffer and split into 2 aliquots for the staining procedure.
For the staining procedure, 5 μL of FcR blocking reagent (Miltenyi) was added to each sample. Samples were subsequently incubated with a mix of antibodies (Supplementary Table 2) for 30 min on ice, after which they were centrifuged, aspirated and fixed using 100 μL 4% PFA for 30 min on ice. Samples were finally centrifuged, aspirated and resuspended in staining buffer for flow cytometric analysis the following day on a BD FACSCalibur. Data were analysed using FlowJo (Version 10).
Statistical analysis
All data are represented as mean ± SEM. Statistical analyses were conducted using SPSS 27 (IBM, USA). Normality was assessed using the Shapiro-Wilk test and equality of variances using Levene's test. Non-parametric data were analysed with the independent-samples Kruskal-Wallis test followed by pairwise comparisons adjusted by the Bonferroni correction for multiple tests, with a 95% confidence interval. Parametric data were analysed using two-way analysis of variance (ANOVA), and pairwise comparisons were assessed using a Tukey HSD adjustment. Time spent in the interaction area, time spent in the corners, movement in the corners, entries in the corners and social novelty were analysed using a general linear model integrating stress, diet and stimulus. Further, we utilized simple main effects to explore the individual group differences in these complex models, adjusted by the Bonferroni correction for multiple tests, as we hypothesized a priori that the stress-only group would be significantly different from the other 3 groups. Statistical significance was set at p ≤ 0.05.
Prebiotic supplementation reverses behavioral impairments but not endocrine effects of social defeat stress in aged animals
Following psychosocial stress, animals were tested in the social interaction test to assess the behavioral response of aged mice to stress. Stress significantly reduced the time spent in the interaction zone in Chow-fed animals (F (1,34) = 5.820, p = 0.021, CTRL-Chow vs Stress-Chow p = 0.005, Supplementary Fig. 1a), which was rescued by FOS-Inulin supplementation (Stress-Chow vs Stress-FOS-Inulin; p = 0.027). An effect of diet was observed in time spent in the corner zones (F (1,33) = 6.650, p = 0.015), and simple main effects showed stressed animals supplemented with FOS-Inulin spent significantly less time in the corners than stressed animals fed with chow (Stress-Chow vs Stress-FOS-Inulin; p = 0.003, Supplementary Fig. 1b). Movement inside the corner zones showed a significant effect of diet (F (1,34) = 4.594, p = 0.039), as demonstrated by Stress-FOS-Inulin animals moving significantly less in the corners than Stress-Chow animals (p = 0.011, Supplementary Fig. 1c). Further, with regards to entries into the corner zone, there was a significant effect of diet (F (1,33) = 5.217, p = 0.029) as FOS-Inulin supplementation in stressed animals significantly reduced entries in the corners (Stress-Chow vs Stress-FOS-Inulin, p = 0.009, Supplementary Fig. 1d). Considering a ratio of the time spent by the experimental animal facing the CD-1/interaction zone, there was an overall interaction effect of stress and diet (F (1,37) = 4.703, p = 0.037) as revealed by an increase in the time facing the CD-1 by stressed animals supplemented with FOS-Inulin when compared to stressed controls (Stress-Chow vs Stress-FOS-Inulin, p = 0.028, Supplementary Fig. 1e). To further understand how our interventions shaped stress-responding in the mice, we created a z-score combining the time spent in the interaction zone when the CD1 is present, time spent in the corner zones in presence and absence of CD1, the ratio of time spent facing the interaction zone, the entries and movement in the corner zones, and the movement in the interaction area. Two-way ANOVA revealed an interaction effect of stress and diet (F (3,37) = 4.958, p = 0.033), as revealed by a significant decrease in the Z-score in the Stress-Chow group compared to the non-stressed Control-Chow group (p = 0.031) and followed by an amelioration of the stress phenotype by FOS-inulin supplementation, as shown by an increase in the FOS-Inulin group in comparison to the Stress-Chow group (p = 0.004, Fig. 2a).
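For clarity, the sketch below illustrates one way such an integrative z-score can be assembled in Python: each read-out is z-scored against the non-stressed chow controls and sign-flipped where lower values indicate more social behavior, then averaged. It is a schematic reconstruction rather than the authors' code, and the column and group names are assumptions.

```python
# Minimal sketch (assumption: mirrors, but is not, the authors' scoring script)
# of an integrative z-score built from several social-interaction read-outs.
import pandas as pd

MEASURES_HIGHER_IS_SOCIAL = ["time_interaction_zone", "facing_ratio", "movement_interaction"]
MEASURES_LOWER_IS_SOCIAL = ["time_corners", "entries_corners", "movement_corners"]

def integrative_z(df, control_mask):
    z_parts = []
    for col in MEASURES_HIGHER_IS_SOCIAL + MEASURES_LOWER_IS_SOCIAL:
        mu, sd = df.loc[control_mask, col].mean(), df.loc[control_mask, col].std()
        z = (df[col] - mu) / sd
        if col in MEASURES_LOWER_IS_SOCIAL:
            z = -z                      # flip sign so that higher always means "more social"
        z_parts.append(z)
    return pd.concat(z_parts, axis=1).mean(axis=1)

# Usage (hypothetical table of per-animal outputs):
# df = pd.read_csv("social_interaction_outputs.csv")
# df["z_score"] = integrative_z(df, df["group"] == "Control-Chow")
```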
To assess the influence of FOS-Inulin supplementation on social behavior in stressed aged animals, mice were tested in the three-chamber apparatus. While no overall significant interaction effect of stress and diet was observed (F (1,36) = 1.364, p = 0.250), there was a significant overall preference for the novel over the familiar mouse (F (1,36) = 14.914, p < 0.001), which was not observed in animals fed with Chow (Control-Chow: p = 0.175; Control-FOS-Inulin: p = 0.006; Stress-Chow: p = 0.328, Stress-FOS-Inulin: p = 0.021, Fig. 2b).
To assess the endocrine response to stress, plasma levels of corticosterone were assessed. Corticosterone in plasma collected at the time of sacrifice showed a significant group effect (H (3) = 15.226, p = 0.002). Corticosterone was elevated in Stress-Chow compared to Control-Chow (H = − 17.167, p. adj = 0.003), which is not reversed by FOS-Inulin supplementation (H = 3.667, p. adj = 1, Fig. 2c).
Flow cytometry in mesenteric lymph nodes and spleen of general immune populations did not reveal any involvement of the peripheral immune system in the FOS-Inulin supplementation in response to stress in aging (Supplementary Figs. 3 and 4).
Fig. 2. -FOS-Inulin supplementation ameliorates the effects of stress in aged mice. a)
Integrative Z-score of social interaction outputs; Stressed animals fed with Chow have a significantly reduced Z-score compared to control animals, which is restored by FOS-Inulin supplementation. b) FOS-Inulin supplemented mice, but not chow-fed animals, spend significantly more time in the novel mouse chamber. c) Plasma corticosterone is higher in aged animals fed with chow when exposed to stress, which is not recovered in stressed animals supplemented with FOS-Inulin. Results presented as mean + standard error of the mean (SEM). n = 9-10 per group. *p < 0.05, **p < 0.01.
Inulin rescues metabolite levels in the prefrontal cortex of aged stressed animals
To understand if gut microbial-derived metabolites could drive the positive prebiotic response to stress in aged mice, metabolomics analysis was performed on the cecum and prefrontal cortex of stressed and non-stressed aged mice. In total, 205 metabolites were identified in the cecum and 105 were detected in the PFC. Diet significantly impacted the cecal metabolome, whereas in the PFC, stress was the main clustering factor (Fig. 4a and b). In the cecum, 139 metabolites were found to be altered by diet, but no stress or interaction effects were found (Supplementary Table 3). In the prefrontal cortex, 8 metabolites were altered by stress or diet after post-hoc correction: 3-Methylhistidine, 4-Hydroxybenzaldehyde, apocynin, 2-hydroxy-3-methylbutyric acid, ethyl sulfate, N-acetylornithine, spermine and trimethylamine N-oxide. Of these, only 4-Hydroxybenzaldehyde and spermine showed an interaction effect of stress and diet (Fig. 4c). 4-Hydroxybenzaldehyde is a naturally occurring benzaldehyde that has been shown to boost antioxidant activity and wound healing (Kang et al., 2017). Despite an interaction effect of stress and diet (β = −0.42, p = 0.003, q = 0.0994), with this metabolite increased in the PFC of Stress-Chow animals (Stress-Chow vs Control-Chow, p = 0.019), FOS-Inulin supplementation did not significantly reverse this increase in stressed aged animals (p = 0.109). Spermine, a natural polyamine reported to modulate autophagy-related age effects (Xu et al., 2020), also presented an interaction effect of stress and diet (β = 0.349, p = 0.004, q = 0.0994): it was decreased in the prefrontal cortex of stressed aged animals (Stress-Chow vs Control-Chow, p = 0.036) and showed a non-significant tendency to be recovered by FOS-Inulin supplementation (Stress-FOS-Inulin vs Stress-Chow, p = 0.072).
Fig. 3. FOS-Inulin supplementation increases pro-inflammatory cytokines in the ileum. a) FOS-Inulin supplementation significantly increases the concentration of IFN-γ in the ileum of stressed aged animals, when compared with stressed animals fed with chow and control animals fed with FOS-Inulin; b) stressed aged animals show a (non-significant) increase in IL-6 concentration in the ileum, which is further inflated by FOS-Inulin supplementation. Results presented as mean + standard error of the mean (SEM). n = 8-10 per group. *p < 0.05, **p < 0.01, ***p < 0.001.
Fig. 4. b) Main effect of stress in the prefrontal cortex of stressed aged mice. c) Boxplots showing the centered log-ratio transformed (clr) abundance of prefrontal cortex metabolites that displayed a significant effect of stress, diet, or an interaction between the two; metabolites altered in the PFC of stressed aged mice were restored in FOS-Inulin-supplemented animals. For the boxplots, boxes represent the limits of the interquartile range, the horizontal line represents the median, and the whiskers represent the full data range. n = 7-10 per group. #p < 0.1, *p < 0.05, **p < 0.01, ***p < 0.001.
In summary, stress mainly altered the concentrations of apocynin and ethyl sulfate in the prefrontal cortex, while FOS-Inulin supplementation mainly changed the concentrations of 3-Methylhistidine, 2-hydroxy-3-methylbutyric acid, and trimethylamine N-oxide; 4-Hydroxybenzaldehyde and spermine showed an interaction effect of stress and diet. This suggests that this dietary intervention modulates the levels of spermine and 4-Hydroxybenzaldehyde in the prefrontal cortex of stressed aged mice, posing a potential new avenue to explore age-dependent stress effects.
Discussion
Some underexplored aspects of aging are the mechanisms underlying the response to stress, and its consequent behavioral outcomes. In this study, we demonstrate a clear interaction between a dietary intervention targeting the microbiome in modulating the stress response in aged mice. In particular, the present study demonstrates that FOS-Inulin supplementation improves social interaction in response to stress in aged mice and regulates spermine and 4-Hydroxybenzaldehyde concentrations in the prefrontal cortex.
In line with previous reports from our group (Scott et al., 2017), aged mice exhibit social novelty deficits in contrast to their young counterparts. In this study, we further observed that stress did not exacerbate social novelty deficits, but that FOS-Inulin supplementation improved overall social recognition in aged mice, thus implying that a prebiotic dietary intervention in aging can mitigate age-dependent behavioral deficits. Social defeat exposure also elevated plasma corticosterone levels; however, prebiotic supplementation did not result in the reversal of the corticosterone levels, suggesting a dissociation between the impact of the prebiotic intervention on the behavioral and physiological responses to stress.
Given the cumulative evidence that the gut microbiome is intertwined with age-related peripheral and neuroinflammation (Boehme et al., 2020;Claesson et al., 2012;Scott et al., 2017), we explored the potential mediation of age-related effects by inflammatory factors to understand if the FOS-Inulin supplementation effects were related to inflammatory factors. In this study, analysis of pro-inflammatory cytokines was focused on the ileum, as this region of the small intestine is important for the sampling of antigens from diet, hence influencing the immune system (Brown and Esterházy, 2021). Contrary to our expectations, FOS-Inulin supplementation was found to increase the concentration of pro-inflammatory cytokines in the small intestine of stressed animals. A few reports have described that FOS and inulin supplementation can increase inflammation in the colon of mice in an immune dysregulated colitis model (Singh et al., 2019). Given that aging induces a low-grade inflammatory profile, and that aging interferes with the resolution of acute inflammation (Arnardottir et al., 2014), it is not completely surprising that the prebiotic intervention could provoke moderate inflammation in the small intestine of these animals. Further, the exacerbation of pro-inflammatory markers by FOS-Inulin supplementation on stressed animals could be a reflection of a 'double hit' effect as stress is a known inducing factor of inflammatory markers (Cruz-Pereira et al., 2020;Marsland et al., 2017). Surprisingly, in peripheral immune structures, namely the mesenteric lymph nodes and spleen, we found no changes to general immune cell populations, suggesting that the behavioral outcomes are not related to extra-intestinal modulation of inflammation by stress or FOS-Inulin (Supplementary Figs. 2 and 3).
It has previously been reported that a FOS-Inulin dietary intervention shapes the gut microbiome in middle-aged mice (Boehme et al., 2020), that prebiotic fibers are known to modulate the gut microbiome (O'Connor et al., 2020), and that the metabolome is intrinsically connected with the gut microbiome (Garza et al., 2020; Valles-Colomer et al., 2019). Metabolomics was performed on the cecum and prefrontal cortex to profile microbial metabolite levels in an intestinal structure that is reshaped by dietary fibers (Drew et al., 2018) and in an important brain region for emotional processing and social behavior that is altered in aging (Franklin et al., 2017; Yan and Rein, 2022). As expected, diet was the main factor shaping the cecal metabolome, consistent with observations that dietary fibers change the gut microbiota in humans (Le Bastard et al., 2020; Vandeputte et al., 2017) and in rodents (Drew et al., 2018; Fuhren et al., 2021). Curiously, in an aged human cohort, despite reshaping of the gut microbiota, inulin intake did not impact immune cell numbers in the peripheral blood (Kiewiet et al., 2021), which aligns with our results (Supplementary Figs. 2 and 3).
In the PFC, 4 metabolites were altered by the prebiotic supplementation alone that may play an important role in the context of aging: 3-methyl-L-histidine, acetylornithine, trimethylamine-N-oxide and 2-hydroxy-3-methylbutyric acid. In dementia patients, 3-methyl-L-histidine has been found to be decreased in the blood and is associated with frailty (Teruya et al., 2021), while its supplementation suppresses glial inflammation and ameliorates neurovascular-unit dysfunction in an aged Alzheimer's disease model (Kaneko et al., 2017); in this study it was increased by the prebiotic supplementation. Curiously, acetylornithine, also positively modulated by prebiotic supplementation in our study, has been found elevated in the serum of dementia patients (Weng et al., 2019), while being reduced in the cerebral cortex of aged mice (Ding, 2021). Trimethylamine-N-oxide, which in this study was reduced by prebiotic supplementation, is increased in both elderly humans and aged mice, and can accelerate brain cellular senescence while enhancing cognitive impairment (Li et al., 2018). Lastly, 2-hydroxy-3-methylbutyric acid was identified as the metabolite responsible for the promotion of intestinal epithelial cell proliferation upon Lactobacillus paracasei probiotic administration (Qiao et al., 2022), and was increased by FOS-Inulin supplementation in the prefrontal cortex. While it is remarkable that the levels of these metabolites in the prefrontal cortex are affected by prebiotic supplementation and have previously been reported to be involved in aging, the functional consequences of these alterations, as well as the underlying mechanisms, remain unclear.
In our analysis, we found that stress alone independently affected only two metabolites in the prefrontal cortex of aged animals after posthoc correction: apocynin and ethyl sulfate. Apocynin suppresses reactive oxygen species, partially reversing the aging process in mesenchymal stem cells and boosts osteogenesis (Sun et al., 2015), and in this study is found increased in the prefrontal cortex of aged animals exposed to stress. Ethyl sulfate is a metabolite classically linked to ethanol degradation (Helander and Beck, 2005), but to the best of our knowledge this is still underexplored in the context of brain metabolome.
Here, we further show that the combination of prebiotic dietary intervention and social defeat stress in aged mice, is associated with alterations in the levels/abundance of two metabolites in the prefrontal cortex: spermine and 4-Hydroxybenzaldehyde. Spermidine, the precursor of spermine, has been found to extend the lifespan by promoting autophagy (Aman et al., 2021). Spermine and spermidine delay aging-related cognitive impairments by enhancing autophagy and mitochondrial function in the brain of a mouse model of accelerated aging (Xu et al., 2020). Further, spermine and spermidine levels have been found to be enhanced in the blood of centenarians (Pucciarelli et al., 2012). Here, we found that levels of spermine in the aged prefrontal cortex are reduced by stress exposure, and that the prebiotic intervention recovered the spermine levels. Spermine has been shown to prevent LPS-induced memory deficits (Frühauf-Perez et al., 2018), and has been found to be slightly reduced in the brains of rats after restraint stress (Hayashi et al., 2004). Furthermore, spermine is also an NMDA receptor agonist, and has been shown to be increased in the PFC of aged rats (Gupta et al., 2012). Given that aging has been linked to reduced NMDA receptor function (Gramuntell et al., 2021), it is feasible to associate elevated CNS spermine levels with age-dependent cognitive decline. Taken together, this could potentially suggest that stress in aged mice may increase NMDA receptor function potentially through the reduction of spermine concentration, which in turn can result in excitotoxicity, in a similar fashion to what is observed in Alzheimer's disease (Wang and Reddy, 2017). Further studies are needed to dissect this hypothesis.
4-Hydroxybenzaldehyde is a relatively underexplored compound isolated from Gastrodia elata, a common Chinese herbal medicine, that has been shown to boost acute wound healing and is altered in the plasma of middle-aged mice of a short-longevity model (Kadota et al., 2021). In this study, FOS-Inulin reduced the levels of this metabolite upon stress exposure. It is challenging to draw objective conclusions from this observation, as the functional properties of this particular metabolite remain mostly unexplored; however, it could represent an interesting target for future experiments (or dietary modulation), as there are still a considerable number of metabolites that remain unidentified (Bar et al., 2020). Taken together with the relative novelty of 4-Hydroxybenzaldehyde in this context, further studies dissecting the involvement of these metabolites in age-dependent effects will be crucial.
Conclusions
In conclusion, this study provides evidence that prebiotic supplementation with FOS-Inulin ameliorates the stress response in aged mice at the behavioral and metabolic level. This was not associated with modulation of the immune system or HPA axis but rather with the regulation of key age-associated metabolites, spermine and 4-Hydroxybenzaldehyde, in the PFC. Further studies targeting spermine and 4-Hydroxybenzaldehyde and their metabolic pathways will be essential to uncover the specific effects and mechanisms underpinning the impact of these metabolites on age-related, stress-induced behavioral phenotypes.
Ethics approval and consent to participate
Animal experiments were conducted with the approval and oversight of the Animal Experimentation Ethics Committee of University College Cork.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Funding
APC Microbiome Ireland is a research center funded by Science Foundation Ireland (SFI/12/RC/2273_P2). Prof. Cryan is funded by the Science Foundation Ireland (SFI/12/RC/2273_P2), Saks Kavanaugh Foundation and Swiss National Science Foundation project CRSII5_186,346/NMS 2068, and has received research funding from 4D Pharma, Cremo, Dupont, Mead Johnson, Nutricia, and Pharmavite; has been an invited speaker at meetings organized by Alimentary Health, Alkermes, Ordesa, and Yakult; and has served as a consultant for Alkermes and Nestle. Prof. Clarke has received honoraria from Janssen, Probi, and Apsen as an invited speaker; is in receipt of research funding from Pharmavite and Fonterra; and is a paid consultant for Yakult, Zentiva and Heel pharmaceuticals. Ms. Cruz-Pereira was also funded by the HEA Covid-19 Cost Extension Fund.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
Data will be made available on request. | 8,762.6 | 2022-11-01T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Online Detection System for Wheat Machine Harvesting Impurity Rate Based on DeepLabV3+
Wheat, one of the most important food crops in the world, is usually harvested mechanically by combine harvesters. The impurity rate is one of the most important indicators of the quality of wheat obtained by mechanized harvesting. To realize the online detection of the impurity rate in the mechanized harvesting process of wheat, a vision system based on the DeepLabV3+ model of deep learning for identifying and segmenting wheat grains and impurities was designed in this study. The DeepLabV3+ model construction considered the four backbones of MobileNetV2, Xception-65, ResNet-50, and ResNet-101 for training. The optimal DeepLabV3+ model was determined through the accuracy rate, comprehensive evaluation index, and average intersection ratio. On this basis, an online detection method of measuring the wheat impurity rate in mechanized harvesting based on image information was constructed. The model realized the online detection of the wheat impurity rate. The test results showed that ResNet-50 had the best recognition and segmentation performance; the accuracy rate of grain identification was 86.86%; the comprehensive evaluation index was 83.63%; the intersection ratio was 0.7186; the accuracy rate of impurity identification was 89.91%; the comprehensive evaluation index was 87.18%; the intersection ratio was 0.7717; and the average intersection ratio was 0.7457. In terms of speed, ResNet-50 had a fast segmentation speed of 256 ms per image. Therefore, in this study, ResNet-50 was selected as the backbone network for DeepLabV3+ to carry out the identification and segmentation of mechanically harvested wheat grains and impurity components. Based on the manual inspection results, the maximum absolute error of the device impurity rate detection in the bench test was 0.2%, and the largest relative error was 17.34%; the maximum absolute error of the device impurity rate detection in the field test was 0.06%; and the largest relative error was 13.78%. This study provides a real-time method for impurity rate measurement in wheat mechanized harvesting.
Background
Wheat is one of the most important food crops in the world. The sown area and output of wheat rank first among all food crops. One third of the world's population depends on wheat as their staple food. At present, the harvesting method for wheat is generally mechanized harvesting using a combine harvester at the mature stage. By 2021, the level of mechanized wheat harvesting in China had reached 97.49% [1]. The impurity rate is an important indicator of the quality of wheat mechanized harvesting. The impurity rate is the percentage of impurities in wheat harvested mechanically. Impurities refer to non-grain substances such as wheat straw and awn. However, if the parameters of the combine harvester are improperly set, the impurity content of mechanically harvested wheat will be too high [2]. Existing combine harvesters generally lack an online detection system for impurity rate and thus cannot provide drivers with real-time harvesting information; this lack of information affects the quality of mechanically harvested wheat [3]. Therefore, it is important to realize online detection of the impurity rate of wheat in mechanized harvesting.
The common methods for the determination of the impurity rate in wheat mechanized harvesting are manual visual inspection and sampling inspection. Manual visual inspection involves the driver observing the harvested grains in the granary through the observation window on the combine harvester and qualitatively judging the impurity rate of the wheat. For manual sampling and testing, it is necessary to stop the machine to sample from the grain tank and manually separate the grains and impurities in the wheat samples to obtain the impurity rate after weighing. However, these two methods rely on human judgment, and hence are error-prone, time-consuming, and unable to provide real-time impurity rate information. It is important to quickly and accurately determine the impurity rate of mechanically harvested wheat. In recent years, machine vision and image processing technology have played key roles in the grain quality inspection of soybeans for impurities and broken grains [4], detection of rice impurities and broken grains [5], detection of impurities in seed cotton [6], and classification of wheat varieties [7,8]. The technology has the advantages of rapid detection and online measurement, factors that make up for the shortcomings of traditional detection methods. However, machine vision and image processing algorithms have problems such as severe over-segmentation, segmentation parameters relying on human experience, and requirement of large image sets. The time required for detection makes it difficult to meet the actual requirements of mechanized production [9].
Literature Review
With the rapid development of deep learning, machine vision technology integrated with deep learning is a rapidly emerging nondestructive detection method that not only contains the image information of the target to be detected but can also obtain richer feature information from a small data set. This improves the segmentation accuracy of the target image to be detected [10]. Machine vision technology incorporating deep learning has been widely used in the examination of grain target features such as grain damage [11], wheat high-throughput yield phenotyping [12], wheat variety identification [13], and individual tree detection and species classification [14]. Semantic segmentation is a very important direction, as it enables pixel-level classification of images. DeepLabV3+ is a typical semantic segmentation network. In order to integrate multi-scale information, it uses atrous convolution so that the resolution of the features extracted by the encoder can be arbitrarily controlled, balancing accuracy and time consumption.
At present, DeepLabV3+ is widely used in grain target detection. For example, Zhao et al. realized the segmentation and counting of rapeseed [15]; Zhang et al. realized the automatic extraction of wheat lodging area [16]; Bhagat et al. realized the plant leaf segmentation and counting [17]; and Yang et al. achieved the efficient segmentation of soybean planting areas [18].
Although the DeepLabV3+ technology is now widely used, there are no relevant reports concerning the application of DeepLabV3+ to the detection of the impurity rate of mechanically harvested wheat. Shen et al. achieved fast and effective detection of impurities in wheat based on terahertz spectral imaging and a convolutional neural network, but that study used a convolutional neural network to extract data and information on sample composition characteristics [19]. Chen et al. used the least squares support vector machine to construct an inversion model of the wheat sample impurity rate based on different indicators, but this technology cannot be applied to the detection of wheat impurities in the mechanized harvesting process [20]. In the above reports, the detection of wheat impurities was completed in the laboratory, and there remained a certain gap between laboratory and practical application.
DeepLabV3+ has both encoder and decoder modules. In the encoder module, feature maps are obtained through the backbone feature extraction network. MobileNetV2, Xception, and ResNet are often used as backbone feature extraction networks. The effect of each backbone feature extraction varies according to different detection targets. Wu et al. found that using ResNet-101 to segment abnormal leaves of hydroponic lettuce was the best, while ResNet-50 demonstrated a high segmentation speed [21]. Sun et al. constructed a band information enhancement (BIE) module and proposed a DeepLabV3+ grape-growing area identification method with enhanced band information [22]. This method segmented the grape-growing area more completely and showed a good edge recognition effect. Mu et al. found that the use of ResNet-101 could accurately identify rice lodging, and the accuracy of rice lodging image recognition was 0.99 [23]. Dai et al. found that the use of MobileNetV2 could quickly and effectively monitor the occurrence of wheat scab, and the average accuracy of the model was 0.9692 [24]. Based on the above analysis, DeepLabv3+ could in theory encode rich contextual information and use a simple and effective decoder module to recover object boundaries; this could capture multi-scale information and effectively utilize the detailed information of the image and the spatial correlation of pixels in a large range. Selecting different backbone feature extraction networks resulted in detailed differences in the performance of DeepLabv3+. These studies have provided new ideas for the application of DeepLabv3+ regarding the online detection of wheat impurity rate in mechanized harvesting.
Contributions
This study explored the feasibility of detecting the impurity content of mechanically harvested wheat based on machine vision and deep learning technology. MobileNetV2, Xception-65, ResNet-50, and ResNet-101 were adopted as candidate backbone feature extraction networks for the DeepLabv3+ model. Through comparisons of the modeling effects of the different backbone feature extraction networks in terms of the comprehensive evaluation index, average intersection ratio, and image processing speed, the optimal recognition and segmentation model was determined. Finally, the optimal DeepLabv3+ model was used to construct an online detection algorithm for the mechanized harvesting impurity rate based on image information, and the feasibility and accuracy of the algorithm were verified by experiments.
Online Detection Device for Wheat Impurity Rate
We developed an online device for detection of the impurity rate of mechanically harvested wheat. The device included an Ubuntu 20.04 host, a 12-V DC power supply, an industrial camera, a servo, and other parts ( Figure 1). The device contained an industrial camera (LRCP10230, SinoVision, Guangzhou, China) to acquire images of wheat samples. The industrial camera was set facing the photo window of the sampling bin, with a lens with a focal length of 12 mm, and the lens was 105 mm away from the transparent plexiglass. Under the LED visual light source, the RGB (red, green, blue) wheat sample images captured by the industrial camera had a resolution of 1280 pixels × 1024 pixels and were saved in JPEG format. The device also included two DC servos (LX-20, Magic Technology, Shenzhen, China). By controlling the forward and reverse rotation of the DC servos, a telescopic plate could be retracted or extended to realize the dynamic updating of the wheat in the sampling bin.
The device requires an Ubuntu 20.04 host (Tsinghua Tongfang T45PRO laptop, Tongfang Co., Ltd., Wuxi, China, Intel ® Core ® i7-6500U processor, 16 GB DDR4 3200 MHz memory, and 6 GB Nvidia GeForce RTX3060 graphics card). The Ubuntu 20.04 host was primarily used to run the online detection algorithm of wheat impurity content. The machine acquired images of wheat samples via USB-controlled industrial cameras. The recognition and segmentation of wheat samples was realized by running the DeepLabv3+ model. After obtaining the number of pixels of grains and impurities in the image, the impurity rate of the sample to be tested was calculated through a quantitative model.
The Network Architecture of DeepLabV3+
The DeepLabv3+ network was proposed in 2018 [25] and was an improvement on the original DeepLabv3 network. This version is currently the best performing network in the DeepLab network series. The network structure was divided into an encoder and a decoder, as shown in Figure 2.
In the upper part of the encoder (Figure 2), MobileNet, ResNet, and Xception could be selected as the backbone network. In order to make the extracted features have a larger receptive field, normal convolution was replaced with dilated convolution in the last coding block. The atrous spatial pyramid pooling model composed of atrous convolution with different expansion coefficients was used to encode the image context information and splicing and fusion. Then, we used 1 × 1 convolution to adjust the number of output channels to improve the generalization ability of network feature extraction.
(Figure 1 components: 1, computer; 2, 12-V DC power supply; 3, system bus; 4, sliders; 5, telescopic plates; 6, bases; 7, rail seats; 8, rails; 9, levers; 10, DC servos; 11, data bus connectors; 12, industrial cameras; 13, industrial camera fixing bracket; 14, shell; 15, embedded data processing module; 16, LED visual light source; 17, transparent plexiglass; 18, photo window; 19, sampling chamber.)
The lower part of the decoder used bilinear interpolation to upsample the feature tensor output from the encoder by a factor of 4. Then, it was spliced with the feature map of the corresponding level of the backbone network, and the detailed information carried by the shallow features was captured by the cross-layer connection in order to enrich the semantic information and detailed information of the image. Finally, the fused feature map was upsampled four times to obtain a semantic segmentation map with the same size as the original image.
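The paper does not state which software framework was used to build the model, so the following is only a minimal sketch, assuming PyTorch/torchvision, of how a three-class (background, grain, impurity) DeepLabV3 model with a ResNet-50 backbone could be instantiated; argument names such as `weights` vary between torchvision versions, and torchvision does not ship an Xception backbone, so this is not a reconstruction of the authors' exact code.

```python
# Hypothetical sketch: a 3-class (background / grain / impurity) DeepLabV3
# segmentation model with a ResNet-50 backbone via torchvision. The library
# choice and keyword names are assumptions, not taken from the paper.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 3  # 0 = background, 1 = grain, 2 = impurity

model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
model.eval()

# Dummy 512x512 RGB batch, matching the input size used in this study.
x = torch.randn(1, 3, 512, 512)
with torch.no_grad():
    logits = model(x)["out"]      # shape: (1, 3, 512, 512)
    pred = logits.argmax(dim=1)   # per-pixel class labels
print(pred.shape)                 # torch.Size([1, 512, 512])
```

The per-pixel argmax over the three class channels yields the label map from which grain and impurity pixels can later be counted.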
Data Annotation and Augmentation
In order to train a wheat component segmentation model for mechanized harvesting based on DeepLabV3+, a total of 500 wheat images with a resolution of 1280 pixels × 1024 pixels were collected as the original dataset in this study.
We used LabelMe (Version No. 3.16.7, Massachusetts Institute of Technology, Cambridge, MA, USA) to label the wheat image dataset, to label wheat grains and impurities with polygons, and then assigned labels, where the background was labeled as 0, grains were labeled as 1, and impurities were labeled as 2. In order to reduce the computational complexity of the model and improve the detection time, each image was scaled to 512 pixels × 512 pixels through bilinear interpolation. Examples of RGB images and their corresponding label images are shown in Figures 3a and 3b, respectively.
To avoid unbalanced performance evaluation on the test set, the dataset was randomly split into a training set (350 images), a validation set (50 images), and a test set (100 images), i.e., in a ratio of 7:1:2. Data augmentation played a crucial role in the training of deep learning models. To improve the robustness of the model, data augmentation was performed on the limited dataset. Images in the training and validation sets were subjected to 90° and 270° counterclockwise rotation, 0.6× and 1.8× image scaling, and image mirroring on the horizontal and vertical axes. After data augmentation, the training set consisted of 2450 images, and the validation set consisted of 350 images. The test set was unaugmented and consisted of 100 images. In the label images (Figure 3b), black polygons, green polygons, and beige polygons represent areas marked as "background," "grain," and "impurity," respectively.
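A minimal sketch of the augmentation scheme described above is given below, assuming Pillow for the image handling; the function name is illustrative and nearest-neighbour resampling is assumed for the label mask so that the class IDs 0/1/2 are preserved.

```python
# Sketch of the offline augmentation described above (90/270 degree rotations,
# 0.6x and 1.8x scaling, horizontal and vertical mirroring), applied
# identically to an RGB image and its label mask.
from PIL import Image

def augment_pair(image: Image.Image, mask: Image.Image):
    """Return the six augmented (image, mask) variants used for training."""
    pairs = []
    for angle in (90, 270):  # counterclockwise rotation
        pairs.append((image.rotate(angle, expand=True),
                      mask.rotate(angle, expand=True)))
    for factor in (0.6, 1.8):  # image scaling
        size = (int(image.width * factor), int(image.height * factor))
        pairs.append((image.resize(size, Image.BILINEAR),
                      mask.resize(size, Image.NEAREST)))
    pairs.append((image.transpose(Image.FLIP_LEFT_RIGHT),  # horizontal mirror
                  mask.transpose(Image.FLIP_LEFT_RIGHT)))
    pairs.append((image.transpose(Image.FLIP_TOP_BOTTOM),  # vertical mirror
                  mask.transpose(Image.FLIP_TOP_BOTTOM)))
    return pairs
```

With six variants per original image, 350 training images yield 2450 and 50 validation images yield 350, which matches the counts reported above.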
Network Training
DeepLabV3+ models were trained and tested using an Ubuntu 20.04 host (Dell Precision 7920 Tower graphics workstation, Dell, Xiamen, China) with GPU (Nvidia Quadro RTX5000 16 GB GPU), 26-core CPU (dual Intel ® Xeon ® Gold 6230R, 4.00 GHz) and 128 GB DDR4 3200 MHz memory. In this study, to train the DeepLabV3+ model on the wheat image dataset, the weights of the pretrained model were used to initialize and fine-tune the model through further training. These initial weights were obtained from pretrained models on the PASCAL VOC 2007 dataset [26].
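As a rough illustration of the fine-tuning step, the sketch below trains the three-class model with a pixel-wise cross-entropy loss. The optimizer, learning rate, batch size, epoch count, and the dummy dataset are all assumptions; the paper only states that pretrained weights were used for initialization and then fine-tuned.

```python
# Hedged sketch of fine-tuning a 3-class DeepLabV3 model on wheat images.
# The dummy tensors stand in for the annotated LabelMe dataset; optimizer
# settings and epoch count are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=3)

# Placeholder data: random 512x512 images and integer masks in {0, 1, 2}.
images = torch.randn(4, 3, 512, 512)
masks = torch.randint(0, 3, (4, 512, 512))
train_loader = DataLoader(TensorDataset(images, masks), batch_size=2)

criterion = nn.CrossEntropyLoss()           # pixel-wise 3-class loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(1):                      # epoch count is illustrative
    for batch_images, batch_masks in train_loader:
        optimizer.zero_grad()
        logits = model(batch_images)["out"]  # (N, 3, H, W)
        loss = criterion(logits, batch_masks)
        loss.backward()
        optimizer.step()
```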
Bench Test Design
In order to test the performance of the online detection device for the impurity rate of wheat harvested by mechanization, an indoor test bench was constructed in this study. The test bench consisted of a rack, a grain tank, a scraper elevator, a motor, and a wheat impurity rate detection device, as shown in Figure 4.
When the bench was working, the motor drove the auger to rotate, and the wheat in the grain tank was transferred to the scraper elevator. There was a hopper at the top of the elevator. During the process of dropping the wheat from the hopper into the grain tank, part of the wheat entered the sampling bin of the wheat impurity rate detection device. After the detection device completed the detection of the wheat samples, the wheat in the device fell back into the grain tank.
A total of three batches of wheat samples were prepared for the bench test, and repeated tests were carried out. The wheat was collected from an experimental field in Daba Village, Yongchang County, Jinchang City, Gansu Province. The wheat variety was Longchun 39; the moisture content was 12.4%, and the thousand-kernel weight was 46.87 g. When testing the impurity rate of each batch of wheat, referring to DG-T 014-2019 "Grain Combine Harvester", three wheat samples (500 g each) were randomly selected manually to estimate the impurity rate of the samples. Then, the test bench motor was run, and the wheat impurity rate detection device dynamically sampled and detected the wheat 30 times and automatically recorded the test data. Finally, the results of the wheat impurity rate detection device and manual detection were analyzed and compared, and the online detection effect of the wheat impurity rate based on the DeepLabV3+ model was verified.
Field Trial Design
In order to verify the detection effect of the mechanized harvesting wheat impurity rate online detection device during the field harvesting process, we installed the device on a combine harvester (Wode Ruilong Zhihang version combine harvester, Model 4LZ-7.0EN(Q)) and carried out field test experiments. The study site was an experimental wheat field in Daba Village, Yongchang County, Jinchang City, Gansu Province. The wheat variety was Longchun 39; the moisture content was 12.3%, and the thousand-kernel weight was 47.05 g. The test date was 24 July 2022.
This field test comprised a repeated test of three trips, with a single trip length of 200 m and an operating speed of 4 km/h. The test site is shown in Figure 5. The impurity rate detection device was installed below the grain outlet of the combine harvester, connected to the notebook through a data bus, and powered by a 12-V DC battery.
During each test, the online device automatically detected the real-time impurity rate of the wheat during the harvesting operation and recorded the test data. Then, the harvester was stopped to unload the grain; the wheat in the grain tank was emptied; three random samples were taken manually; and the impurity rate of the wheat in this trip was obtained by testing. Finally, the test data were used to analyze the performance of the online detection device for estimating the wheat impurity rate.
Network Recognition and Segmentation Performance Evaluation Index
In this study, the precision rate P, the recall rate R, the comprehensive evaluation index F1, the intersection ratio F_IOU, the average intersection ratio F_MIOU, and the average processing speed I_v for a machine-harvested wheat sample image were used as evaluation indicators of the image recognition and classification results of the different models. They were calculated as follows:

P = T_P / (T_P + F_P), (1)
R = T_P / (T_P + F_N), (2)
F1 = 2 × P × R / (P + R), (3)
F_IOU = T_P / (T_P + F_P + F_N) and F_MIOU = (1/n) Σ F_IOU, (4)

where P represents the precision rate; R represents the recall rate; F1 represents the comprehensive evaluation index; T_P represents the number of pixels of a class correctly predicted as that class; F_P represents the number of pixels of other classes wrongly predicted as that class; F_N represents the number of pixels of that class wrongly predicted as other classes; n represents the number of classes considered; F_IOU represents the intersection ratio; F_MIOU represents the average intersection ratio over the classes; and I_v represents the average processing speed for a machine-harvested wheat sample image, expressed as the time per image in ms.
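The indices above can be computed directly from the predicted and ground-truth label maps; the sketch below is an illustrative NumPy implementation. Whether the background class is included in the averaged IoU is an assumption here (the reported F_MIOU values are consistent with averaging over the grain and impurity classes only).

```python
# Illustrative implementation of the evaluation indices defined above
# (precision, recall, F1, per-class IoU, mean IoU) from two integer label
# maps with values in {0, 1, 2}.
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, classes=(1, 2)):
    per_class = {}
    ious = []
    for c in classes:
        tp = np.sum((pred == c) & (truth == c))
        fp = np.sum((pred == c) & (truth != c))
        fn = np.sum((pred != c) & (truth == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
        per_class[c] = {"P": precision, "R": recall, "F1": f1, "IOU": iou}
        ious.append(iou)
    return per_class, float(np.mean(ious))  # (per-class indices, mean IoU)
```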
Performance Evaluation of Wheat Impurity Content Detection Based on Image Information
In the existing methods for detecting the impurity rate of a wheat combine harvester, the impurity rate is the percentage of the sample mass made up by the mass of impurities. According to the existing measurement methods, a quantification model of the pixel-based impurity rate was formulated. The calculation formulas were:

P_cz = (w_z / w) × 100%,
P_z = T_z / (∂ × T_w + T_z) × 100%,

where P_cz represents the manually measured impurity rate in percent; w_z represents the mass of non-grain substances in the manually sampled samples in grams; w represents the mass of the manually sampled wheat samples in grams; P_z represents the pixel-based impurity rate in percent; T_w represents the number of grain pixels in the predicted image; T_z represents the number of impurity pixels in the predicted image; and ∂ represents the ratio of the average mass of grains to the average mass of impurities per 1000 pixels. Under laboratory conditions, the value of ∂ was 11.8906 by manual calibration. The coefficient of variation, the absolute error, and the relative error between the average value of the system detection and the manual detection results were used to evaluate the effect of online monitoring of the wheat machine-harvesting impurity rate based on DeepLabV3+. The calculation formulas were:

R_az = |P_Sz - P_Mz|,
R_rz = |P_Sz - P_Mz| / P_Mz × 100%,
R_Scv = (σ_S / P_Sz) × 100% and R_Mcv = (σ_M / P_Mz) × 100%,

where P_Sz represents the average impurity rate of the samples detected by the system in percent; P_Mz represents the average impurity rate of the samples detected manually in percent; R_az represents the absolute error of the impurity rate in percent; R_rz represents the relative error of the impurity rate in percent; R_Scv represents the coefficient of variation of the device detection values in percent; R_Mcv represents the coefficient of variation of the manual detection values in percent; and σ_S and σ_M are the corresponding standard deviations of the device and manual detection values.
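A compact sketch of this quantification step is given below. The algebraic form of the pixel-based model is inferred from the variable definitions above (it treats ∂ as the grain-to-impurity mass ratio per equal pixel count) and should be read as an assumption rather than the authors' exact implementation.

```python
# Sketch of the pixel-based impurity-rate model and the comparison
# statistics, following the variable definitions above.
import numpy as np

ALPHA = 11.8906  # calibrated grain-to-impurity mass ratio per 1000 pixels

def impurity_rate_from_pixels(grain_pixels: int, impurity_pixels: int,
                              alpha: float = ALPHA) -> float:
    """Estimated impurity rate (%) from segmented pixel counts."""
    grain_mass = alpha * grain_pixels       # relative grain mass
    impurity_mass = float(impurity_pixels)  # relative impurity mass
    return 100.0 * impurity_mass / (grain_mass + impurity_mass)

def comparison_stats(device_rates, manual_rates):
    """Absolute/relative error of the means and coefficients of variation (%)."""
    device_rates = np.asarray(device_rates, dtype=float)
    manual_rates = np.asarray(manual_rates, dtype=float)
    p_s, p_m = device_rates.mean(), manual_rates.mean()
    abs_err = abs(p_s - p_m)
    rel_err = 100.0 * abs_err / p_m
    cv_device = 100.0 * device_rates.std(ddof=1) / p_s
    cv_manual = 100.0 * manual_rates.std(ddof=1) / p_m
    return abs_err, rel_err, cv_device, cv_manual
```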
In terms of speed, MobileNetV2 required about 234 ms to segment the grain and impurities in an image with a resolution of 512 pixels × 512 pixels; this was the fastest among the four backbones. There was no significant difference in speed between ResNet-50 and ResNet-101, at 256 ms and 261 ms, respectively. Xception-65 demonstrated the slowest image processing at 268 ms. Moreover, although ResNet-50 was not the fastest (256 ms/image), it was still the best choice considering F1 and F_MIOU.
In terms of F_IOU, the ResNet-50-based DeepLabV3+ model demonstrated better performance than the other three models, especially for the grain, as shown in Table 1. The white boxed area in Figure 6 shows the difference in the segmentation results obtained with the four backbones when segmenting impurities. MobileNetV2 and ResNet-101 only segmented some irrelevant background without any useful information. Xception-65 outperformed MobileNetV2 and ResNet-101 but still could not fully identify impurities. ResNet-50 outperformed the other models in impurity identification and segmentation and could effectively identify and segment most impurities. Therefore, in this study ResNet-50 was selected as the backbone network of DeepLabV3+ to carry out the identification and segmentation of mechanically harvested wheat grain and impurity components. On this basis, the online detection of the wheat impurity rate based on image information was realized.
ResNet-50 Online Recognition and Segmentation Effect Analysis
As shown in Figure 7, a detection image was randomly selected; the wheat grain and impurity components in the image were manually marked, and the recognition effects of the online detection device for wheat impurity rate during the bench test and field test were analyzed. Based on manual annotation, we found that ResNet-50 could segment most impurities such as straw and wheat husks. At the same time, the performance of ResNet-50 for the identification and segmentation of impurities was better than that for grain.
As shown in Table 2, in the bench test, the P value of the ResNet-50 model grain identification was 96.25%; the R value was 58.88%; the F1 value was 73.06%, and the F_IOU value was 0.5756. The P of the ResNet-50 model impurity identification was 93.40%; R was 69.73%; F1 was 75.37%; F_IOU was 0.6646, and the F_MIOU was 0.6201. In the field test, the P of the ResNet-50 model grain identification was 99.00%; the R was 51.62%; the F1 was 67.86%, and the F_IOU was 0.6646. The P of the ResNet-50 model impurity identification was 88.71%; R was 83.66%; F1 was 86.11%; F_IOU was 0.7561, and the F_MIOU was 0.7104. Whether it was a bench test or a field test, the model's recognition and segmentation effects on impurities were better than those for grain.
Analysis of the Detection Effect of Impurity Rate
During the test, the online monitoring device for wheat impurity rate based on the DeepLabV3+ model worked normally, realizing the dynamic online detection of wheat samples. The test results are shown in Figure 8 and Table 3. During the field test, the maximum value of the wheat impurity rate detected by the device was 1.78%; the minimum value was 0.33%, and the average value was 1.11%. The maximum value of the artificial detection of wheat impurity rate was 1.78%; the minimum value was 0.34%, and the average value was 1.12%. Compared with the manual detection results, the maximum absolute error of device detection was 0.06%, and the maximum relative error was 13.78%. During the bench test, the maximum value of the wheat impurity rate detected by the device was 1.56%; the minimum value was 0.13%, and the average value was 0.95%. The maximum value of the artificial detection of wheat impurity rate was 1.32%; the minimum value was 0.79%, and the average value was 1.04%. Compared with the manual detection results, the maximum absolute error of device detection was 0.2%, and the maximum relative error was 17.34%.
In the bench test, the maximum value of the coefficient of variation of the device detection results was 35.24%, and the minimum value was 31.79%; the maximum value of the coefficient of variation of the manual detection results was 17.39%, and the minimum value was 11.42%. In the field test, the maximum value of the coefficient of variation of the device detection results was 46.31%, and the minimum value was 33.43%; the maximum value of the coefficient of variation of the manual detection results was 28.63%, and the minimum value was 21.51%. Whether in the field test or the bench test, the results from the device fluctuated significantly. This was mainly because the detection device calculated the real-time impurity rate by dynamically capturing the sample image and analyzing single-layer image information. When the impurities in the image captured by the detection device occupied a large area, the detected impurity rate would be relatively large; when they occupied a small area, the detected impurity rate would be relatively small.
Although there was a certain difference in the numerical values of the device and manual detection results, the results of the two detection methods showed that the impurity rate of the wheat in the actual operation process was less than 2%, and the impurity rate of wheat met the national standard. Therefore, the two detection methods were consistent in the qualitative identification of whether the wheat impurity rate met the national standard. It could be seen that the detection results of the device could objectively reflect the actual working conditions of the combine harvester, and the device provided technical support for the driver to grasp the working conditions of the combine harvester in real time.
Conclusions
In order to realize the online detection of wheat impurity content during mechanized harvesting, an online detection device for the wheat impurity rate based on deep learning was designed. In this study, a segmentation model for mechanized-harvesting wheat grain and impurity identification with four backbones was developed based on DeepLabV3+. The optimal backbone network was determined by indicators such as precision rate, recall rate, comprehensive evaluation index, intersection ratio, and average intersection ratio. On this basis, a quantitative model of wheat impurity content based on image information was established to realize the online detection of wheat impurity content. The results showed that the DeepLabV3+ model with ResNet-50 achieved the highest F1 on grain and impurity recognition and segmentation, at 83.63% and 87.18%, respectively. The F_MIOU value of ResNet-50 was 0.0608, 0.0397, and 0.0383 higher than those of MobileNetV2, Xception-65, and ResNet-101, respectively. Therefore, in this study, ResNet-50 was selected as the backbone network of DeepLabV3+ to carry out the identification and segmentation of mechanized-harvesting wheat grain and impurity components. In terms of speed, it took about 256 ms for ResNet-50 to segment an image with a resolution of 512 × 512 pixels using an Nvidia Quadro RTX5000 16 GB GPU. Based on the manual detection results, the maximum absolute error of the device detection during the bench test was 0.2%, and the maximum relative error was 17.34%; the maximum absolute error of the device detection during the field test was 0.06%, and the maximum relative error was 13.78%. Therefore, the mechanized wheat impurity rate detection model could meet the needs of actual production. This study could be used to help wheat farmers grasp the operating performance of a combine harvester in real time, thereby effectively improving the quality of mechanized wheat harvesting.
In the future, we will focus on the following work: first, we will collect images of different varieties of mechanically harvested wheat to enrich the dataset; second, we will improve the DeepLabV3+ network structure to reduce the problems of missed segmentation and over-segmentation. | 8,797.6 | 2022-10-01T00:00:00.000 | [
"Agricultural and Food Sciences",
"Computer Science",
"Engineering"
] |
Experimental ion mobility measurements in Ne-N2
Data on ion mobility is important to improve the performance of large volume gaseous detectors, such as the ALICE TPC or in the NEXT experiment. In the present work the method, experimental setup and results for the ion mobility measurements in Ne-N2 mixtures are presented. The results for this mixture show the presence of two peaks for different gas ratios of Ne-N2, low reduced electric fields, E/N, 10–20 Td (2.4–4.8 kV·cm−1·bar−1), low pressures 6–8 Torr (8–10.6 mbar) and at room temperature.
Introduction
Measuring the mobility of ions in gases is relevant in several areas from physics to chemistry, e.g. in gaseous radiation detectors modelling and in the understanding of the pulse shape formation [1,2], and also in IMS (Ion Mobility Spectrometry) a technique used for the detection of narcotics and explosives [3]. In order to fully understand and model these detectors it is important to have detailed information on the transport properties of ions.
Ion mobility
Under a weak and uniform electric field a group of ions will eventually reach a steady state characterized by a drift velocity [3], v_d, expressed by

v_d = K E, (1.1)

where K is the mobility of the ions, expressed in units of cm²·V⁻¹·s⁻¹, and E is the intensity of the drift electric field. The ion mobility K is normally expressed in terms of the reduced mobility K_0,

K_0 = K (N / N_0), (1.2)

where N is the gas number density and N_0 is the Loschmidt number (N_0 = 2.6867 × 10¹⁹ cm⁻³).
The mobility values can be presented as a function of the reduced electric field E/N in units of Townsend (1 Td=10 −17 V·cm 2 ).
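As a small worked illustration of eqs. (1.1)–(1.2), the helper below converts a measured drift time over a known distance into a mobility and reduces it to K_0 through the gas number density. The drift length, field, and timing values passed in the example are placeholders, not measured values from this work; the unit-conversion constants are standard (1 Torr = 1333.22 dyn·cm⁻² in CGS).

```python
# Illustrative helper: mobility from drift time and distance, reduced to K0.
N0 = 2.6867e19      # Loschmidt number, cm^-3
KB = 1.380649e-16   # Boltzmann constant, erg/K (CGS)

def reduced_mobility(drift_cm, drift_time_s, field_V_per_cm,
                     pressure_torr, temperature_K):
    v_d = drift_cm / drift_time_s            # drift velocity, cm/s
    K = v_d / field_V_per_cm                 # mobility, cm^2 V^-1 s^-1
    pressure_cgs = pressure_torr * 1333.22   # Torr -> dyn/cm^2
    N = pressure_cgs / (KB * temperature_K)  # gas number density, cm^-3
    return K * N / N0                        # reduced mobility K0

# Placeholder numbers: 4 cm drift in 0.28 ms under 40 V/cm at 8 Torr, 298 K.
print(reduced_mobility(4.0, 2.8e-4, 40.0, 8.0, 298.0))
```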
Langevin's theory
According to Langevin's theory [14], one limiting value of the mobility is reached when the repulsion becomes negligible compared to the polarization effect. This limit is given by

K_pol = 13.853 / √(α μ)  cm²·V⁻¹·s⁻¹, (1.3)

where α is the neutral polarisability in cubic angstroms (α = 0.394 Å³ for Ne [15] and α = 1.74 Å³ for N2 [15]) and μ is the ion-neutral reduced mass in atomic mass units. The Langevin limit is the value of K in the double limit of low E/N and low temperature, conditions which ensure the dominance of the polarization attraction over other atomic interactions (e.g. hard sphere repulsion), describing well our experimental conditions: low pressure, low temperature and low reduced electric fields.
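A minimal sketch evaluating the polarization (Langevin) limit of eq. (1.3) is shown below; the nominal integer ion and neutral masses used in the example are illustrative values, not data from this work.

```python
# Polarization (Langevin) limit K_pol = 13.853 / sqrt(alpha * mu),
# with alpha in cubic angstroms and mu (reduced mass) in atomic mass units.
from math import sqrt

def langevin_limit(alpha_A3: float, ion_mass_amu: float,
                   neutral_mass_amu: float) -> float:
    mu = ion_mass_amu * neutral_mass_amu / (ion_mass_amu + neutral_mass_amu)
    return 13.853 / sqrt(alpha_A3 * mu)  # cm^2 V^-1 s^-1

# Example: N4+ drifting in Ne (alpha = 0.394 A^3) and in N2 (alpha = 1.74 A^3)
print(langevin_limit(0.394, 56.0, 20.0))  # N4+ in Ne
print(langevin_limit(1.74, 56.0, 28.0))   # N4+ in N2
```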
Blanc's law
In binary gaseous mixtures Blanc's law has proven to be most useful when determining the ions' mobility. According to this law the reduced mobility of the ion in the binary mixture, K_mix, can be expressed as

1/K_mix = f_1/K_g1 + f_2/K_g2, (1.4)

where K_g1 and K_g2 are the reduced mobilities of that same ion in an atmosphere of 100% of gas #1 and #2, respectively, and f_1 and f_2 are the molar fractions of each gas in the binary mixture [16].
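Eq. (1.4) is straightforward to apply once the pure-gas mobilities are known; the pure-gas values plugged into the example below are placeholders rather than the values reported later in table 2.

```python
# Blanc's law for a binary mixture: 1/K_mix = f1/K_g1 + f2/K_g2.
def blanc_mixture_mobility(f1: float, k_g1: float, k_g2: float) -> float:
    f2 = 1.0 - f1
    return 1.0 / (f1 / k_g1 + f2 / k_g2)

# Placeholder example: an ion with K0 = 6.1 cm^2 V^-1 s^-1 in pure Ne and
# 2.4 cm^2 V^-1 s^-1 in pure N2, drifting in a 90% Ne / 10% N2 mixture.
print(blanc_mixture_mobility(0.9, 6.1, 2.4))
```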
Method and experimental setup
The mobility measurements presented in this study were obtained using the experimental system described in [4]. A UV flash lamp with a frequency of 10 Hz emits photons that impinge on a 250 nm thick CsI film deposited on the top of a GEM that is inside a gas vessel. The photoelectrons released from the CsI film trigger an electron avalanche inside the GEM holes, where they ionize the gas molecules encountered along their paths. While the electrons are collected at the bottom of the GEM electrode, the cations formed will drift across a uniform electric field region towards a double grid; the first acts as a Frisch grid while the second, at ground voltage, collects the ions' charge. A pre-amplifier is used to convert the collected charge into a voltage signal, and the time spectra are recorded on a digital oscilloscope. After the background is subtracted from the signal, Gaussian curves are fitted to the time-of-arrival spectra, from which the peak centroids are obtained. Since the peak centroids correspond to the average drift time of the ions along a known distance, the drift velocity and mobility can then be calculated. One important feature of the method is the capability of controlling the voltage across the GEM (V_GEM), and so the energy gained by the photoelectrons as they move across the GEM holes. This characteristic proves to be a great advantage since it enables the identification of the primary ions based on their ionization energies. Identifying the primary ions makes it possible to pinpoint secondary reaction paths that lead to the identification of the detected ions.
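The centroid-extraction step can be sketched as a simple Gaussian fit to a background-subtracted time-of-arrival spectrum; the synthetic spectrum and the initial-guess parameters below are purely illustrative, and the fitted centroid would then be fed to a drift-velocity/mobility calculation such as the reduced-mobility helper sketched earlier.

```python
# Sketch: fit a Gaussian to a background-subtracted time-of-arrival spectrum
# and take the fitted centroid as the mean drift time of the ion swarm.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, centroid, sigma):
    return amplitude * np.exp(-0.5 * ((t - centroid) / sigma) ** 2)

# Synthetic spectrum: a single peak centred near 0.28 ms plus noise.
t = np.linspace(0.0, 0.6e-3, 600)  # s
signal = gaussian(t, 1.0, 0.28e-3, 0.02e-3) + 0.02 * np.random.randn(t.size)

popt, _ = curve_fit(gaussian, t, signal, p0=(1.0, 0.3e-3, 0.05e-3))
drift_time = popt[1]  # fitted centroid, s
print(drift_time)
```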
Since impurities play an important role in the ions' mobility, before each experiment the vessel was vacuum pumped down to pressures of 10⁻⁶ to 10⁻⁷ Torr and a strict gas filling procedure was carried out. No measurement was considered until the signal stabilised, and all measurements were done within a 2-3 minute time interval to ensure minimal contamination of the gas mixture, mainly due to outgassing processes.
The method described together with the knowledge of the dissociation channels, product distribution and rate constants represent a valid, although elaborate, solution to the ion identification problem.
Results and discussion
The mobility of the ions originated in Ne-N 2 mixtures have been measured for different reduced electric fields E/N (from 10 Td up to 20 Td) and different pressures (in the 6-8 Torr pressure range) at room temperature (298 K).
The range of the reduced electric field values used to determine the ions' mobility is limited due to two distinct reasons: one is the electric discharges that occur at high E/N values; the other is the observed deterioration of the time of arrival spectra for very low values of E/N (below 5 Td or 1.2 kV·cm −1 ·bar −1 ), which has been attributed to collisions between the ions and impurity molecules.
A background work on the mobilities and ionization processes of Ne [5] and N 2 [6] in their parent gases has already been performed in our group.
Ne-N 2 mixture
In neon-nitrogen (Ne-N2) mixtures with N2 concentrations higher than 10% only one peak is observed, as can be seen in figure 1. The ion responsible for this peak is the same ion as in pure N2 according to the cross sections and rate constants displayed in table 1, i.e. N4+. Since the total ionization cross section for electron impact (at an energy of 23 eV) in Ne is 0.0166±0.001×10⁻¹⁶ cm² [17], about 18 times lower than that for N2 (0.492±0.025×10⁻¹⁶ cm² [18]), it is expected that even at low N2 concentrations (down to about 5% of N2) N2 ions are still the ones preferentially produced. For N2 concentrations below 10% in the mixture another peak becomes visible, as can be observed in figure 1. In table 1 the possible reactions are summarized together with the respective ionization cross sections or rate constants. Figure 2 shows the evolution of the fraction of ion species present as a function of time for N2 concentrations of 10% (figure 2a) and 50% (figure 2b), for a total pressure of 8 Torr. The relative abundance of the different ion species was calculated using both the electron impact ionization cross sections and the reaction rates summarized in table 1.
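In the spirit of figure 2, the evolution of the ion fractions can be sketched with simple rate equations. The sketch below is a deliberately simplified two-step chain (it ignores, for instance, Ne2+ formation): k1 is the Ne+ + N2 charge-transfer rate quoted in table 1, while the three-body rate k3, the initial ion fractions, and the gas composition are illustrative placeholders only.

```python
# Minimal kinetics sketch: evolve the fractions of Ne+, N2+ and N4+ during
# the drift with explicit-Euler integration of simplified rate equations.
import numpy as np

K1 = 1.1e-13   # cm^3 s^-1, Ne+ + N2 -> N2+ + Ne (table 1)
K3 = 5.0e-29   # cm^6 s^-1, N2+ + N2 + M -> N4+ + M (placeholder value)

def evolve_fractions(n_total, x_n2, t_end, steps=20000, f_ne0=0.2, f_n2p0=0.8):
    n_n2 = x_n2 * n_total                 # N2 number density, cm^-3
    f = np.array([f_ne0, f_n2p0, 0.0])    # fractions of Ne+, N2+, N4+
    dt = t_end / steps
    for _ in range(steps):
        d_ne = -K1 * n_n2 * f[0]                    # Ne+ lost to charge transfer
        d_n4 = K3 * n_n2 * n_total * f[1]           # N2+ converted to N4+
        f += dt * np.array([d_ne, -d_ne - d_n4, d_n4])
    return f

# 8 Torr at 298 K corresponds to roughly 2.6e17 cm^-3 total number density.
print(evolve_fractions(2.6e17, 0.10, 3.0e-4))
```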
As can be seen, the fraction of the different ion species present at the end of the drift distance will depend on the reaction time. A careful analysis of figure 2 can help to explain figure 1, where the time-of-arrival spectra for several Ne-N2 mixtures (5%, 10%, 50% and 90% of N2) at a pressure of 8 Torr, a temperature of 298 K and a reduced electric field of 15 Td, with a voltage across the GEM of 23 V, are displayed. As can be inferred from table 1, at very low N2 concentrations (up to about 5% of N2) the production of Ne+ ions will be more abundant, leading to the same ions as in pure Ne, while above this and up to 10% of N2 both Ne and N4+ ions will be produced. In the latter case, the ions responsible for the peaks observed will be Ne ions (Ne+/Ne2+)¹ for the peak with higher mobility and N4+ ions for the peak with lower mobility. The expected fraction of each species collected is about 83% of N4+, 8.9% of Ne+, 6.8% of Ne2+ and 1.3% of N2+, whereas the ions formed at the GEM are 78% N2+ and 22% Ne+. Further increasing the concentration of N2 in the mixture will lead mainly to the formation of N4+. Looking at figure 2b (50% N2), at about 0.28 ms, the drift time of the ion responsible for the spectrum in figure 1 (50% N2), the expected fraction of each ion species is 99.8% of N4+, with the remainder representing only 0.2%, while the ions formed at the GEM are 97% N2+ and 3% Ne+.
¹ As can be seen from figure 2a, the conversion of Ne+ to Ne2+ is incomplete for this mixture, so the mobility measured is in fact the result of the drift path travelled both as Ne+ and as Ne2+.
Figure 2. Fraction of ions that can be formed as a function of time for Ne-N2 mixtures of 10% (a) and 50% (b) of N2, for a total pressure of 8 Torr.
In fact, above 10% N2 the only ion expected is N4+, which results from N2+ through a three-body reaction,

N2+ + N2 + M → N4+ + M,

where M is an atom or molecule from the gas mixture, in our case Ne or N2. In this collision the excess energy is removed by a third body (M), preventing dissociation back to N2+. As can be seen from table 1, the reaction time will depend on M. As a consequence, the reaction time for the formation of N4+ will be affected by the chosen reaction partner (M), which can affect the signal shape if the drift time is of the same order of magnitude as the reaction time. A longer reaction time means that part of the drift path is spent as N2+, which has a lower mobility than N4+, originating an asymmetry towards the right side of the N4+ peak, as can be observed in the drift spectra displayed in figure 1.
As for the N2+ ion, it can be originated either from direct electron impact ionization of N2 or from the charge transfer reaction, which has a lower reaction time than the competing one in the pressure conditions of this experiment (as can be seen in figure 2b for 50% of N2). Since to our knowledge there is no charge transfer between Ne2+ and N2, we expect that, once formed, it will remain unaltered through the drift distance. Also, since the dissociation energy of N4+ (0.87 eV) is much larger than the kinetic energy of the ion under low reduced electric field, once N4+ is formed it will have a low probability of dissociating back into N2+ and N2 [6]. Concerning the peak area, we can observe in figure 1 that it varies with the N2 concentration in the mixture, a feature somehow related to the availability of Ne. There is also a peak shift to lower drift times with decreasing N2 concentration, which translates into an increase in the ion mobility.
This increase is due to the fact that the Ne atom has a much smaller mass than the N4+ ion, implying a much lower energy loss in elastic collisions with the gas atoms/molecules.
As mentioned, Blanc's law can be used to predict the mobility of the ions in gaseous mixtures. In figure 3 we plot the inverse of the reduced mobility obtained for the ions produced in the Ne-N2 mixture as a function of the different mixture ratios studied, for a pressure of 8 Torr and E/N of 15 Td, at room temperature (298 K). Dashed lines representing Blanc's law for the most abundant ion, N4+ (orange), as well as for N2+ (blue), are also displayed. In this case, in Blanc's law (eq. (1.4)), K_g1 and K_g2 were obtained either by using Langevin's formula (eq. (1.3)) for K(N2+/Ne) or by selecting experimental values from the literature.
Observing figure 3 it is possible to conclude that the experimentally obtained ion mobility roughly follows, within error bars, Blanc's law for the most abundant ion down to 40% N2, while below this N2 concentration it deviates towards the N2+ theoretical value given by Blanc's law. The same figure indicates that the ion species observed depend on the amount of N2 in the mixture (a second peak appears, as seen in figure 1 for 5% and 10% N2 and also in figure 3).
In fact, starting from pure Ne, as a result of the reaction rates and pressures, we have initially the formation of Ne+ and Ne2+, then Ne+ and N4+, and finally only N4+, according to the different reaction channels discussed previously. In addition, it is also possible to see that increasing N2 leads to a significant decrease in the mobility of the N2 ions present, i.e. N4+. The mobility values in this experiment range from 6.41±0.06 to 2.37±0.03 cm²·V⁻¹·s⁻¹ for the slowest ion, and from 7.20±0.06 to 6.55±0.05 cm²·V⁻¹·s⁻¹ for the fastest ion, for E/N = 15 Td and 8 Torr. No significant variation of the mobility was observed in the range of pressures (6-8 Torr) and of E/N (10-20 Td) studied.
The mobility values of the ions observed for the Ne-N2 mixture ratios studied, at E/N of 15 Td, a pressure of 8 Torr and room temperature (298 K), are summarized in table 2. From 0 to 100% N2, the peaks observed were seen to vary in position and area, demonstrating that the ion or ions formed and their mobilities depend on the ratio of the two gases used.
Conclusion
In the present work we measured the reduced mobility of ions originated by electron impact in Ne-N2 mixtures using pressures from 6 to 8 Torr, low reduced electric fields (10-20 Td) and different mixture ratios. The experimental results show that for the mentioned mixtures two peaks were observed in the range of concentrations studied. The ions responsible for these peaks are believed to be the ones originated in Ne (Ne2+ and Ne+) for concentrations of N2 up to 5%, while for concentrations up to 10% of N2 the ions observed are Ne+ and N4+. Above 10% N2 the only ion observed is N4+, which can be formed through different reaction channels. The ions' mobility was seen to decrease with the increase of N2 concentration in the mixture, with the behaviour roughly following Blanc's law for N4+ down to 40% N2 and then gradually changing towards the N2+ predicted behaviour. An asymmetry was observed in some spectra; this asymmetry is expected to be due to the different reaction paths, discussed above, that originate N4+. Additionally, we verified that the calculated mobilities did not display a significant dependence either on pressure in the range studied (6-8 Torr) or on E/N in the range used in this work (10-20 Td).
Future work with other gaseous mixtures is planned. It is our intention to extend the work on ion mobility using different mixtures of known interest such as Xe-CO₂, Xe-CH₄, Ar-CF₄, Ne-CF₄ and Xe-CF₄.
The State Unified Exam as a Requirement in Russia's New Economic Relations
This article examines the issues of the low quality of knowledge, reflected in the results of the last State Unified Exam (SUE), in the context of the shift of the national economy of the Russian Federation towards innovative national development. A special focus is on the analysis of recent years’ changes in SUE results and the low effectiveness of the use of relevant resources in the educational system. The author proposes integrating the principle of continuing individualized education with the financial capabilities of the state and the population through the use of the mechanism of a universal electronic card.
Introduction
The State Unified Exam remains one of the most challenging subjects in terms of Russian education reform. The recent 12-point decrease in the threshold SUE minimum in the Russian language sparked a wave of both criticism and support on the part of the public.
However, we should approach the exam results with care not just in terms of the present-day state of Russian education but rather in terms of prospects in the development of society itself (Gorin et al., 2013; Trukhachev, 2013; Zaitseva & Popova, 2013). It is the latter, in our view, that is lacking.
When it comes to arguments behind the criticism, the most common ones suggest that the current system of knowledge control is destroying its content and the secondary school itself, whose graduates are keen not so much on acquiring knowledge as on being able to pass tests. Many colleges are unable to enroll the right students, with the list of those instituting additional entrance trials growing steadily.
Reform proponents, those in favor of reforming the SUE in particular, argue that it eases entry into the capital's colleges for children from remote regions.
But the question arises as to what particular facts have to do with future social-economic development, whether the development of education in Russia is oriented towards improving the quality of selection of future students or whether education has to fulfill major social-economic functions, and where the issue of selection fits here then, in the first place.
It should be noted, above all, that amid the transformation of modern society into a knowledge society the educational system is becoming a central organizational-economic instrument for the formation of a new quality of human capital (Bobryshev et al., 2014;Gerasimov et al., 2014).Furthermore, educational services themselves have turned into an object for the formation of new economic and financial relations, which will be the basis of the entire social-economic system.
In this context, all employed instruments for reforming education ought to be oriented not towards enhancing and developing existing economic relations (Trukhachev et al., 2014;Gerasimov et al., 2013;Berezhnoy et al., 2014) but be adapted to the relations of the knowledge economy.
At the same time, the major objectives in the development of the educational system, established for the near future, do not address the most crucial point-raising the issue of the making of a new system of funding education, the hour for which is intrinsically ripe right now and which is crucial to the shift to a knowledge society (Sklyarov & Sklyarova, 2009).
Methods
An increase in the share of science-intensive production, inherent to the knowledge economy, provides a rationale for increased demand for a highly qualified workforce.In a market economy, people who are more educated have the opportunity to receive a higher remuneration for their work.Furthermore, human capital accumulated as a result of going to college translates into not just economic benefits but an improvement in the quality of life.
A top priority for state policy makers in the sphere of education is boosting the quality and level of education for all strata of the population regardless of the citizen's descent, income, and place of residence.Over recent years, the overall number of those attending educational institutions of all levels has been an average of 29 million per year, i.e. over 20% of the population of the Russian Federation.The overall number of those attending institutions of preschool and general learning is about 19 million people, while the number of students going to institutions of professional learning is about 10 million people (Saprankov, 2013).
Over 70% of the total volume of funding from the federal budget is directed into the sphere of higher education.Furthermore, all allocated funds are aimed at boosting the quality of education and preparation of specialists and human resources whose competence will fully meet the requirements of the present-day labor market.
Higher education governs the immediate increase in the economic and social effectiveness of individuals.Thus, education acts as an investment sector and a source of growth in human capital, which is something other areas of activity cannot accomplish (Toffler, 2002;Coombs, 1985).The aim of the modern educational system is to develop the overall cultural level, ensure the acquisition of fundamental knowledge, and foster the ability to study and develop personal competencies, on the basis whereof practical skills are formed.
The operation of the modern educational system is aimed at boosting the overall cultural level of the individual, the acquisition of knowledge, as well as the formation of practical skills. These days, employers set quite high requirements on the level of training of specialists, demanding from them a broad spectrum of both professional and personal competencies. The formation of a highly educated individual is grounded in the development of the following competencies:
- social, which deal with the individual's ability to make responsible decisions, work in a team, and resolve conflict situations;
- intercultural, which are about respecting others as well as the ability to interact effectively with representatives of other cultures, languages, and religions;
- communicative, which deal with having a competent command of speech and more than one language;
- information, which are about the ability to apply information-communications technology;
- the capacity for continuing education.
The above competencies serve as instruments for the accumulation and development of human capital.
Investing in human capital is closely linked with issues in the operation of the market of educational services, which play a central role in the process of formation of a qualified workforce and reflect the investment character of expenditure on education on the part of students.Education in Russia remained free for students for a long period of time.Today the opportunity to study on a paid basis is provided by both commercial and state-run colleges.Investing money in education, the individual thereby takes an active part in the formation of human capital.
Besides, over the last decade we have seen a substantial increase in the number of students attending universities, institutes, and academies in Russia.Note that according to the annual overview of major OECD indicators in the sphere of education, Canada, Korea, and the Russian Federation are leading the OECD member states and the G-20 on the share of young people (25-34 years of age) with a higher education.
By facilitating the build-up of human capital, the developed market of educational services makes a tangible impact on the labor market.A key issue in present-day higher education is the mismatch between curricula and the needs of business.Enterprises are often in need of additional training for their personnel, in terms of not just issues related to a specific narrowly professional sphere but in terms of basic education, as well as relevant qualifications.In this regard, an objective of educational institutions becomes forecasting trends in the development of the labor market and adapting curricula in accordance with those trends.
Results
A great impact on the development of innovation activity in the system of higher education has been made by the implementation of innovation programs as part of a priority national project entitled "Education".As a consequence, through budget funds a substantial number of colleges managed to substantially augment the innovation component, which deals with developing and commercializing scientific-technical achievements.
With the implementation of the project, colleges that won received an opportunity to use additional funding and the engaged funds depending on the potential they already had as well as their field of activity.Technical and classic universities directed the major volume of funds at upgrading the instrument base of science and education, and colleges specializing in humanities and social sciences at developing new learning methodologies and training human resources, including those for innovation entrepreneurship.In 2014, the SUE was taken by about 757 thousand people.Out of those, 94.6% were graduates in that year.The rest, 5.4%, were retaking the exam.To compare, in 2013 graduates in that year accounted for 86.1%.Note that, on the whole, the number of SUE participants was greater-863 thousand people.Figure 1 provides a diagram illustrating the distribution of the overall number of SUE participants across subjects over 2013-2014.The diagram indicates that virtually all participants took Russian and Mathematics, which are compulsory subjects needed to be taken to receive a diploma.The rest of the subjects were sat for on a voluntary basis, in any number, in accordance with which specialty (training field) the student was planning on receiving a professional education in.Among voluntary subjects, the majority of the participants-over a half-were sitting for Social Science.A much lower number of graduates picked Physics, History, and Biology.And an even lower number picked English, Computer Science, Literature, and Geography.Very few sat for German, French, and Spanish.Note that the structure of distribution of subject preferences among participants in 2014 did not change much relative to 2013.Out of 757 thousand participants in 2014, the SUE was failed by 5 thousand students (0.7%), and in 2013 out of 863 thousand it was failed by 6.5 thousand (0.8%).Despite all the advantages, right from the moment the SUE was introduced, as an experiment first and a fully instituted exam later on, it became a subject of heated debate.The major issues of debate were the objectiveness of grading students' knowledge and result rigging.The first issue involved the argument that based on the SUE methodology the likelihood of a well-prepared graduate getting a high score was higher than that of he/she getting a low score.By the same token, there was talk of the likelihood of poorly prepared graduates getting low scores (Shakhariyants, 2010).Regarding the second issue, the holding of the SUE has been accompanied by a slew of headline-making cases over all of the last years.Thus, in July, 2013, a major leak of monitoring/measuring materials (MMM) triggered the dismissal of the Deputy Minister of Education and Science of the Russian Federation, who supervised the SUE (Fursenko, 2013).However, 2014 proved exceptional in that respect.No cases of a leak of MMMs prior to the exam were recorded.Besides, the fair holding of the exam locally was ensured through unprecedented control measures-the examination classrooms had been equipped with metal detectors and surveillance cameras.
Tougher control measures led, on the one hand, to a substantial increase in expenditure on organizing the SUE. Thus, in 2013 it cost 500 million rubles. In 2014, the SUE was funded in all respects at the level of the previous year. Besides, additional funds were expended on installing video cameras (600 million rubles) and implementing tougher control measures regarding the delivery of monitoring/measuring materials to some regions (63 million rubles). Note that the additional activities cost the budget more than all the major expenses.
Discussion
In an information society, science becomes a sort of generator of human capital.The scientific-technical component of human capital, thus, becomes one of the top national priorities.The development of human capital, the technological modernization of production, and the shift to the innovative path of economic development are the basis for future economic growth and a real alternative to the country's raw-materials specialization.
The country's modernization potential directly depends on a consistent shift to innovative economic development (Tatuev, 2013).An important factor in this shift is the formation, development, and accumulation of human capital.The interaction of development processes could be pictured as a helix (education-human capital-innovative economy-education).Education ensures the accumulation of human capital; a high level of human capital governs the successful flow of innovation processes within the economy, which, in turn, determines the implementation of new technology in the educational process and facilitates the development of human capital, etc.This helix reflects the onward progress of the innovative knowledge economy.
Science and higher education ought to be in line with the needs of modern society and emerging trends in its development.Currently, the adaptation capabilities of the educational system are lagging behind the pace of economic transformations-primarily, in terms of the dynamics of demand for specialists.Knowledge acquired in educational institutions does not match knowledge required of specialists in the labor market.There are various ways to overcome this gap: through internships, career enhancement, getting additional education, retraining, self-education, etc.In present-day conditions, all of the above ways ought to be put into practice on a constant basis, i.e. in the continuing education mode.However, all of them require substantial expenses, including financial expenditure.
Today we can assert that the human potential of Russia's scientific-technical and educational sphere is being reproduced quite inefficiently, which, among other reasons, is due to a lack of young human resources.In large part, this is associated with a number of material reasons-low salaries, trouble resolving housing issues, outmoded jobs and equipment.Besides, many researchers are noting changes in the cultural status of science in Russia-once a supreme value, science is now getting transformed into quite an ordinary, value-wise, social and cultural phenomenon.
The Concept of the Long-Term Social-Economic Development of the Russian Federation through to 2020 sets out the major dimensions of the shift to the innovative, socially-oriented path of national economic development.The Concept-2020 states that shifting Russia's economy to the innovative path of development will require forming a globally competitive national innovation system and a complex of legal, financial, and social institutes, which would ensure the interaction of educational, scientific, entrepreneurial, and non-commercial organizations and establishments in all spheres of the economy and social life (Gerasimov et al., 2013).The formation of a national innovation system capable of ensuring the effective integration of higher education and science can ensure the shift from the export/raw materials model to the innovation model for economic growth.
According to the Concept, a crucial characteristic of the shift to the innovative, socially-oriented path of national economic development is the need for the simultaneous resolving of the objectives of both catch-up and advanced development.The conditions of global competitiveness and open economy make it impossible for the Russian economy to reach the level of developed countries on indicators of welfare and efficiency, which cannot be achieved without ensuring the advanced development of those sectors of the Russian economy which determine its specialization within the global economic system and ensure the most effective realization of national competitive advantages.
Among the issues with the existing model for economic growth is the increase in the population's income, which is outstripping the pace of growth in the GDP, which, in turn, is accompanied by the augmentation of economic differentiation.
Consequently, the shift from the export/raw materials model to the innovation model for economic growth ought to be based on the formation of a new mechanism for social development, which should be predicated on a balance between entrepreneurial freedom, social justice, and national competitiveness.Regarding the development of Russia's human potential, the Concept speaks of the need for a shift from the system of mass education, typical of industrial economies, to the system of continuing, individualized education for all, which is crucial to creating an innovative socially-oriented economy, the need for the development of a type of education that will be indissolubly linked with global fundamental science and oriented towards the formation of creative, socially responsible personality.
However, the development of innovation activity in the system of higher education is impeded by a set of internal and external factors (Tatuev, 2012). The internal factors are:
- the low innovation activity of instructors, as well as a lack of specialists in the area of innovation management;
- the outmoded material-technical base of colleges, outmoded testing and experimental operations, and, as a consequence, the absence of a full cycle of innovative product creation;
- the low development level of college infrastructure;
- poor cooperation between universities, as well as between universities and regions' industrial, economic, and social spheres.
The external factors for the low innovation activity in the sphere of education are:
- the lack of mechanisms for active government support of small innovative enterprises under colleges;
- the ineffectiveness of government support of innovation infrastructure facilities.
Here we get to the most complicated issue: towards what, and how, to orient the system of management of higher education. First and foremost, we need to resolve the existing contradiction whereby, on the one hand, higher education has become a key factor in the development of human capital, while, on the other, the system of management of its development is increasingly oriented towards redistribution relations, which has been demonstrated by the results of the last SUE.
It is in this context that we may find interesting other results of having made control measures tougher-a substantial decrease in scores.We can see from the table that the average score has increased only in Spanish and French.However, considering the low number of SUE participants who sat for those subjects, those values are not representative.The most representative is the situation with Mathematics, where the average score has dropped from 49.6 to 39.6-a 20.1% decline.There is an 11.7% decline in Social Science-the average score has fallen from 60.1 to 53.1.There have also been quite negative changes in the results in Physics, History, and Biology.In Physics, the average score has dropped from 54.6 to 45.8-a 16.3% decline.In History, the average score has fallen from 55.9 to 45.7-an 18.2% decline.In Chemistry, the average score has dropped from 68.7 to 55.7-an 18.9% decline.Against the backdrop of the declining average score, we observe a rather negative dynamics with the situation involving a decrease in the number of 100-pointers-SUE participants who scored 100 points on the exam in a subject.Table 2 provides information that lets us assess the dynamics of change in the number of SUE participants who scored 100 points in the subjects over the period from 2013 to 2014.The table provides values for the shares of participants who scored 100 points in the subjects over 2013 and 2014 and an index of gain in results (obtained through dividing the difference between the 2014 value and the 2013 value by the 2013 value and multiplying the result by 100), which is expressed in %.
The table shows that in some subjects there are no longer any 100-pointers whatsoever (German), while in other subjects the number of such participants has declined considerably.Thus, for instance, in Mathematics there is a drop from 0.07 to 0.01%-an 86.5% decline.In Social Science, the decline is 85.4% (from 0.1 to 0.02%), in Physics 65.0% (from 0.23 to 0.08%), in History-76.8% (from 0.3 to 0.07%), and in Chemistry-82.6%(from 3.43 to 0.6%).The only improvement here has been recorded just in Russian, a 9.3% gain-from 0.31 to 0.33% of the 100-pointers.The total number of participants who scored 100 points has dropped from 9 to 3.5 thousand.
In the meantime, even the above dynamics does not reflect the entire picture, since the overall decline in SUE results has been counterbalanced in an artificial way-by lowering the minimum number of points in the compulsory subjects-Russian (from 36 to 24 points) and Mathematics (from 24 to 20 points).It is this step that made it possible to smooth over the negative exam results-as a matter of fact, the same step led to making the unsatisfactory result stand.Thanks to this step, the number of those who failed to pass the assessment in the compulsory subjects and will not be able to receive a secondary general education diploma in 2014 has decreased, as has been stated above, to 5 thousand people.However, the real figures could have been a lot higher.Thus, according to some data, if the passing score had not been lowered, just the exam in Russian could have been failed this year by as many as 30 000 graduates.According to other data, if it had not been for the reduction in the passing score in both subjects, as many as 20-25% of graduates could have failed to get a diploma-and that considering the fact that, on the whole, as many as 4/5 of graduates, technically speaking, received 3s or failed the assessment altogether (Burmatov, 2014).Then, based on this data, the differentiation index was calculated (through dividing the difference between the average score achieved in the subject by participants from urban schools and that by participants from rural schools by that by participants from rural schools and multiplying the result by 100).
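For concreteness, the two percentage indexes used in this analysis (the gain index comparing 2014 with 2013, and the urban/rural differentiation index) reduce to the same relative-change arithmetic; the following short Python sketch illustrates it with the Mathematics figures quoted in the text, while the urban/rural values are placeholders rather than numbers taken from the tables.

```python
def percent_change(new, base):
    """Relative change of `new` with respect to `base`, in percent."""
    return (new - base) / base * 100.0

# Gain index for Mathematics: the average score fell from 49.6 (2013) to 39.6 (2014).
print(round(percent_change(39.6, 49.6), 1))   # about -20, i.e. a roughly 20% decline

# Differentiation index: hypothetical urban vs. rural average scores.
urban, rural = 47.0, 44.6                      # placeholder values
print(round(percent_change(urban, rural), 1))  # urban lead over rural, in percent
```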
As we can see from the table, the average score in all the subjects taken together achieved by urban school graduates is 7.5% higher than that achieved by rural school graduates. Note that urban school graduates lead in all the subjects; that is, as a rule, urban schools prepare students for the SUE better. For the compulsory subjects, the differentiation is at the following levels: Mathematics, 5.3%; Russian, 7.9%. For Social Science it is 6.3%. The highest level of differentiation is in the foreign languages and Computer Science, from 22.0 to 77.6%.
On the whole, the results of the SUE held in 2014 characterize most eloquently the fundamental issue faced by the national educational system-the low level of the quality of its performance.This fact has been confirmed by the Minister of Education and the President, as well as other competent persons, with modernization stated to await the educational system.That said, modernization will again be carried out through targeted, pinpoint measures.Thus, for instance, there are plans to reconsider academic programs related to teaching Russian and requirements on the quality of work of instructors.For that purpose, on the initiative of the Federal Service for Supervision of Education and Science there will be set up a task force to deal with issues related to enhancing teaching Russian, which will focus on improving teachers' level of qualification (Chernykh, 2014).
However, the efficiency of the activities proposed was immediately called into doubt (Kovalenko, 2014).More specifically, there is an argument whereby the across-the-board use of the SUE mechanism to assess the quality of education has led to the reorientation of secondary general education from the imperatives of preparing "worthy members of modern society" to the imperatives of "cramming students for tests" (Privalov, 2014).Besides, there is the issue of using the SUE to assess the quality of work done by governors and various executives in the system of education.Note that, in point of fact, the entire blame for someone's low grades has been put on the instructors, as a result of which there emerged talk of the latter's low level of qualification, the lack of young human resources, and the insufficient level of motivation.
The same year saw the abolition of the criterion of assessing governors based on SUE results.Besides, they instituted a ban on condemning instructors and schools whose students got the lowest exam scores (Privalov, 2014), since such approaches are deemed absolutely nonsensical, particularly considering the fact that the preparation level at the capital's high schools is much higher than that at rural schools.However, the measures planned, in company with the attempt to raise teacher salaries, are not likely to change things much, since they are not aimed at resolving fundamental errors laid down in the national educational system over the last years of reform.Most of the measures are aimed exclusively at improving the SUE mechanism, which, in essence, is a mechanism for control of the quality of the educational system.Even when we are talking about altering the programs, enhancing instructors' level of qualification, and engaging young specialists, it is still primarily about achieving the best results in the exam in the future.However, virtually no attention is given to problems faced by the educational system per se amidst the realities of a new society-a knowledge society wherein it is man and his knowledge that become the major element of production capital, his capacity for learning and applying his skills in practice.The results of the 2014 SUE have demonstrated that it is this issue that most graduates had serious trouble with.4/5 of graduates (those who had 3s or failed the attestation altogether) possess zero knowledge and have not been taught the skills of self-education.Therefore, the likelihood of graduates having learned anything major on leaving school is quite low.Consequently, this 4/5 of graduates cannot already become the basis for the development of a knowledge society in Russia.In other words, 4/5 of all resources expended on students in the educational system have virtually been wasted.
Conclusion
In our view, we should focus our high education management priorities on building a new system of economic and, principally, market relations encompassing both corporate and national levels.The object of these relations should be the formation of integrated and targeted investments in human capital in the form of specific funding of the system of higher education.Furthermore, it is expedient to form criteria for the effectiveness of this type of investment taking account of the immediate interests of the population as the major consumer of higher education services.
At the same time, these activities will not presuppose the immediate material interest of the very students and institutional establishments at all levels.
The technological vista of realizing the immediate priority of motivational factors is opening up at the modern stage of effectuating administrative reform, as part of which there was passed the Federal Law # 210-FZ of July 27, 2010 "On Organizing the Provision of State and Municipal Services", which provides for the issuance of universal electronic cards for citizens.
Such a card will be a material carrier containing digital information on the owner and his/her rights to consume state and municipal services.Accordingly, such universal electronic cards can be used by RF citizens as well as foreign citizens when provided for by federal laws.
The universal electronic card will become an informative document that will serve as proof of a citizen's identity, the rights of an insured person in compulsory insurance systems, and other rights of a citizen to consume state and municipal services, including in the sphere of education.Thus, users of universal electronic cards become immediate participants in budget-administration relations.In this regard, it is important to work out principles that would make it possible to organize, through universal electronic cards, the administration of monies in budgets and non-budget funds at different administrative levels and direct them towards payment for educational services, including with the possibility of adding personal funds by citizens and corporations.
On this basis, there is being formed a new structure of economic relations associated with the provision of educational services.These relations will help to substantially increase the revenue of institutions of higher learning and form more equal conditions for access to quality higher education for people from all walks of life.In this context, there are additionally developed the motives of increase in personal expenditure on acquiring knowledge, which determine the primary specificity of corresponding systems of management and the trajectory of the development of higher school management.
The implementation of educational programs helped augment the crucial elements of colleges' innovation infrastructure-innovation complexes.The specificity of the latter lies in a combination of scientific, educational, and production resources, which in the future facilitates ensuring a new quality of education, the development of scientific research, and commercialization of the results of scientific-technical activity.Furthermore, one of the more known mechanisms for assessing the quality of education in Russia is the State Unified Exam (SUE).The SUE is an examination conducted in a centralized fashion in secondary educational institutions based on secondary general education curricula.It serves simultaneously as a final examination and an entrance examination to enter a college.The application of the SUE throughout the country involves using single-type assignments and grading methods.
Figure 1 .
Figure 1.The distribution of the total number of SUE participants across subjects, %
Table 1 .
The dynamics of change in the average SUE test scores in the subjects over the period from 2013 to 2014. Table 1 provides information that lets us assess the dynamics of change in the average SUE test score in the subjects over the period from 2013 to 2014. The table provides values for average test scores recorded in 2013 and 2014 and contains an index of gain in results (obtained by dividing the difference between the 2014 value and the 2013 value by the 2013 value and multiplying the result by 100), which is expressed in %.
Table 2 .
The dynamics of change in the number of SUE participants who scored 100 points in the subjects over the period from 2013 to 2014
Table 3 .
The differentiation of the population of different populated localities by the average SUE test score in the subjects in 2013, %. Besides, holding a fair SUE illustrated the scale of one other issue: differentiation in the quality of work by different schools. Table 3 provides information based on which we can assess the differentiation of the population of different types of populated localities by the average SUE test score in the subjects in 2013. More specifically, the table provides values for average SUE test scores in the subjects among SUE participants who attended rural and urban schools.
Dynamic mutation based glowworm swarm optimization with long short-term memory approaches for thyroid nodule classification
Objectives: To design an efficient approach for thyroid nodule classification with a higher true positive rate. Methodology and statistical analysis: The proposed system is designed as a Dynamic Mutation based Glowworm Swarm Optimization with Long Short-Term Memory (DMGSO with LSTM) scheme for thyroid nodule classification. In this proposed research work, input thyroid images are preprocessed using a Dynamically Weighted Median Filter (DWMF). The preprocessed images are segmented with the help of a region-based active contour scheme. Improved Local Binary Pattern (ILBP), Grey Level Co-occurrence Matrix (GLCM) and Histogram of Oriented Gradient (HOG) features are extracted from the segmented image. The optimal features are then selected using the Dynamic Mutation based Glowworm Swarm Optimization (DMGSO) algorithm. Finally, the Long Short-Term Memory (LSTM) scheme is utilized for classifying the thyroid nodule. Findings: The experimental results show that the proposed system achieves better performance compared with the existing systems in terms of accuracy, precision, recall and f-measure.
Introduction
A thyroid nodule is a solid lump that can grow in the thyroid gland; it can be a single lump or a cluster of nodules. Research studies indicate that about 60% of people are affected by thyroid nodules. Fine Needle Aspiration Cytology (FNAC) is popular and widely used for diagnosing thyroid nodules because of its higher sensitivity compared to other methods (1). FNAC is a fast and inexpensive method, and it provides important information for differentiating benign from malignant nodules, which reduces unnecessary surgeries. Recently, medical imaging techniques including ultrasound (US) imaging and computerized tomography (CT) have been used for diagnosing thyroid nodules with greater accuracy (2). The US imaging modality is non-invasive, low cost and does not use any ionizing radiation. Compared to CT, US imaging techniques are widely used due to the equipment's size and portability. US imaging is operator dependent; images are analyzed manually by a sonographer or physician, and manual analysis is subjective, so even experienced readers may provide different diagnoses. To solve the aforementioned issues, computer aided diagnosis (CAD) systems have been proposed to discriminate benign from malignant nodules. Feature extraction plays a significant role in the classification task, and researchers have attempted to develop CAD systems using various feature extraction and classification techniques (3,4). Most researchers have used texture features and the Support Vector Machine (SVM) for diagnosing thyroid nodules (5). Deep learning neural networks have been successfully applied in many fields such as pattern recognition, segmentation, object detection and classification, and studies have shown that they provide outstanding performance compared to standard Artificial Neural Networks (ANNs) (6-8).
The previous work designed a Modified Ant Colony Optimization (MACO) with a Modified Adaptive Network-Based Fuzzy Inference System (MANFIS) for thyroid ultrasound image classification (9); however, it suffers from a long training time. In order to eliminate the operator dependency and improve the diagnostic accuracy, the proposed system designs a deep-learning-based thyroid nodule classification that minimizes the error between the observed and predicted data. Initially, the thyroid images are segmented using a region-based active contour scheme. Improved Local Binary Pattern (ILBP), Grey Level Co-occurrence Matrix (GLCM) and Histogram of Oriented Gradient (HOG) features are extracted from the segmented image. The Dynamic Mutation based Glowworm Swarm Optimization (DMGSO) algorithm is utilized for optimal feature selection. Finally, the Long Short-Term Memory (LSTM) scheme is utilized for classifying the thyroid nodule.
The rest of this study is outlined as follows: Section 2 details a survey of the thyroid nodule classification methods. Section 3 explains the functioning of the developed model. Section 4 presents the numerical results. Section 5 presents empirical findings of this research work.
Proposed Methodology
The proposed system focuses on diagnosis of thyroid nodules based on Dynamic Mutation based Glowworm Swarm Optimization with Long-Short Term Memory (DMGSO-LSTM) scheme. Figure 1 shows the block diagram of the developed model for thyroid nodule classification.
Preprocessing using the Dynamically Weighted Median Filter (DWMF)
In this proposed research work, the DWMF is used for pre-processing. To obtain noise-free images, a W × W window is formed using a 2D Gaussian surface function. Let I be the input noisy image and I_b the corresponding binary image. W × W windows W_n and W_b are chosen around the identified noisy pixels in I and I_b, respectively. A window weight W_wt is computed and moved over I_b; pixels are discarded where W_b has the value 1. Detected noisy pixels are assigned 0, and weights are shifted if gaps are observed in W_wt due to the elimination of noisy pixels. For instance, if the W_wt of a window is 4, the pixel is marked as noisy and removed; a gap is then detected when W_wt jumps from 3 to 5. Weights are reallocated to reduce duplications, and the modified window is added. The highest W_wt value is incremented by 1 if the sum is even; there is no change if the sum is odd. The final window is obtained after checking the odd sum of repeated windows to create a repetition array A_R. The noisy pixel is substituted by the median value of A_R.
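The exact DWMF weighting scheme above is specific to this work; as a simplified stand-in, the sketch below replaces pixels flagged as noisy with the median of their non-noisy neighbours, which captures the basic idea of noise-aware median filtering. The window size and the noise mask are assumptions, not the authors' settings.

```python
import numpy as np

def masked_median_filter(image, noise_mask, half_window=1):
    """Replace pixels flagged in `noise_mask` with the median of the
    non-noisy pixels inside a (2*half_window+1)^2 neighbourhood."""
    out = image.astype(float).copy()
    rows, cols = image.shape
    for r, c in zip(*np.nonzero(noise_mask)):
        r0, r1 = max(0, r - half_window), min(rows, r + half_window + 1)
        c0, c1 = max(0, c - half_window), min(cols, c + half_window + 1)
        patch = image[r0:r1, c0:c1]
        clean = patch[~noise_mask[r0:r1, c0:c1]]
        if clean.size:                      # keep the pixel if no clean neighbour exists
            out[r, c] = np.median(clean)
    return out
```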
Segmentation
The Localized Region based Active Contour (LRAC) is adopted for the segmentation process. Research studies have shown that LRAC using the level set method is a good candidate for thyroid nodule classification. LRAC segments the images in two stages: (i) curve evolution and (ii) segmentation. Curve evolution uses a level set to detect the boundary of the image; based on the detected boundary, the active contour scheme separates the object from its background. The advantages of LRAC are robustness to noise and automatic boundary detection.
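The paper's localized region-based active contour is not reproduced here; purely as an illustrative, assumed alternative, scikit-image provides a morphological Chan-Vese active contour that performs comparable region-based, level-set-style segmentation. The function name and keyword arguments below reflect the scikit-image API as I recall it and should be checked against the installed version.

```python
from skimage import img_as_float
from skimage.segmentation import morphological_chan_vese

def segment_nodule(gray_image, iterations=150, smoothing=3):
    """Region-based active-contour segmentation of a grayscale ultrasound image.

    Returns a boolean mask of the segmented region."""
    image = img_as_float(gray_image)
    # Checkerboard initial level set; the contour evolves toward region boundaries.
    level_set = morphological_chan_vese(image, iterations,
                                        init_level_set='checkerboard',
                                        smoothing=smoothing)
    return level_set.astype(bool)
```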
Feature extraction methods
Feature extraction is used to extract the most important attributes from the segmented image. It is very difficult to select useful information from medical images. Over the past years, several feature extraction methods have been proposed, and each method has its own characteristics; no single algorithm can extract all the important features for thyroid nodule classification. To address this, the proposed system extracts ILBP, GLCM and HOG features from the segmented image.
Improved Local Binary Pattern (ILBP)
LBP is an image processing method that is used to extract texture features. The merits of LBP are its easy implementation and fast operation time. ILBP improves the classification performance by assigning every uniform pattern to a separate label ranging from 0 to P(P-1)+1. In ILBP, the oriented mean and standard deviation of the local absolute differences are considered to make the matching more robust against local spatial structure changes. To minimize the variations of the mean and standard deviation of the directional differences, a scheme that minimizes the directional difference along different orientations introduces the parameter w.
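The improved LBP variant described above is specific to this work, but the underlying uniform-pattern LBP is readily available; the sketch below uses scikit-image's `local_binary_pattern` and builds a normalized histogram feature vector. The radius and number of sampling points are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    """Uniform LBP histogram of a grayscale image, normalized to sum to 1."""
    codes = local_binary_pattern(gray_image, points, radius, method='uniform')
    n_bins = points + 2                      # uniform patterns plus one 'non-uniform' bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()
```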
Grey Level Co-occurrence Matrix (GLCM)
The GLCM is a commonly used method for extracting textural features from images. It represents the relation between a reference pixel (i) and a neighbour pixel (j) at various orientations. The texture features calculated from the GLCM are contrast, correlation, homogeneity and energy.
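A minimal sketch of these four GLCM statistics using scikit-image follows. The distances, angles and grey-level quantization are assumptions, and the function names (`graycomatrix`/`graycoprops`) follow recent scikit-image releases (older releases spell them `greycomatrix`/`greycoprops`).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_uint8_image, distances=(1,), angles=(0, np.pi / 2)):
    """Contrast, correlation, homogeneity and energy averaged over all offsets."""
    glcm = graycomatrix(gray_uint8_image, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ('contrast', 'correlation', 'homogeneity', 'energy')}
```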
HOG features
HOG is a method for extracting representative features. It describes local object appearance and shape through the distribution of intensity gradients. In the HOG method, the input image is divided into small cells and a histogram of gradient orientations is calculated for each cell (10). The obtained histograms are concatenated to form the image descriptor, and a local histogram normalization is applied to enhance the descriptor (11); the intensity values are used to normalize all cells within a block. The steps to extract HOG features are presented in Table 1.
Table 1. HOG feature extraction algorithm. Step 1: Compute the gradient in both horizontal and vertical directions using Equation (1) and Equation (2), respectively.
Step 2: Calculate the HOG. Orientation binning is the process of creating cell histograms. Histogram channels are either unsigned or signed: the unsigned histogram spans 0 to 180 degrees, whereas the signed histogram spans 0 to 360 degrees. Based on the computed orientation value, each pixel is assigned to a bin.
Step 3: Create descriptor blocks. The cell orientation histograms are grouped into larger, spatially connected blocks before they are normalized. The grouping process makes the descriptor robust to illumination and contrast variations. Rectangular (R-HOG) and circular (C-HOG) blocks are the widely used variants. The R-HOG block is typically a square grid described by the number of cells, the number of pixels per cell, and the number of histogram channels. Adjacent blocks overlap by half a block.
Step 4: Block normalization. Using the L2-norm, the block vector v is normalized as v → v / sqrt(‖v‖₂² + e²) (Equation (9)), where e is a small constant whose exact value does not influence the result.
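A compact way to obtain such a descriptor is scikit-image's `hog` function; the cell, block and orientation settings below are common defaults rather than the values used in this paper.

```python
from skimage.feature import hog

def hog_descriptor(gray_image):
    """HOG feature vector with 9 unsigned orientation bins, 8x8-pixel cells,
    2x2-cell blocks and L2 block normalization."""
    return hog(gray_image,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm='L2')
```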
Dynamic Mutation based Glowworm Swarm Optimization (DMGSO)
The Glowworm Swarm Optimization (GSO) algorithm is a type of metaheuristic algorithm (12). Conventional GSO has limitations such as slow convergence and the need for more time to perform a global search. In this proposed research work, DMGSO is utilized for optimal feature selection; the proposed DMGSO algorithm uses a mutation strategy to overcome the drawbacks of conventional GSO.
In GSO, a swarm of glowworms is initialized randomly; each agent carries a luminescence quantity (luciferin) and is attracted to other agents according to their luciferin intensity. A higher luciferin intensity represents a better solution at the current location. In each epoch, the glowworm positions change based on this brightness. The detailed steps of DMGSO are given below:
1. Glowworm initialization
2. Luciferin update phase
3. Movement phase
4. Neighbourhood range update phase
• Glowworm initialization: glowworms are initialized randomly and the epoch counter is set to 1.
• Luciferin update phase: the fitness of each glowworm is calculated; if the current fitness is better than the previous value, the position and luciferin are updated. The luciferin update rule (with the objective function evaluated on the selected features) is given by Equation (10): l_i(t) = (1 - ρ) l_i(t - 1) + γ J_i(t), where l_i(t) is the luciferin of glowworm i at time t, ρ is the luciferin decay constant (0 < ρ < 1), γ represents the luciferin enhancement constant, and J_i(t) is the objective function value.
• Movement phase: each glowworm searches for a neighbour with a higher luciferin value by a probabilistic mechanism and moves towards it. For glowworm i, the probability of moving towards neighbour j is p_ij(t) = (l_j(t) - l_i(t)) / Σ_{k∈N_i(t)} (l_k(t) - l_i(t)), where N_i(t) = {j : d_ij(t) < r_d^i(t), l_i(t) < l_j(t)} is the set of neighbours of glowworm i, r_d^i(t) is the variable local-decision domain, and d_ij(t) represents the Euclidean distance between glowworms i and j at time t.
The glowworm then moves according to x_i(t + 1) = x_i(t) + s (x_j(t) - x_i(t)) / ‖x_j(t) - x_i(t)‖, where x_i(t) is the location of glowworm i at time t, s is the step size, and ‖·‖ is the Euclidean norm operator.
• Neighbourhood range update phase: in the GSO algorithm, the local-decision domain is updated using Equation (14): r_d^i(t + 1) = min{ r_s, max{ 0, r_d^i(t) + β (n_t - |N_i(t)|) } }, where β is a constant, n_t is a parameter that controls the number of neighbours, and r_s is the glowworm's sensor range.
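A minimal NumPy sketch of one GSO iteration with these update rules follows; the fitness function, parameter values and feature encoding are assumptions, not the paper's settings.

```python
import numpy as np

def gso_step(positions, luciferin, r_d, fitness, rho=0.4, gamma=0.6,
             step=0.03, beta=0.08, n_t=5, r_s=3.0):
    """One iteration of glowworm swarm optimization: luciferin update,
    probabilistic movement toward a brighter neighbour, range update."""
    n = len(positions)
    luciferin = (1 - rho) * luciferin + gamma * np.array([fitness(p) for p in positions])
    new_positions = positions.copy()
    for i in range(n):
        dists = np.linalg.norm(positions - positions[i], axis=1)
        neighbours = np.where((dists < r_d[i]) & (luciferin > luciferin[i]))[0]
        if neighbours.size:
            weights = luciferin[neighbours] - luciferin[i]
            j = np.random.choice(neighbours, p=weights / weights.sum())
            direction = positions[j] - positions[i]
            new_positions[i] = positions[i] + step * direction / np.linalg.norm(direction)
        # Neighbourhood range update (Equation (14)).
        r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - neighbours.size)))
    return new_positions, luciferin, r_d
```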
Dynamic mutation strategy
The dynamic mutation strategy is applied to the global best G_best as V = G_best + F (X_a - X_b), where F is the scale factor and X_a and X_b are two random particles with unequal fitness values in the swarm. The mutation strategy is adopted to improve classification performance: if the fitness of the mutated feature vector is better than that of the current one, the mutated vector is selected as the new best and the positions are updated.
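The sketch below shows this mutation with greedy acceptance; the scale factor and the way a feature subset is encoded as a real-valued vector are assumptions.

```python
import numpy as np

def dynamic_mutation(g_best, swarm, fitness, scale=0.5):
    """Mutate the global best with the scaled difference of two random,
    distinct swarm members; keep the mutant only if it improves fitness."""
    a, b = np.random.choice(len(swarm), size=2, replace=False)
    mutant = g_best + scale * (swarm[a] - swarm[b])
    return mutant if fitness(mutant) > fitness(g_best) else g_best
```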
Algorithm 1: Dynamic mutation based GSO
Classification using enhanced LSTM
LSTM is a special form of Recurrent Neural Network (RNN). Although a standard RNN is a good candidate for complex problems, it suffers from the vanishing gradient problem (13). To overcome this drawback, the LSTM was introduced. An LSTM cell consists of four major parts, namely the input gate, forget gate, output gate and cell activation part. Figure 2 depicts the general structure of an LSTM cell. The input unit receives the signal from the external world, and the forget gate is responsible for eliminating unwanted information.
The input gate of the LSTM is defined as i_t = σ(W_xi x_t + W_hi h_{t-1} + W_ci c_{t-1} + b_i).
The forget gate is defined as f_t = σ(W_xf x_t + W_hf h_{t-1} + W_cf c_{t-1} + b_f).
The cell state is defined as (Equation (18)) c_t = f_t ⊙ c_{t-1} + i_t ⊙ tanh(W_xc x_t + W_hc h_{t-1} + b_c).
The output gate is defined as (Equation (19)) o_t = σ(W_xo x_t + W_ho h_{t-1} + W_co c_t + b_o).
Finally, the hidden state is computed as (Equation (20)) h_t = o_t ⊙ tanh(c_t).
Here tanh is the hyperbolic tangent activation function, x_t is the input at time t, W and b are the weights and biases, σ is the logistic sigmoid function, and i, f, o and c are respectively the input gate, forget gate, output gate and cell state. W_ci, W_cf and W_co denote the weight matrices for the peephole connections. The input gate i, forget gate f and output gate o are responsible for information processing. Equation (18) updates the cell state; the forget gate decides whether the previous information is passed to the next state or not. The output gate computes the outcome of the LSTM using Equation (19), and the hidden state is calculated with Equation (20).
In this proposed work, the bias values are updated using a weighted average of the selected features (the weighted-average bias).
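For reference, a NumPy sketch of a single forward step of a peephole LSTM cell implementing the gate equations above is given below; the weight shapes are assumptions, and the paper's weighted-average bias update is not reproduced.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b, peep):
    """One forward step of a peephole LSTM cell.

    W    : dict of input/recurrent weight matrices, e.g. W['xi'], W['hi'], ...
    b    : dict of gate bias vectors b['i'], b['f'], b['c'], b['o']
    peep : dict of diagonal peephole weight vectors peep['ci'], peep['cf'], peep['co']
    """
    i = sigmoid(W['xi'] @ x + W['hi'] @ h_prev + peep['ci'] * c_prev + b['i'])
    f = sigmoid(W['xf'] @ x + W['hf'] @ h_prev + peep['cf'] * c_prev + b['f'])
    c = f * c_prev + i * np.tanh(W['xc'] @ x + W['hc'] @ h_prev + b['c'])
    o = sigmoid(W['xo'] @ x + W['ho'] @ h_prev + peep['co'] * c + b['o'])
    h = o * np.tanh(c)
    return h, c
```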
Empirical study
The developed system is implemented on the MATLAB platform, and several experiments were conducted in order to assess its efficacy. In this research work, thyroid images were collected from http://cimalab.intec.co/applications/thyroid/. The system has mainly been used for thyroid nodules that are ≥1 cm. The performance of the proposed DMGSO-LSTM and of the existing Histogram, MLP, ILBP-ASO and MACO-MANFIS methods is evaluated by measuring four commonly used metrics: accuracy, precision, recall and f-measure. Figure 3 shows the main menu, and the collection of database images is given in Figure 4. The given input images are preprocessed with the help of the Dynamically Weighted Median Filter (DWMF); the pre-processed image is shown in Figure 5. The pre-processed images are segmented using the localized region-based active contour scheme, and the segmentation results are shown in Figure 6. Figure 7 presents the feature extraction results.
Based on the extracted features, the classification is performed using the Long Short-Term Memory (LSTM) scheme. The output corresponding to the input is shown in Figure 8.
Accuracy
Accuracy is the ratio of the number of correctly classified cases to the total number of cases. Classification accuracy can be defined as:
Accuracy = (True positive + True negative) / (True positive + True negative + False positive + False negative)
True positives are positive cases classified correctly, and true negatives are negative cases classified correctly. False positives are cases incorrectly classified as positive although the actual case is negative, and false negatives are cases incorrectly classified as negative although the actual case is positive. Figure 9 demonstrates the accuracy comparison for the existing and proposed methods, with the methods on the x-axis and accuracy on the y-axis. In this proposed research work, optimal features are selected using the Dynamic Mutation based Glowworm Swarm Optimization (DMGSO) algorithm, which improves the accuracy of the classifier. From the experimental outcomes, it is observed that the proposed system attains 99% accuracy, whereas the Histogram, MLP, ILBP-ASO and MACO-MANFIS methods achieve 89%, 91%, 96% and 98%, respectively.
Precision
Precision is the ratio of true positives to the sum of true positives and false positives. It can be expressed as:
Precision = True positive / (True positive + False positive)
The precision of the proposed DMGSO-LSTM is compared with the existing Histogram, MLP, ILBP-ASO and MACO-MANFIS approaches, with the methods on the x-axis and precision on the y-axis. The experimental results show that the proposed DMGSO-LSTM approach attains 96% precision, whereas the Histogram, MLP, ILBP-ASO and MACO-MANFIS methods provide 86%, 89%, 92% and 94%, respectively.
Recall
Recall is the ratio of true positives to the sum of true positives and false negatives. Mathematically, recall can be defined as:
Recall = True positive / (True positive + False negative)
Figure 11 demonstrates the recall comparison for the methods, with the methods on the x-axis and recall on the y-axis. In this proposed research work, the enhanced LSTM approach is used for classifying the thyroid nodules with the selected features; here, the bias values are updated using a weighted average of the features, which improves the true positive rate. From the experimental outcomes, it is observed that the proposed system attains 95% recall, whereas the Histogram, MLP, ILBP-ASO and MACO-MANFIS methods achieve 81%, 85%, 89% and 91%, respectively.
F-measure
F-measure is used to evaluate the classification quality as the harmonic mean of precision and recall: F-measure = 2 × Precision × Recall / (Precision + Recall).
The proposed DMGSO-LSTM is compared with the existing Histogram, MLP, ILBP-ASO and MACO-MANFIS approaches in terms of f-measure, with the methods on the x-axis and f-measure on the y-axis. Figure 12 demonstrates that the f-measure of the proposed DMGSO with LSTM algorithm reaches 96%, whereas the Histogram, MLP, ILBP-ASO and MACO-MANFIS methods achieve 83%, 86%, 90% and 92%, respectively.
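A small helper computing all four metrics from raw confusion counts is sketched below; the example counts are illustrative only and are not taken from the paper's experiments.

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F-measure from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_measure

# Illustrative counts only.
print(classification_metrics(tp=95, tn=93, fp=4, fn=5))
```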
Conclusion
The proposed system designed a Dynamic Mutation based Glowworm Swarm Optimization with Long Short-Term Memory (DMGSO with LSTM) scheme for thyroid nodule classification. The DWMF was utilized for removing unwanted data from the input images, and the pre-processed images were segmented with a region-based active contour scheme. Three kinds of features, ILBP, GLCM and HOG, were extracted and optimized using the DMGSO algorithm, and an LSTM network was employed for classifying the thyroid nodule. The empirical findings demonstrate that the proposed method yields higher classification accuracy compared to the other existing models.
A simple object-oriented and open source model for scientific and policy analyses of the global carbon cycle – Hector v0.1
Introduction
Projecting the future impacts of anthropogenic perturbations on the climate system relies on understanding the interactions of key earth system processes. To accomplish this, a hierarchy of climate models with differing levels of complexity and resolution is used, ranging from simple energy balance models to fully coupled atmosphere-ocean general circulation models (AOGCMs) (Stocker, 2011).
Simple climate models (SCMs) represent only the most critical global-scale earth system processes with low spatial and temporal resolution, e.g., carbon fluxes between the ocean and atmosphere, and respiration and primary production on land. These models are relatively easy to use and understand, and are computationally inexpensive. Most SCMs have a few key features: (1) calculating future concentrations of greenhouse gases (GHGs) from given emissions, (2) calculating global mean radiative forcing from concentrations, (3) converting the radiative forcing to global mean temperature, and (4) modeling the carbon cycle, an essential part of the climate system (e.g., Wigley, 1991; Meinshausen et al., 2011a; Tanaka et al., 2007b; Lenton, 2000). With these capabilities, SCMs play an integral role in policy and scientific research. For example, energy-economic-climate models or Integrated Assessment Models (IAMs) are used to address issues in energy system planning, climate mitigation and stabilization pathways, land-use change, pollution control, and population policies (Wigley et al., 1996; Edmonds and Smith, 2006; van Vuuren et al., 2011c). AOGCMs are too computationally expensive to use in these analyses; therefore all IAMs have a simple representation of the global climate system in which emissions data from the IAMs are converted to concentrations, and then radiative forcing and global temperature are calculated.
SCMs are also used as emulators of more complex AOGCMs (e.g., Meinshausen et al., 2011a, c; Schlesinger and Jiang, 1990; Challenor, 2012; Ratto et al., 2012). The components of SCMs can be constrained to replicate the overall behavior of the more complex model components. For instance, the climate sensitivity of an SCM can be made equal to that of an AOGCM by altering a single model parameter. One SCM, MAGICC, has been central to the analyses presented in the Intergovernmental Panel on Climate Change (IPCC) reports, emulating a large suite of AOGCMs (Meinshausen et al., 2011a).
Lastly, SCMs are computationally efficient and inexpensive to run, and are therefore used to run multiple simulations of future climate change emissions scenarios, parameter sensitivity experiments, perturbed physics experiments, large ensemble runs, and uncertainty analyses (Senior and Mitchell, 2000; Hoffert et al., 1980; Harvey and Schneider, 1985; Ricciuto et al., 2008; Sriver et al., 2012; Irvine et al., 2012). SCMs are fast enough that multiple scenarios can be simulated and a wide range of parameter values can be tested. Specifically, SCMs have been useful in reducing uncertainties in future CO2 sinks, and in quantifying parametric uncertainties in sea-level rise, ice-sheet modeling, ocean-heat uptake, and aerosol forcings (Ricciuto et al., 2008; Sriver et al., 2012; Applegate et al., 2012; Urban and Keller, 2009). This study introduces Hector v0.1, an object-oriented simple climate carbon-cycle model. Hector is open source, an important quality given that the scientific community, funding agencies, and journals are increasingly emphasizing transparency and open source software (E. P. White, 2013; Heron et al., 2013). With an open source model, a large community of scientists can access, use, and enhance it, with the potential for long-term utilization and reproducibility (Ince et al., 2012).
One of the basic questions faced in developing an SCM is how much detail of the climate system should be represented. Our goal is to introduce complexity only where warranted, keeping the representations of the climate system as simple as possible. This results in fewer calculations, faster execution times, and easier analysis and interpretation of results. Sections 2, 3, and 4 describe the structure and components of Hector. Sections 5 and 6 describe the experiments, results, and comparison of Hector against other models (MAGICC and CMIP5).
Overall structure and design
Hector is written in C++ and uses an object-oriented design that enforces clean separation between its different parts, which interact via strictly defined interfaces. The separation keeps each software module self-contained, which makes the code easy for users to understand, maintain, and enhance. Entities in the model include a command-line wrapper, the model coupler, various components organized around scientific areas (carbon cycling, radiative forcing, etc.), and visitors responsible for model output. Each of these is discussed below.
Model coupler
Hector's control flow starts with the coupler, which is responsible for: (1) parsing and routing input data to the model components, (2) tracking how the components depend on each other, (3) passing messages and data between components, (4) providing facilities for logging, time series interpolation, etc., and (5) controlling the main model loop as it progresses through time. Any errors thrown by the model are caught by the wrapper, which prints a detailed summary of the error.
Input data are specified in flat text files, and during startup are routed to the correct model component for its initialization. Some of the key initial model conditions are summarized in Tables 1 and 2. For more details of the initial model conditions we urge the reader to download Hector v0.1 (https://github.com/JGCRI/hector). Components can send messages to each other during the model run, most often requesting data. The coupler handles message routing (via the capability mechanism, below) and enforces mandatory type checking: e.g., if a component requests mean global temperature in °C but the data are provided in K, an error will be thrown (i.e., execution halts) unless the receiving component can handle this situation.
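Hector itself is written in C++; purely as an illustration of this message-routing-with-unit-checking idea (and not of Hector's actual API), a minimal Python sketch might look as follows.

```python
class Coupler:
    """Toy coupler: routes data requests to the component that registered
    the capability, and refuses to return data in the wrong units."""

    def __init__(self):
        self._providers = {}   # capability name -> (component, units)

    def register_capability(self, name, component, units):
        self._providers[name] = (component, units)

    def send_message(self, name, expected_units):
        component, units = self._providers[name]
        if units != expected_units:
            raise ValueError(f"{name} is provided in {units}, not {expected_units}")
        return component.get_data(name)
```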
Visitors (following the visitor pattern) are units of code that traverse all model components and handle model output (Martin et al., 1997). Two visitors currently exist: one saves an easily readable summary table to an output file, while the other writes a stream of model data (both standard outputs and internal diagnostics). After the model has finished running, this "stream" file can be parsed and summarized by R scripts included with the code (R Development Core Team, 2014). Log files may also be written by any model entity, using facilities provided by the coupler. The full sequence of events during a model run is summarized in Fig. 1.
Components
Model components are submodels that communicate with the coupler. From the coupler's point of view, components are fully defined by their capabilities and dependencies. At model startup, before the run begins, components inform the coupler of their capabilities, i.e., what data they can provide to the larger model system. The coupler uses this information to route messages between components, such as requests for data. Components also register their dependencies, i.e., what data they require in order to perform their computations. After initialization, but before the model begins to run, the coupler uses this dependency information to determine the order in which components will be called in the main control loop. The model's modular architecture, together with the capability/dependency system described above, allows model components to be swapped, enabled, or disabled directly via the input files without recompiling. For example, a user can test two different ocean submodels and easily compare results without having to rebuild the model.
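The dependency-driven call ordering described above is, in essence, a topological sort of the component graph. The sketch below shows one way such an ordering could be computed in C++ (Kahn's algorithm); the component names are hypothetical and this is an illustration of the general idea rather than Hector's implementation.

    #include <iostream>
    #include <map>
    #include <queue>
    #include <string>
    #include <vector>

    int main() {
        // Hypothetical components and the components whose output each one requires.
        std::map<std::string, std::vector<std::string>> deps = {
            {"carbon-cycle", {}},
            {"forcing", {"carbon-cycle"}},
            {"temperature", {"forcing"}},
            {"ocean-heat", {"temperature"}},
        };

        // Kahn's algorithm: repeatedly emit components whose dependencies are satisfied.
        std::map<std::string, int> remaining;                        // unmet dependency counts
        std::map<std::string, std::vector<std::string>> dependents;  // reverse edges
        for (const auto& entry : deps) {
            remaining[entry.first] = static_cast<int>(entry.second.size());
            for (const auto& req : entry.second) dependents[req].push_back(entry.first);
        }

        std::queue<std::string> ready;
        for (const auto& entry : remaining)
            if (entry.second == 0) ready.push(entry.first);

        std::vector<std::string> callOrder;
        while (!ready.empty()) {
            std::string c = ready.front();
            ready.pop();
            callOrder.push_back(c);
            for (const auto& d : dependents[c])
                if (--remaining[d] == 0) ready.push(d);
        }

        if (callOrder.size() != deps.size()) {
            std::cerr << "circular dependency detected\n";
            return 1;
        }
        for (const auto& c : callOrder) std::cout << c << '\n';
        return 0;
    }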
Time step, spinup, and constraints
The model's fundamental time step is 1 year, although the carbon cycle can operate on a finer resolution when necessary (Sect. 2.6.1). When the model is on an integer date (e.g., 1997.0) it is considered to be at the midpoint of that particular calendar year, in accordance with Representative Concentration Pathway (RCP) data (Meinshausen et al., 2011b).
Like many models, Hector has an optional "spinup" step, in which the model runs to equilibrium in an ahistorical, perturbation-free mode (Pietsch and Hasenauer, 2006). This occurs after model initialization, but before the historical run begins, and ensures that the model is in steady state when it enters the main simulation. During spinup, the coupler repeatedly calls all the model components in their dependency-driven ordering, using an annual time step. Each component signals whether it needs further steps to stabilize, and this process repeats until all components signal that they are complete.
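A schematic C++ sketch of the spinup loop just described: the coupler keeps cycling through the components until none of them reports that it needs further steps. The Component interface and the toy relaxing carbon pool are invented for illustration and are not Hector's classes.

    #include <cmath>
    #include <iostream>
    #include <memory>
    #include <vector>

    // Hypothetical component interface: returns true while more spinup steps are needed.
    struct Component {
        virtual ~Component() {}
        virtual bool spinupStep(int step) = 0;
    };

    // A toy carbon pool relaxing toward equilibrium; converged when the annual flux is small.
    struct ToyCarbonPool : Component {
        double pool = 500.0;         // Tg C (illustrative)
        double equilibrium = 600.0;  // Tg C (illustrative)
        double epsilon = 1.0;        // Tg C / yr convergence criterion
        bool spinupStep(int) override {
            double flux = 0.05 * (equilibrium - pool);
            pool += flux;
            return std::fabs(flux) > epsilon;   // true => not yet converged
        }
    };

    int main() {
        std::vector<std::unique_ptr<Component>> components;
        components.push_back(std::unique_ptr<Component>(new ToyCarbonPool()));

        int step = 0;
        bool anyNeedsMore = true;
        while (anyNeedsMore) {   // the coupler keeps cycling until every component is stable
            anyNeedsMore = false;
            ++step;
            for (auto& c : components)
                if (c->spinupStep(step)) anyNeedsMore = true;
        }
        std::cout << "spinup converged after " << step << " steps\n";
        return 0;
    }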
Currently only the model's carbon cycle makes use of spinup. Spinup takes place prior to any land-use change or industrial emission inputs. The main carbon cycle moves from its initial, user-defined carbon pool values to a steady state in which dC/dt < ε for all pools; the convergence criterion ε is user-definable and by default 1 Tg C yr−1. From its default values the preindustrial carbon cycle will typically stabilize in 300-400 time steps.
The model can be constrained, i.e., its output matched to a user-supplied time series, to allow isolation and testing of different components. Available constraints currently include atmospheric CO2, global temperature anomaly, total ocean-atmosphere carbon exchange, total land-atmosphere carbon exchange, and total radiative forcing. Most constraints operate by overwriting model-calculated values with user-supplied time series data during the run. The atmospheric [CO2] constraint operates slightly differently, as the global carbon cycle is subject to a continuous mass-balance check. As a result, when the user supplies a [CO2] record between arbitrary dates and orders the model to match it, the model computes [CO2] at each time step, and any deficit (surplus) in comparison with the constraint [CO2] is drawn from (added to) the deep ocean. The deep ocean holds the largest reservoir of carbon; therefore, small changes in this large pool have a negligible effect on the carbon cycle dynamics. When the model exits the constraint time period, [CO2] again becomes fully prognostic.
Code availability and dependencies
All Hector code is open source and available at https://github.com/JGCRI/hector. The repository includes model code that can be compiled on Mac, Linux, and Windows, input files for the four Representative Concentration Pathway (RCP) cases discussed in Sect. 4, R scripts to process model output, and documentation. We kept the dependencies as limited as possible: only the GNU Scientific Library (GSL, Gough, 2009) and the Boost C++ libraries (http://www.boost.org) are required. An optional unit-testing build target requires the googletest framework (http://code.google.com/p/googletest); however, this is not needed to compile and run Hector. HTML documentation can be automatically generated from the code using the Doxygen tool (http://www.doxygen.org). All these tools and libraries are free and open source.
Main carbon cycle
In the model's default terrestrial carbon cycle, terrestrial vegetation, detritus, and soil are linked with each other and the atmosphere by first-order differential equations (Fig. 2). Vegetation net primary production is a function of atmospheric [CO2] and temperature. Carbon flows from the vegetation to detritus and then soil, losing fractions to heterotrophic respiration on the way. Land-use emissions are specified as inputs. An "earth" pool debits carbon emitted as anthropogenic emissions, allowing a continual mass-balance check across the entire carbon cycle. More formally, any change in atmospheric carbon, and thus [CO2], occurs as a function of anthropogenic emissions, land-use change emissions, and the atmosphere-ocean and atmosphere-land carbon fluxes. The atmosphere is treated as a single well-mixed box whose rate of change is set by F_A, the anthropogenic emissions; F_LC, the land-use change emissions; and F_O and F_L, the atmosphere-ocean and atmosphere-land fluxes. The overall terrestrial carbon balance at time t is the difference between net primary production (NPP) and heterotrophic respiration (RH). This is summed over user-specified n groups (each typically regarded as a latitude band, biome, or political unit), with n ≥ 1. Note that NPP here is assumed to include disturbance effects, for which there is currently no separate term. For each biome i, NPP and RH are computed as functions of their preindustrial values NPP0 and RH0, current atmospheric carbon C_atm, and the biome's temperature anomaly T_i. These are commonly used formulations: NPP is modified by the user-specified carbon fertilization parameter β (Piao et al., 2013), and RH changes are controlled by a biome-specific Q10 value. Biomes can experience temperature changes at rates that differ from the global mean T_G, controlled by a user-specified temperature factor δ_i.
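In symbols, a formulation consistent with the definitions above (the exact functional forms used in Hector v0.1 may differ) is:

    dC_atm/dt = F_A(t) + F_LC(t) − F_O(t) − F_L(t)

    F_L(t) = Σ_i [ NPP_i(t) − RH_i(t) ],   i = 1 … n

    NPP_i(t) = NPP0_i [ 1 + β ln( C_atm(t) / C_atm,0 ) ],   RH_i(t) = RH0_i Q10^( T_i(t)/10 ),   T_i(t) = δ_i T_G(t)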
Land carbon pools (vegetation, detritus, and soil) change as a result of NPP, RH, and land-use change fluxes, whose effects are partitioned among these carbon pools. In addition, carbon flows from vegetation to detritus and soil (Fig. 2). Partitioning fractions (f) control the flux quantities between pools (Table 2). For simplicity, Eqs. (8)-(10) omit the time t and biome-specific i notations, but each pool is tracked separately for each biome at each time step. The ocean-atmosphere carbon flux is the sum of the ocean's surface fluxes (currently n = 2: a high-latitude and a low-latitude surface box).
The surface fluxes of each individual box are calculated from an ocean chemistry model described in detail by Hartin et al. (2014), based on equations from Zeebe and Wolf-Gladrow (2001). The flux of CO2 for each box i is calculated from k, the CO2 gas-transfer velocity; α, the solubility of CO2 in water based on salinity, temperature, and pressure; and ΔpCO2, the atmosphere-ocean gradient of pCO2 (Takahashi et al., 2009). At steady state, the cold high-latitude surface box (> 55°, subpolar gyres) acts as a sink of carbon from the atmosphere, while the warm low-latitude surface box (< 55°) outgasses carbon back to the atmosphere. Temperatures of the surface boxes are linearly related to atmospheric global temperatures (see Sect. 4.1): T_HL = ΔT − 13 and T_LL = ΔT + 7 (Lenton, 2000). The ocean model, modeled after Lenton et al. (2000) and Knox and McElroy (1984), circulates carbon through four boxes (two surface, one intermediate depth, one deep) via water mass advection and exchange, simulating a simple thermohaline circulation (Fig. 2). At steady state, approximately 100 Pg of carbon are transferred from the high-latitude surface box to the deep box, based on the volume of the box and the transport in Sv (10^6 m^3 s^−1) between the boxes. The change in carbon of any box i is given by the fluxes in and out. As the model advances, the carbon (dissolved inorganic carbon, DIC) values change in each box. The new DIC values are used within the chemistry submodel to calculate pCO2 values at the next time step.
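In symbols, the bulk flux described here takes the standard form

    F_i = k α ΔpCO2_i

with unit conversions and surface-area factors omitted for brevity; the exact expression used in Hector v0.1 may include additional scaling terms.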
Adaptive time step solver
The fundamental time step in Hector is currently one year, and most model components are solved at this resolution. The carbon cycle, however, can operate on a variable time step, helping to stabilize it under particularly high-emissions scenarios. This will also allow future sub-annual applications where desired. The adaptive time step is accomplished using the gsl_odeiv2_evolve_apply solver routine of GSL 1.16, which attempts many different step sizes to reliably (i.e., with acceptable error) advance the model. Thus all the carbon cycle components handle indeterminate time steps ≤ 1 yr, and can signal the solver if a too-large time step is leading to instability. The solver then retries the solution using a series of smaller steps. From the coupler's point of view, however, the entire model continues to advance in annual increments.
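As an illustration of adaptive sub-annual stepping with GSL, the sketch below integrates a single toy carbon pool over annual intervals using GSL's ODE driver interface (the text above refers to the lower-level gsl_odeiv2_evolve_apply routine; the driver is used here only for brevity). The pool size and decay rate are invented for the example and are not Hector parameters.

    #include <cstdio>
    #include <gsl/gsl_errno.h>
    #include <gsl/gsl_odeiv2.h>

    // Toy right-hand side: a single pool decaying at rate kdecay (illustrative only).
    int rhs(double /*t*/, const double y[], double dydt[], void* params) {
        double kdecay = *static_cast<double*>(params);
        dydt[0] = -kdecay * y[0];
        return GSL_SUCCESS;
    }

    int main() {
        double kdecay = 0.02;                          // 1/yr, illustrative
        gsl_odeiv2_system sys = { rhs, nullptr, 1, &kdecay };

        // Adaptive Runge-Kutta-Fehlberg stepping; GSL subdivides each year as needed.
        gsl_odeiv2_driver* d = gsl_odeiv2_driver_alloc_y_new(
            &sys, gsl_odeiv2_step_rkf45, 1e-3 /*initial step*/, 1e-8 /*abs err*/, 1e-8 /*rel err*/);

        double t = 0.0;
        double y[1] = { 600.0 };                       // Pg C, illustrative
        for (int year = 1; year <= 5; ++year) {        // the coupler still sees annual steps
            if (gsl_odeiv2_driver_apply(d, &t, static_cast<double>(year), y) != GSL_SUCCESS) {
                std::fprintf(stderr, "solver failed at year %d\n", year);
                break;
            }
            std::printf("year %d: pool = %.3f\n", year, y[0]);
        }
        gsl_odeiv2_driver_free(d);
        return 0;
    }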
4 Other components
Global atmospheric temperature
Near-surface global atmospheric temperature is calculated from the total radiative forcing RF and the ocean heat flux F_H, scaled by the user-specified climate feedback parameter λ. The feedback parameter is defined as λ = F_2xCO2 / S, where S is the equilibrium climate sensitivity (3 K) and F_2xCO2 is the radiative forcing for a doubling of CO2 (3.7 W m−2) (Knutti and Hegerl, 2008). F_H is calculated by a simple expression involving the ocean heat uptake efficiency k (W m−2 K−1) and the atmospheric temperature change prior to the ocean's removal of heat from the atmosphere (Raper et al., 2002).
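A plausible reading of the expressions described above (not necessarily Hector's exact formulation) is

    T_G(t) = [ RF(t) − F_H(t) ] / λ

    F_H(t) = k ΔT'(t),   with ΔT'(t) = RF(t) / λ

where ΔT' denotes the atmospheric temperature change prior to the ocean's removal of heat from the atmosphere.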
Radiative forcing
Radiative forcing is calculated from a series of atmospheric greenhouse gases, aerosols, and pollutants (Eqs. 17, 19-25, 27). Radiative forcing is reported as the relative radiative forcing: the user-specified base-year forcings are subtracted from the total radiative forcing to yield a forcing relative to the base year. In the current version of Hector, the gases other than CO2 are used only for the calculation of radiative forcing.
Radiative forcing by halocarbons, other gases controlled under the Montreal Protocol, SF6, and ozone is calculated via an expression in which α is the radiative efficiency in W m−2 ppbv−1 and C is the atmospheric concentration.
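In the standard linear treatment implied by these definitions, the forcing for each such gas i is

    ΔF_i = α_i ( C_i − C_i,0 )

with C_i in ppbv; whether Hector subtracts a preindustrial concentration C_i,0 or simply uses α_i C_i cannot be confirmed from the text.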
Tropospheric ozone is calculated from the CH4 concentration and the emissions of three primary pollutants (NOx, CO, and NMVOCs), where the constants are the ozone sensitivity factors for each of the precursors (Ehhalt et al., 2001). The radiative forcing of tropospheric ozone is calculated from a linear relationship using a radiative efficiency factor (Joos et al., 2001) and a pre-industrial ozone value of 25 DU (IPCC, 2001).
BC and OC
The radiative forcing from black carbon and organic carbon is a function of the black carbon and organic carbon emissions (eBC and eOC).
Sulphate aerosols
The radiative forcing from sulphate aerosols is a combination of the direct and indirect forcings (Joos et al., 2001).
The direct forcing by sulphate aerosols is proportional to the anthropogenic sulphur emissions (Gg S yr−1) divided by the sulphate emissions from the year 2000. The indirect forcing by sulphate aerosols is a function of the anthropogenic and natural sulphur emissions; natural sulphur emissions, denoted eSN, are set to 42 000 Gg S. A time series of annual mean volcanic stratospheric aerosol forcing (W m−2) is supplied from Meinshausen et al. (2011b) and is added to the indirect and direct forcings for a total sulphate forcing.
N2O and CH4
The radiative forcing equations for CH4 and N2O (Joos et al., 2001) are a function of the concentrations (ppbv) and their radiative efficiencies. The function f accounts for the overlap of the CH4 and N2O absorption bands. Note that we are not explicitly calculating concentrations of CH4 and N2O within Hector; instead, concentrations are read from input files.
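In the widely used simplified form tabulated in IPCC (2001), consistent with the 0.036 coefficient cited in the next subsection (M and N are the CH4 and N2O concentrations in ppbv, with subscript 0 denoting preindustrial values; whether Hector v0.1 uses exactly these coefficients is not stated here):

    ΔF_CH4 = 0.036 ( √M − √M_0 ) − [ f(M, N_0) − f(M_0, N_0) ]

    ΔF_N2O = 0.12 ( √N − √N_0 ) − [ f(M_0, N) − f(M_0, N_0) ]

    f(M, N) = 0.47 ln[ 1 + 2.01×10^−5 (MN)^0.75 + 5.31×10^−15 M (MN)^1.52 ]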
Stratospheric H2O from CH4 oxidation
The radiative forcing from stratospheric H2O is a function of the CH4 concentration (Tanaka et al., 2007a). The coefficient 0.05 is from Joos et al. (2001), based on the fact that the forcing contribution from stratospheric H2O is about 5 % of the total CH4 forcing (IPCC, 2001). The 0.036 coefficient corresponds to the same coefficient used in the CH4 radiative forcing equation.
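Combining the two coefficients gives the implied expression (a reconstruction, not quoted from the model documentation)

    ΔF_H2O,strat ≈ 0.05 × 0.036 ( √M − √M_0 )

i.e., five percent of the simplified CH4 forcing.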
Model experiments and data sources
A critical test of Hector's performance is to compare the major climatic variables calculated in Hector, e.g., atmospheric [CO2], radiative forcing, and atmospheric temperature, to observational records and other models. We run Hector under historical conditions from 1850 to 2005 and then under all four Representative Concentration Pathways (RCPs) out to 2300 (Moss et al., 2010). The RCPs are plausible future scenarios developed to improve our understanding of the coupled human-climate system. All necessary emission and concentration inputs are from the four RCPs (RCP 2.6, RCP 4.5, RCP 6.0 and RCP 8.5), freely available at http://www.pik-potsdam.de/~mmalte/rcps/ (Meinshausen et al., 2011b; Riahi et al., 2011; van Vuuren et al., 2011a, b, d; Masui et al., 2011; Thomson et al., 2011).
Comparison data are obtained from a series of models. We compared Hector results to MAGICC, a SCM widely used in the scientific and IAM communities, for global variables such as atmospheric CO2, radiative forcing, and temperature (e.g., Raper et al., 2001; Wigley, 1995; Meinshausen et al., 2011a). We also compare Hector to a suite of eleven Earth System Models included in the Coupled Model Intercomparison Project (CMIP5) archive (Taylor et al., 2012) (Table 3). All CMIP5 data are converted to yearly global averages from the historical period through the RCPs and their extensions. One SD of the CMIP5 model spread is calculated for each variable. All CMIP5 variables used in this study are from model runs with prescribed atmospheric concentrations, except for comparisons involving atmospheric [CO2], which are from the emissions-driven scenarios (esmHistorical and esmRCP8.5). The models that run esmRCP8.5 are typically Earth system models used to investigate the carbon cycle in further detail.
Historical
A critical test of Hector's performance is how well it compares to historical and present-day climate from observations, MAGICC, and a suite of CMIP5 models. We carried out a few statistical tests on Hector (e.g., correlation and root mean square error), which are summarized in Table 4. After spinup is complete in Hector, the atmospheric [CO2] in 1850 is 286.0 ppmv, comparing well with observations from Law Dome of 285.2 ppmv. Compared to observations, MAGICC6, and CMIP5 data from 1850 to 2004, Hector captures the global trends in atmospheric [CO2] (Fig. 3) with correlation coefficients of R > 0.99 and an average root mean square error (RMSE) of 2.6 ppmv (Table 4a).
Hector has the ability to match atmospheric [CO2] records, but we disabled this feature to highlight the full performance of the model.
Historical global atmospheric temperature anomalies (relative to 1850) are compared across Hector, MAGICC6, CMIP5, and observations from HadCRUT4 (Fig. 4). Hector is run without the effects of volcanic forcing, leading to a smoother representation of temperature with time. Atmospheric temperature change from Hector over the period 1850 to 2004 is well correlated (> 0.8) with observations and models, with an average RMSE of 0.12 °C.
Future projections
Within the modeling community, models that best simulate the historical and present day climate are assumed to be credible under future projections.We are confident in Hector's ability to reproduce historical trends and are therefore confident in its ability to simulate future climate changes.We compare Hector to MAGICC and CMIP5 under differing future climate projections.
Figure 5 highlights historical trends in atmospheric [CO2], along with projections of atmospheric [CO2] under esmRCP8.5 from 1850 to 2100. Hector is perfectly correlated with MAGICC and CMIP5 over this period, with an RMSE of 9.2 ppmv (Table 4b).
Hector and MAGICC6 diverge from the CMIP5 median most notably after 2050, but both are still within the low end of the CMIP5 model spread. Figure 6 compares atmospheric [CO2] from Hector and MAGICC6 under all four RCP scenarios out to 2300. Hector is well correlated with MAGICC6 from 1850 out to 2300 for the four RCPs. Under all of the scenarios except RCP 8.5, atmospheric [CO2] within Hector fluctuates around the MAGICC6 atmospheric [CO2] values, with the most notable fluctuations under low carbon emissions. This is due to changes in the flux of carbon over the land as net primary production and respiration change with CO2 fertilization and temperature effects. We compare Hector to MAGICC6 for changes in radiative forcing under the four RCPs (Fig. 7). Radiative forcing is not an output from the CMIP5 models and therefore we can only compare Hector and MAGICC6. Hector is offset slightly lower compared to MAGICC6, which is expected since its atmospheric [CO2] is slightly lower. Over the period 1850 to 2300 Hector is well correlated (1.0) with MAGICC6, with an RMSE of 0.25 W m−2. We acknowledge that the correlation is lower over the historical period (0.79). This may be due to slight differences in the representation of atmospheric gases, pollutants, and aerosols between the two models.
Figure 8 compares global temperature anomalies from Hector to MAGICC6 and CMIP5 over the four RCPs, from 2005 to 2300. Hector and MAGICC6 are comparable in their temperature change across the four RCPs. However, both are lower than the CMIP5 median under RCP 2.6, 4.5 and 8.5, with the largest discrepancy under the high CO2 emissions of RCP 8.5. Regardless, Hector is still highly correlated (> 0.97) with MAGICC6 and CMIP5 for RCP 8.5, with an RMSE of 0.52 °C compared to CMIP5 (Table 4c). The fluctuations seen under RCP 2.6 in atmospheric [CO2] are also apparent in the atmospheric temperature trends. However, the general trends of temperature change, peaking around 2050 and then slowly declining out to 2300, are captured within Hector.
Another way to visualize model performance is a Taylor diagram (Fig. 9) of global temperature change relative to 1850, from 1850 to 2300 for RCP 8.5. The closer the points are to the reference point (Hector), the higher the correlation and the lower the RMSE between the CMIP5 models or MAGICC6 and Hector. Points with a SD similar to that of Hector experience the same amplitude of temperature change over this time period (MAGICC6). All of the models are highly correlated with Hector, with a large range in the SD (1-5 °C).
Figures 10 and 11 present a detailed view of carbon fluxes under RCP 8.5, for CMIP5 and observations. The ocean is a major sink of carbon through 2100, becoming less effective with time in both Hector and the CMIP5 models. MAGICC6 does not include air-sea fluxes in its output, and because it is not open source we were unable to obtain these values. Therefore, we compare air-sea fluxes of CO2 to MAGICC5.3, the version currently used in the integrated assessment model Global Change Assessment Model, updated with explicit BC and OC forcing as described in Smith and Bond (2014). The correlation is high between Hector and CMIP5 over the historical period (0.95). However, the correlation drops off significantly between 2005 and 2300 (0.10) (Table 4c). Investigating the differences between Hector and CMIP5 after 2100 is an active area of research. One potential reason for the low correlation after 2100 could be that we are only comparing to the three models that run the RCP extension to 2300 (bcc-csm1-1, IPSL-CM5A-LR, and MPI-ESM-LR); with a larger spread of fluxes, Hector may be better correlated. The average correlation over the CMIP5 models over 1850-2300 is higher, at 0.80, with an RMSE of 1.45 Pg C yr−1 (Table 4b). The land fluxes have a large range of uncertainty into the future within the CMIP5 models. Hector follows the general trend of the land acting as a sink of carbon initially, with a gradual switch to a carbon source after 2150. Fluxes of carbon over the land are less well correlated to the CMIP5 median compared to the air-sea fluxes: 0.55 (historical) and 0.65 (RCP 8.5). Both land and ocean fluxes within Hector agree well with the observations from Le Quéré et al. (2013). Lastly, a unique feature of Hector is its ability to actively solve the carbonate system in the upper ocean. This feature allows us to predict ocean acidification, calcium carbonate saturations, and other parameters of the carbonate system. Figure 12 shows low-latitude (< 55°) pH for Hector compared to CMIP5 and observations from 1850 to 2100 under RCP 8.5. We see a significant drop in pH from present day through 2100.
Conclusions
Hector reproduces the large-scale couplings and feedbacks of the climate system between the atmosphere, ocean, and land. Hector falls within the range of the CMIP5 model spread and tracks well with MAGICC. Our goal was not to simulate the fine details or parameterizations typically found in large-scale complex models, but instead to represent only the most critical global processes. This allows for fast execution times, ease of understanding, and straightforward analysis of the model output. To help with the analysis of Hector, we have included in Hector's online repository R scripts to process Hector's output as well as the comparison data.
Hector's two key features are its open source license and modular design.This allows the user to manipulate the input files, enable/disable/replace components, or include components not found within the core version of Hector.For example, the user can design a new submodel (e.g., sea-ice) to answer specific climate questions relating to that process.Because of these critical features, Hector has the potential to be a key analytical tool in both the policy and scientific communities.We welcome user input and encourage use, modifications, and collaborations with Hector.
While Hector has many strengths, there are a few limitations that later versions of Hector hope to address. For example, Hector does not have differential radiative forcing and atmospheric temperature calculations over land and ocean. The land responds to changes in emissions of greenhouse gases and aerosols much more quickly than the ocean, leading to different temperature responses over the land and ocean. Also, Hector does not explicitly deal with oceanic heat uptake: surface temperatures are calculated based on a linear relationship with atmospheric temperature, and heat uptake by the ocean is parameterized by a constant heat uptake efficiency. While Hector can reproduce global trends in atmospheric CO2 and temperature, we cannot investigate deep-ocean heat uptake using Hector. Currently, there is a placeholder in Hector for a more sophisticated sea-level rise submodel. The current edition of Hector uses inputs of concentrations of CH4 and N2O to calculate radiative forcing from CH4 and N2O. Ideally, we would like Hector to calculate concentrations from emissions of CH4 and N2O; this would allow for quick integration within IAMs. Future plans with Hector include addressing some of the above limitations and conducting numerous scientific experiments, using Hector as a stand-alone simple climate carbon-cycle model. Also, Hector will be incorporated into Pacific Northwest National Laboratory's Global Change Assessment Model to begin running policy-relevant experiments. Hector has the ability to be a key analytical tool used across many scientific and policy communities due to its modern software architecture, open source license, and object-oriented structure.
Figure 1. Model phases for the coupler (left) and a typical component (right). Arrows show flow of control and data. The greyed spinup step is optional.
Table 1. Initial model conditions prior to spinup, assuming a pre-industrial steady state.
Table 2. Model parameters for the land and ocean carbon components.
Kolmogorov Flow: Seven Decades of History
The Kolmogorov flow (k-flow) is generated by a stationary sinusoidal force that varies in space. This flow is rather academic, since such a periodic forcing in an unbounded flow is unlikely to appear in nature. Nevertheless, it allows for simple experimental measurements and for a fairly detailed analytical treatment. Although simple, the k-flow makes a good test case for simultaneously investigating inhomogeneous, sheared, and anisotropic features in a flow, and several studies concerning the stability, transition, and turbulence of the k-flow have been published. The present article reviews the most important published works incorporating the k-flow as a test-bed for studying fluid mechanics, testing numerical or experimental methods, or even studying the properties of the k-flow itself.
Introduction
Near the end of the 1950s, A.N. Kolmogorov shifted his focus towards the study of two-dimensional incompressible flows using a specific high-wavenumber forcing, and created the so-called Kolmogorov flow (k-flow). The Kolmogorov flow can be defined as a two-dimensional, unidirectional shear flow with a specific sinusoidal mean velocity profile (U = sin z), which must be maintained by some form of external forcing inside a viscous fluid; here z stands for the cross-stream coordinate. With U = sin z, Kolmogorov's basic intent was to study and understand the transitions and complexities of turbulence, in addition to the energy cascading process.
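For reference, maintaining the profile U = sin z against viscous diffusion requires a body force of the same sinusoidal shape: in the steady, unidirectional Navier-Stokes balance for a fluid of kinematic viscosity ν,

    0 = ν d²U/dz² + f(z)   ⇒   f(z) = ν sin z   for U(z) = sin z,

which is the stationary sinusoidal force referred to throughout this review.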
In most studies on turbulence, the meaning and intent of Kolmogorov's ideas are widely recognized. Since his pioneering ideas on locally isotropic turbulence, a number of measurements have been made to assess the characteristics of turbulence using different natural media, including the atmosphere, large wind tunnels, the ocean, etc. [1] [2]. These measurements have been effective in confirming the different predictions of the multidimensional theory that was proposed by Kolmogorov [3]. Since then, the k-flow has been attracting a lot of attention, and considerable progress and advancements have been made, both in the experimental and the theoretical aspects of the flow. Referring to the review article presented by Obukhov [4] and the studies conducted by Meshalkin and Sinai [5], it can be said that the k-flow belongs to a more diverse class of large-scale fluid instabilities. The Kolmogorov flow is more often defined as a form of sinusoidal flow, whether the fluid being investigated is viscous or not. It is because of its simplicity, effectiveness, and accessibility in terms of analysis that Kolmogorov termed this flow optimal for investigation in either theoretical or laboratory settings. The flow is also appropriate for conducting an investigation of fluid instability together with the transition towards turbulence [6]. Furthermore, different electrolytic fluids, soap films, and other materials have been helpful in offering experimental measurements and realizations of this flow [3]. The k-flow has also been investigated and studied extensively in the field of magnetohydrodynamics (MHD), because suitably placed electric and magnetic fields can reproduce the k-flow fairly easily. It is used for the study of the varying dynamics of all types of electrically conductive fluids, electrolytes, different liquid metals, and plasmas. MHD turbulence and turbulent k-flow share common dynamical features, like the quasi-2D basic flow pattern and the inverse kinetic energy cascade, and, thus, the k-flow has offered a test-bed for studies on fluid dynamics and flows in the field of MHD [7].
Characterization of Turbulent Flows
Turbulence is surely one of the most complex concepts and phenomena in physics, and for this reason it has been the subject of a large number of studies. It can be said that a lot more needs to be done to grasp the complexities of turbulence. Researchers reaffirm that the study of turbulence is not easy because it demands a firm grip on the concepts of mathematics and physics. Despite the endless number of hypotheses that have been proposed in this regard, only a few of them have been able to generate definite predictions in terms of the ubiquitous nature of turbulence. It is also evident that no single universal theory of turbulence has managed to provide accurate and deterministic predictions and applications of turbulence. Thus, it can be said that the true nature, cause, and mechanisms of turbulence need to be assessed and explored for shaping the future of fluid dynamics. In the first few decades of the 20th century, Richardson [8] and Kolmogorov [6] formulated a theory of turbulence using the concept of the energy cascade. Kinetic energy is transferred at a roughly constant rate from the large eddies towards the smaller eddies, until the point at which viscous action effectively dissipates the kinetic energy (KE). For incompressible Newtonian fluids, the following formula can be used for the kinetic energy dissipation [9], where S_ij is the rate-of-strain tensor and ν is the kinematic viscosity coefficient.
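For an incompressible Newtonian fluid, the standard expression for this dissipation is

    ε = 2ν ⟨ S_ij S_ij ⟩,   S_ij = (1/2)( ∂u_i/∂x_j + ∂u_j/∂x_i ),

where the angle brackets denote an average.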
After an analysis of the Navier-Stokes equations, it can be said that there are a number of rigorous limits on the rate of energy dissipation for incompressible flows [9]. It has also been demonstrated that, in the case of unbounded turbulence, the rate of energy dissipation is influenced by the shape of the forcing. The upper-bound approach adopted from the in-depth analysis of Doering & Constantin [9] [10] revealed a dependence of these bounds on a number of parameters of the k-flow.
Firstly, all turbulent flows happen to be highly irregular. This is the reason why most turbulence problems are dealt with and studied in a statistical manner. Furthermore, turbulent flows are highly chaotic. However, it is important to consider here that not all chaotic flows can be classified as turbulent.
There are always readily available supplies of energy that lead to an increased and rapid homogenization of the variable fluid mixtures. Turbulent flows are marked by an intense and strong three-dimensional vortex-generation mechanism, a phenomenon commonly referred to as vortex stretching. In order to sustain a turbulent flow, a persistent supply of energy is always needed. The main reason is that turbulence always dissipates rapidly, as the resulting kinetic energy is continuously converted into internal energy.
Turbulent fluid flow has always been classified as one of the most complex and fundamental problems in physics. Additionally, it also holds great importance for predictions regarding heat transfer, the weather, ocean currents, etc. In the field of fluid dynamics, turbulence is defined as a flow regime marked by a number of chaotic changes and variations in properties; it includes lower momentum diffusion and rapid variations in pressure and momentum [3]. In the case of turbulent flows, a number of unsteady vortices appear on different scales while interacting with each other. At the same time, both the structure and the location of the boundary layer change, leading to a lower or reduced overall drag. In order to understand turbulent flows in two-dimensional and three-dimensional k-flow regimes, the aforementioned features need to be evaluated [11]. Thus, turbulence is the most complicated unsolved problem in the field of classical physics [12].
Empirical departures from the scaling predictions presented in Kolmogorov's seminal studies and papers have since been observed. Thus, it is now confirmed that the turbulent scales cannot be termed self-similar and that they gradually become more intermittent as the size of the scale decreases. The characterization and visualization of these deviations are yet another contribution of Kolmogorov. The modern theories and explanations of fluid turbulence can be seen as going beyond the Kolmogorov theory. During the past 50 years, a number of sophisticated theoretical descriptions of fluid turbulence were presented; these also include the seminal contributions of Robert Kraichnan [13] [14] [15]. However, a number of researchers agree that his works on statistical field theory are based on the assumptions and insights offered by A.N. Kolmogorov [16].
There has been a wealth of publications from researchers attempting to characterize the k-flow and turbulence during the last century. Most of this research can be summarized through recently published review papers. Two review papers initially presented at the Royal Society summarize a large part of the progress that had been made on the understanding of turbulence up to that point in time [17]. Frisch [18] summarized the progress that had been made on scaling in fully developed turbulence, discussing the work of those who focused their research on this matter and noting that, even though turbulence remains an unsolved problem, the actual problem is that there is no consensus on how the problem of turbulence should be formulated. The review of Hunt and Vassilicos [19] was focused on the research that had been performed on small-scale turbulence up to that point in time. After summarizing all the research that had been done on the topic, the authors presented some extensions and applications based on Kolmogorov's hypotheses. They pointed out that the most widespread practical application of Kolmogorov's model has been for calculating the effects of turbulence, yet there is a very long list of other possible applications, such as sound production, transmission of light, mixing of species, and fluctuating forces.
A third review paper was presented in the same publication [17] by Bray and Cant [20], summarizing the advancements that the research on Kolmogorov's turbulence had offered in the field of combustion. Bray and Cant summarized the research that had been performed with this particular application in mind, focusing on the insight gained by studies that performed direct numerical simulations. They also identified the unresolved problems and, based on the data from previous numerical simulations, developed a new theoretical model for the mean flame stretch factor. A more recent attempt aiming to improve our understanding of turbulence was made in 2014 by Suri et al. [21].
Via simulations using a 2D model and experimental realization of a quasi-two-dimensional flow, the authors derived an equation for the vertical profile of the horizontal velocity field for Kolmogorov-like and unidirectional flows. The effects of viscosity, magnetic field and layer thickness on the coefficients of the equation are also being discussed.
Kolmogorov's Studies on Turbulence
To begin with, the two-thirds law should be analyzed in order to develop a better understanding of the theory. The law states that the mean square difference of the velocities at two points in a turbulent flow is proportional to the distance between the two observational points raised to the power 2/3, in the range of intermediate scales. With the modern development and formulation of new and more sophisticated measurement techniques, it is possible to gain new insights about the subtle structures and components of turbulence. Kolmogorov is also remembered for his contribution of giving a theoretical estimate of the corresponding scale [22].
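In symbols, the two-thirds law states that, for separations r in the intermediate (inertial) range η ≪ r ≪ L,

    ⟨ [u(x + r) − u(x)]² ⟩ = C ε^(2/3) r^(2/3),

with C a universal constant and ε the mean rate of energy dissipation per unit mass.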
Kolmogorov was primarily concerned with fluid mechanics and other related problems relevant to turbulent disturbances. These disturbances arise due to hydrodynamic instability or inconsistencies in the flows of fluids that happen to have a small viscosity. The corresponding mathematical theory is highly complicated. In order to develop a better understanding of this phenomenon, Kolmogorov was of the view that a simple model should be used. He suggested that the model of the two-dimensional motion of a viscous fluid caused by a periodic external force field could be used in this regard. An effective solution to the problem of stability of the k-flow was presented by Lilly [23]. The model which was put forward was termed a convenient benchmark for furthering the theoretical investigations. No one had thought at that time that the model might be realized and used physically under laboratory conditions. A number of other studies and references will be quoted later in order to offer a detailed idea about the k-flow and the mathematics behind its functionality [3].
The theories and works of Kolmogorov can be classified as the primary benchmarks for studying various natural phenomena such as turbulence. More specifically, Kolmogorov formulated a number of basic principles explaining the local structure of developed and complex turbulent flows. Moreover, these principles and concepts are based on a specific cascade model encompassing a large number of levels. There is also a sequence of scales ranging from large to small size. The large scales can be compared with the characteristic size of the entire system while the small scales can be seen as related to the order of the internal scale [24]. Making use of the available experimental data, Kolmogorov drew interesting conclusions: 1) all laminar solutions become unstable with the decrease in viscosity, and 2) with the viscosity approaching zero, the inherent smoothness of the observed solutions tends to decline in a very strong manner. It is important to note that the order of energy dissipation is determined based on the characteristic velocity as well as length, but is totally independent of the viscosity [24].
Based on these different conclusions, it was proposed that a turbulent solution is always present at low viscosity. In order to examine the problem in a more detailed manner, Kolmogorov proposed a simplistic model, referred to as the two-dimensional viscous flow caused by an external periodic force. The problems related to the stability of the flows were solved using the same approach. Moreover, it was shown that the observed laminar flow happened to be unstable with respect to long-wavelength perturbations and disturbances [24]. However, there was not any kind of turbulent flow regime to be found. In short, a number of studies asserted that no turbulent flow regime could be obtained in two-dimensional cases. In order to understand turbulence and the various flow regimes, a third coordinate should be used. It has also been reported that the mechanism responsible for the onset of turbulence is marked by a three-dimensional nature. Moreover, turbulence can also be studied using the numerical simulation of k-flow inside a compressible shear layer [24]. Comprehending fully developed turbulence is important for a number of applications of geophysical flows. The k-flow has long been studied in the domains of geophysical fluid dynamics in relation to finite-amplitude Rossby waves, with the atmosphere as the setting [1] [2].
One of the primary notions of turbulence is that any turbulent flow is always composed of a number of eddies of varying sizes. These sizes help in defining the characteristic length scales for all the different eddies, which in turn are characterized by velocity and time scales. In this case, the large eddies are always unstable and eventually break up into a number of smaller eddies. At the same time, the kinetic energy of the large eddy is eventually distributed into several smaller eddies. These smaller eddies also go through the same process, resulting in even smaller eddies. In this manner, the energy is passed down from the large to the small scales until reaching the point at which the kinetic energy is dissipated into internal energy due to the viscous action of the fluid [11]. It is clear that a turbulent flow is unique in having a hierarchy of scales of different sizes, through which an energy cascade is formed. The dissipation of the kinetic energy occurs at scales of the order of the Kolmogorov length (η). Moreover, it is also important to note that the energy input for the cascade occurs due to the decay of the large scales, of order L. The large scales can also differ from each other by orders of magnitude at relatively high Reynolds numbers. There are also a number of scales that are formed in between [25].
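The Kolmogorov length mentioned here is fixed by the viscosity ν and the mean dissipation rate ε alone:

    η = ( ν³ / ε )^(1/4).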
These intermediate scales happen to be very large in comparison to the Kolmogorov length, but small in comparison to the large scale of the flow. The mechanism and functions of these scales and of turbulence have been explained in a number of studies and hypotheses proposed by Kolmogorov [6]. In his original works, Kolmogorov clearly postulated that, for high Reynolds numbers, all small-scale turbulent motions could be seen as statistically isotropic (meaning that no spatial direction or point can be discerned). However, not all of the large scales in a flow can be termed isotropic, primarily because they are influenced and determined on the basis of the specific geometrical features and characteristics of the boundaries. Yet another main idea of Kolmogorov was that this form of geometrical as well as directional information is decreased or lost inside Richardson's energy cascade. At the same time, the scale is also decreased or reduced, so that the statistics of the different small scales come to have a universal character. In other words, the statistics for all small scales are the same when the Reynolds number is sufficiently high [6]. These findings were the source of a number of theoretical studies that were focused on the derivation of the Reynolds number for assessing the onset of fluid instability in different forms of unstrained k-flows. These studies also extended into unstable regimes using various forms of numerical simulations [3]. The results indicate that different small-scale instabilities result in a negative viscosity that in turn seeds a cascade of energy relative to the injection scale. A number of researchers agree that the insights offered by Kolmogorov were useful in providing simple visualizations of the ingredients involved in two-dimensional turbulence. Kolmogorov further assumed that this cascading process occurs in a remarkably self-similar manner [7]. In other words, the eddies of a given size behave in the same or a similar manner as the ones having a different size. This assumption, combined with the 4/5 law, helped in formulating general scaling predictions that are used in the numerical simulations of turbulence [3].
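The 4/5 law invoked here is Kolmogorov's exact inertial-range result for the third-order longitudinal structure function,

    ⟨ δu_∥³(r) ⟩ = −(4/5) ε r,

valid for η ≪ r ≪ L.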
The Early Experimental Background
Two-dimensional flows are considered powerful tools for conducting theoretical studies on the transition to turbulence. These forms of theoretical investigations demand less analytic and computational power relative to three-dimensional flows, and they also allow for the creation of different forms of stationary two-dimensional flows using the principles and concepts of magnetohydrodynamics. At the same time, it is also critical to consider that spatially periodic flows play an important role in two-dimensional flows, primarily due to their high degrees of symmetry. There are three classes (or types) of these flows that have distinct and unique symmetry properties. As the instability of these spatially periodic flows happens to be a major theoretical problem, there is a need for conducting detailed and comprehensive investigations of the phenomenon. One of the first theoretical studies in this regard was conducted by Meshalkin and Sinai [5]. Most of the studies in this regard are based on the idealized assumption of an unbounded fluid. Furthermore, the governing mechanism behind these studies is the Navier-Stokes equation. The parallel flows were also reported to be unstable against perturbations of very large scale in comparison to the periodicity length and other related features of the basic flow. In addition, this form of instability, which is also referred to as the negative viscosity instability, has also been found in a number of rhombic as well as square eddy lattices. However, no form of large-scale instability was detected for triangular vortices; instead, a form of oscillation was reported for a specific form of vortex lattices. For about ten years now, the studies conducted on the topic in the field of magnetohydrodynamics have enabled the generation of periodic two-dimensional flows in all forms of laboratory experiments. One of the first successful attempts in this regard was made by Bondarenko et al. [26]. They observed the k-flow inside a specific electrically conducting fluid that was driven by an electromagnetic field. The indications and results of the studies conducted on square and triangular vortex arrays have been unable to conform to the different theoretical predictions [27].
At the same time, the predicted large-scale instability was not observed at the specific Reynolds number. It was also reported by Bondarenko and a number of other researchers that the friction inside the layers of fluids is highly crucial for the instability and for comprehending the dynamics of the fluid flow [26]. Moreover, the observed instabilities depended heavily on the number of spatial periods; in other words, these instabilities were influenced by the degree of confinement present in the system. Despite an increasing number of studies focused on the confinement and wall friction factor, a detailed and comprehensive quantitative approach was always lacking [27].
Stability Analysis & Drag Reduction
Few of the many early studies on the hydrodynamics of non-Newtonian fluids contained analytic results and, even then, the results were obtained only for exceedingly simplified problems. In the early 1990s, Brutyan and Krapivskii [28] published one of the first studies to provide a thorough analysis, investigating the stability of the k-flow in an incompressible viscoelastic fluid. The authors mathematically determined the critical Reynolds number and obtained the first known analytic result in the theory of stability of non-Newtonian liquids at the time of their study. Around the same period, André Thess [29] also performed an extensive study of the instabilities in two-dimensional spatially periodic flows. His work was divided into three published parts; in part I, Thess examined the linear stability of parallel two-dimensional flows (i.e., the Kolmogorov flow). In part II, he extended his study to arrays of vortices with square symmetry [30].
Finally, in part III he further extended his study to a third symmetry, assuming triangular alternating vortices [31]. Through these papers, Thess proposed a re-examination of the stability of such flows; experimental results, however, demonstrated that the instability threshold is strongly overestimated, suggesting that this was due to magnetohydrodynamic parameters discussed in [32] and [33], although the wave number was in good agreement with the study's theoretical results. In the second part of his work, Thess attempted to bridge the gap between theoretical and experimental studies via three major methods: paying attention to the case of high/infinite Reynolds numbers, studying the influence of lateral confinement, and revealing the spatial structure of unstable modes. Finally, in the third part of his study, Thess extended his prior studies by investigating an inviscid flow with hexagonal symmetry. Although the study displayed a good replication of previous experimental results in electromagnetically driven flows, the author suggests that other specific aspects, including magnetohydrodynamical ones, would have to be considered in relevant studies. In another paper published soon afterward, Thess examined the inviscid instabilities in two-dimensional periodic flows [34]. Although largely based on the previous publications, this paper presented new theoretical results on the stability of non-parallel flows driven by a Lorentz force, showing that waves propagating along the symmetry directions occur in the triangular lattice. The results were in good agreement with previous experimental studies, developing a theory that could reproduce the experimental values for the instability threshold in a triangular lattice.
Dubrulle and Frisch also published their findings on flow instability in the early 1990's. In their work, the authors embraced a general formalism to determine eddy viscosities for the incompressible flow of arbitrary dimensionality subject to periodic forcing in space and time [35]. A section of their manuscript is devoted to layered flow, with detailed results for time-independent parallel flows, including variants of the k-flow. However, the authors noted that flow regimes presenting negative viscosity instabilities cannot be examined using the restricted framework assumed for this study, noting that a correct theory should also include dissipative and nonlinear terms. In a companion paper, however, Henon and Scholl used the numerical observations of Dubrulle and Frisch and, performing simulations made with a lattice-gas algorithm, predicted a non-transverse instability for a modified k-flow [36].
Several years later, Frisch et al. [37] published their work regarding the large-scale dynamics of the k-flow near its threshold of instability in the presence of the beta effect (Rossby waves). The paper was centered on a one-dimensional "toy model" for studying an instance of the interaction of turbulence and waves. The paper is divided between results specific to the β-Cahn-Hilliard equation [38] and results that may apply to a broad class of problems involving resonant wave interactions.
Wang et al. [39] studied turbulence as well, although their work took a different, quantitative approach. The fundamental hypotheses underlying the Kolmogorov-Obukhov turbulence theory, also known as the "K62" theory, were quantitatively examined [40]. The authors performed direct Navier-Stokes simulations (DNS) at 512^3 resolution with Taylor-microscale Reynolds number up to 195.
Three very different types of flow were considered: freely decaying turbulence, stationary turbulence forced at a few large scales, and a 256^3 large-eddy simulation (LES) flow field. Both the forced DNS and LES flow fields showed realistic inertial-subrange dynamics. While their results were limited to moderate turbulence Reynolds numbers, the authors advised readers not to draw definite conclusions based on the DNS results available at the time and noted that their results were supportive evidence of the K62 theory. Three years later, the authors published a paper on the same subject [41]. Using direct numerical simulations (DNS) and large-eddy simulations (LES) of velocity and a passive scalar in isotropic turbulence (up to 512^3 grid points), they quantitatively examined the refined similarity hypotheses as applied to passive scalar fields (RSHP) with Prandtl number of order one. For the first time, the exact energy and scalar dissipation rates were used, and scaling exponents were quantified as a function of the local Reynolds number. Their study demonstrated that the velocity increments depend on the locally averaged dissipation rate and enstrophy, while the scalar increments depend on the local average dissipation rates alone. The results of the study compared well with those of other numerical and experimental studies. The authors specifically mention an interesting outcome of their study: the fact that the small-scale features of the scalar field and those of the velocity field share both differences and similarities.
Another study concerning the stability of k-flow investigated the critical Reynolds number while examining the instability of the flow in soap films [42]. The study declares that the idealized theoretical model, which is based on a linear stability analysis, predicts a critical Reynolds number nearly fifty times lower than the critical value derived from a soap film experiment. The study suggested a model with two-dimensional motion equations that provides better agreement with the experimental results than previously suggested models; however, as stated by the authors, the model still has inadequacies and only a full three-dimensional analysis of the system would fully describe the actual three-fluid flow mechanism.
The viscous dissipation is also an important aspect of turbulent flow and, therefore, the bounds associated with it are of great importance. The elementary bounds on various aspects of the viscous dissipation rate in Navier-Stokes flow with Kolmogorov forcing were examined in the early 2000s by Childress et al. [43]. The study revealed that the bounds are rather generic and, thus, any improvements must capture key features that typify the turbulent flow response.
In 2002, Chen and Price [44] performed an instability analysis of liquid-metal Kolmogorov flow in a straight duct. After a thorough analysis of the proposed mathematical theorem, the authors performed numerical experiments to determine the instability thresholds of the metal fluid flow. The results of this analysis were a good match with those of Thess [29] and Kolesnikov [45]. Oparina and Troshkin [46] performed an analytical study to determine the stability of the k-flow in a channel with rigid walls. Their study shows that k-flows with short periods will remain stable inside a channel with rigid walls. The study was based on theorems that describe the bifurcation of the solution into a new flow regime, either steady-state or self-oscillating, when the Reynolds number attains a certain critical value. A few years later, in 2005, the authors also published a paper on secondary electromagnetically driven flows [47]. The main motivation behind the second paper was to show the qualitative differences between elementary (N = 1) and extended (N > 1) wall-bounded flows. Based on numerical experiments similar to those of their previous work, the authors offered insight on the predicted development of secondary flows in a duct, concluding with a numerical model that can approximate such flows in bifurcating steady-state solutions.
The instability observed in the k-flow has also generated large interest in mathematics and related fields. This flow exhibits a large-scale instability of the negative-viscosity type, and for most supercritical conditions the inverse cascade is a distinctive feature of two-dimensional flows. One starts by considering the two-dimensional flow of an incompressible fluid governed by a dimensionless equation of motion. The mathematical treatment of the instability equation depends on whether the system is bounded or not, and it is also expressed on the basis of appropriate boundary conditions [27]. It is because of this behavior that small-scale forcing can be regarded as an effective mechanism for generating large-scale two-dimensional turbulence. The linear stability with respect to large-scale perturbations has also been investigated by means of multiple-scale analysis. Flow instabilities have remained a classical subject in the field of fluid dynamics. Moreover, theoretical studies of their occurrence in polymer solutions are of paramount importance for a number of industrial applications. A satisfactory understanding of these flows requires consideration of the viscoelastic behavior of such fluids, which is the main reason why the stability of the k-flow has been investigated in a number of studies and experiments [27]. When considering the linear stability of the parallel k-flow, it is important to account for the viscosity, the confinement, and the linear friction. These computations provide the neutral instability curves in parameter space, together with the associated wave numbers and speeds. A great deal of evidence suggests that all stability parameters depend on confinement in a non-uniform manner. It has also been shown that weak transverse confinement decreases the longitudinal wavelength of the perturbations at the onset of the instability, while strong confinement changes the character of the instability from an exponential to an oscillatory one [27].
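For concreteness, the setting behind these stability statements can be summarized in its classical textbook form (this is the standard formulation, not a quotation from [27]): the two-dimensional incompressible Navier-Stokes equation driven by a monochromatic sinusoidal body force,

$$\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\nabla p + \nu\,\nabla^2\mathbf{u} + F\sin(y/\ell)\,\hat{\mathbf{x}}, \qquad \nabla\cdot\mathbf{u} = 0,$$

whose laminar solution is the parallel shear profile $\mathbf{u}_0 = (F\ell^2/\nu)\sin(y/\ell)\,\hat{\mathbf{x}}$. In the unbounded, friction-free case this laminar state loses stability to large-scale perturbations once the Reynolds number $\mathrm{Re} = U\ell/\nu$ (with $U = F\ell^2/\nu$) exceeds the classical value $\sqrt{2}$, which is the origin of the negative-eddy-viscosity interpretation mentioned above; confinement and linear friction shift this threshold.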
Boffetta et al. [27] also performed numerical simulations to determine the drag reduction in turbulent k-flow. Using a linear viscoelastic model, the authors examined the three-dimensional turbulent k-flow and demonstrated that drag reduction does take place above a critical Reynolds number. In their study, an expression for the dependence of the critical Reynolds number on polymer elasticity and diffusivity was proposed. They concluded that the drag coefficient can be expressed as a function of the rescaled Reynolds number only, that this function is universal with respect to the fluid characteristics, and that its shape can be derived by simple phenomenological arguments. However, the numerical verification of these expressions and conclusions using nonlinear models was not addressed in their work.
The nonlinear dynamics of viscoelastic k-flow have been examined by Bistagnino et al. [48], both analytically and numerically. The authors specifically noted that the physical motivation for their study was that, even though there are no physical boundaries, this flow has several analogies with channel flows and is one of the few known solutions of the Oldroyd-B model [49]. The study concludes that the weakly nonlinear dynamics are described by equations that resemble those introduced by Cahn-Hilliard [38], but containing a fifth-order nonlinearity with coefficients that depend on the Deborah number. They also performed a study on drag reduction, showing that the injection of polymers induces an increase of the mean flow and reduces the drag coefficient. The main qualitative conclusion of their study is that drag reduction appears to be a phenomenon coupling large and small scales. Another study based on numerical simulations, investigating the dynamics of the two-dimensional periodic k-flow of a viscoelastic fluid described by the Oldroyd-B model, was presented two years later by Berti and Boffetta [50]. The authors investigated the destabilization of the k-flow induced by the elastic forces associated with the dynamics of polymer molecules in the solution. The study revealed that above a critical Weissenberg number (Wi_c ≈ 10), a transition to new dynamical states is observed. The authors also noted that mixing features can become established in that state.
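For reference, the Oldroyd-B model mentioned above couples the momentum equation to the evolution of a polymer conformation tensor; a standard textbook form (quoted here up to gradient-convention details, and not necessarily the exact formulation of [48] or [50]) reads

$$\partial_t\mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\nabla p + \nu\nabla^2\mathbf{u} + \frac{2\nu\eta}{\tau}\,\nabla\cdot\boldsymbol{\sigma} + \mathbf{f},$$
$$\partial_t\boldsymbol{\sigma} + (\mathbf{u}\cdot\nabla)\boldsymbol{\sigma} = (\nabla\mathbf{u})^{\top}\boldsymbol{\sigma} + \boldsymbol{\sigma}\,(\nabla\mathbf{u}) - \frac{\boldsymbol{\sigma} - \mathbb{1}}{\tau},$$

where $\boldsymbol{\sigma}$ is the conformation tensor, $\tau$ the polymer relaxation time (so that the Weissenberg and Deborah numbers compare $\tau$ with the characteristic flow time scale), and $\eta$ a measure of the polymer concentration.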
Mishra et al. [51] recently performed numerical simulations on two-dimensional Kolmogorov flows, studying and identifying all of the possible flow regimes and their bifurcations, yet focusing on the reversal and condensate regimes. The parametric study was mainly performed for a varying Rh, which represents the ratio of the inertial and viscous terms, revealing that its increase would render the flow unstable, following a series of transitions thoroughly described in the study.
The results of this study were in good agreement with both other similar numerical studies and with experimental observations. Finally, the study includes the analysis of the energy transfers among Fourier modes and displayed the symmetry of flow reversals in Kolmogorov flows.
Stability of the Beta Plane Kolmogorov Flow
The consequences and outcomes of a varying geophysical beta effect need to be explored. It can be shown that even in the limit β → 0, the Reynolds number can be reduced as a generic effect of beta. The stability of the beta-plane geophysical k-flow has been studied by a number of researchers. Lorenz [52] and Gill [53] evaluated the stability of this flow at α = 0 for the inviscid case. For the viscous case, Frisch et al. [37] took a different value of α. An analysis of these studies reveals that different values of α have been used in this regard [54].
Stability of an Oscillating Kolmogorov Flow
For oscillating flows, both time and the transverse direction need to be considered, and the problem can be reduced to an infinite algebraic one. By using continued fractions, it can be shown that the time-independent flows are unstable to perturbation modes that do not possess the periodicity of the basic flow in the transverse direction. Instability has also been identified for a number of inviscid cases, even when the perturbation modes had the same periodicity [55].
It is important to consider that the stability of k-flow has been studied using a number of different materials and fluids. A number of studies have been carried out using the ordinary viscosity and the lateral walls. In terms of the strongly confined systems encompassing only a single period of the k-flow, a form of oscillatory instability was identified. On the other hand, the instability of the flow without any boundaries has also been evaluated using various quasi-periodic perturbations. This specific viewpoint has helped in understanding the stability of the flow while using the confined systems. It can be said that the future studies should be dedicated to exploring these instabilities in more detail [56].
Bifurcation Analysis
Studies in this regard have shown that, as the strength of the driving force increases, the system exhibits a number of different bifurcations of its steady states. In addition, traveling waves, torus solutions and other related states also appear and influence these bifurcations. Different bifurcation analysis techniques have been used to explore the topic in detail [56].
In 1991, Platt et al. [57] investigated the chaotic regimes of the two-dimensional k-flow. The recent paper by Tithof et al. [62] presented a combined experimental and theoretical study of the instabilities of k-flows, meant to compare the validity of a numerical model with real-world results. This is also the first k-flow study to provide a quantitative analysis of the secondary instability that generates a time-dependent pattern of vortices. The authors performed physical experiments using electromagnetic forcing to drive a quasi-two-dimensional shear flow in a thin layer of electrolyte suspended on a thin lubricating layer of a dielectric fluid. Their theoretical study was based on the 2D model from one of their previous works [21] and, according to the authors, the numerical model predicts the modulated flow pattern with reasonable accuracy. It was indicated, however, that the accuracy of the numerical predictions decreases as the Reynolds number increases.
Stratification and Heat Transfer
On the subject of stratification and heat transfer, k-flows can be assessed by considering a weakly stratified two-dimensional fluid flow. Firstly, the amplitude equations and measures for the whole system need to be derived, for both high and low Peclet numbers. To begin with, it is important to note that the stability of viscous shear flows is a comparatively difficult problem that has interested scientists and researchers for decades. Without taking into consideration the effects of stratification and compressibility, the linear theoretical analysis of the problem is already far from easy. This analysis is primarily based on solving the famous Orr-Sommerfeld equation, for which only a few generalized results and solutions have been obtained. As mentioned earlier, deriving the critical Reynolds number is crucial for understanding the occurrence of stratification and heat transfer in k-flows; more specifically, the resulting instability can be characterized more effectively using different values of the Reynolds number. In an effort to grasp the nature and role of stratification, a number of studies have used this approach. Stratification and heat transfer can be assessed by introducing gravity, directed transverse to the k-flow.
This strategy can be used to explore all forms of weak stratification, and to examine how it modifies the basic linear instabilities as well as their nonlinear development. As stratification exerts a strong stabilizing effect, an inverse cascade effect can be anticipated. Yet again, there are a number of geophysical motivations that play an important role in this regard, and it is important to measure (and, in some ways, control) them. One such motivation is the stability of vertical shear flows generated in the atmosphere [63]; internal gravity waves of finite amplitude can also be classified among these geophysical motivations. The phenomenon of stratification and heat transfer in k-flow has also been investigated in laboratory settings; however, the direct application of these experiments is not considered here, as the focus is primarily on describing the mechanical problem [1] [2]. In order to formulate an assessment, one starts from the vorticity and heat equations for the two-dimensional stratified flow. This flow is defined in the x-z plane, with gravity along the z coordinate. Secondly, the incompressibility of the flow is exploited in order to express the velocity components in terms of a stream function. The background Kolmogorov shear flow is characterized by Ψ0 = U0 l cos(z/l), where U0 is the amplitude. In the formulation of the equations, two points are kept in mind: 1. the equations are dimensionless, and 2. they must recover the unstratified k-flow [1]. The studies conducted by Balmforth and Young [1] confirmed that the introduction of varying forms of stratification can help suppress the observed instabilities. The second instability in this regard is a conductive one that operates primarily through large-scale thermal diffusion. This instability is reported to arise under much stronger stratification, and it leads to the generation of prominent staircases in the buoyancy field. The different steps of the staircase have their own nonlinear dynamics, and they have been reported to coarsen during the various phases of the stratification process. In the second study conducted by Balmforth and Young [2], the configurations and parameters were such that only the viscous instability was present, so there was no evidence to confirm layering [1]. A further study based on [1] and [2] by Sarris et al. [64] examined the laminar convection flow in enclosures driven both by a nonuniform Lorentz force of Kolmogorov forcing type and by a buoyancy force. The researchers found that a proper combination of the magnetic and gravitational forces may enhance heat transfer by up to 40% over the usual natural convection heat transfer rates.
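As a minimal numerical illustration of the background state just described (assuming the stream function Ψ0 = U0 l cos(z/l) and the usual convention u = ∂Ψ0/∂z in the x-z plane; the amplitude, length scale and grid below are arbitrary illustrative values, not parameters from [1] or [2]), the laminar shear profile can be evaluated as follows:

```python
import numpy as np

# Illustrative evaluation of the background Kolmogorov shear flow described above;
# U0, l and the grid are arbitrary choices, not parameters from [1] or [2].
U0 = 1.0      # velocity amplitude
l = 1.0       # forcing length scale

# Background stream function in the x-z plane: psi0(z) = U0 * l * cos(z / l)
z = np.linspace(0.0, 2.0 * np.pi * l, 257)
psi0 = U0 * l * np.cos(z / l)

# Horizontal velocity u = d(psi0)/dz (numerically ~ -U0*sin(z/l)) and its curvature,
# which is relevant to inflection-point reasoning about shear instability
u = np.gradient(psi0, z)
u_zz = np.gradient(np.gradient(u, z), z)

print("max |u| ~", np.max(np.abs(u)))                        # close to U0
print("inflection points present:", np.any(np.diff(np.sign(u_zz)) != 0))
```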
Stratified shear flows also arise in a number of astrophysical fluids. One of the core issues in this context is understanding how unsteady eddy motions result from steady forcing, and how the motion rearranges and transports the fluid properties. The k-flow has also been rationalized as a mechanism for understanding unstratified flow dynamics and the transition to turbulence. Numerous laboratory experiments have reported the generation of staircases separated by sharp interfaces; in most laboratory settings, these staircases have been created by dragging bars or grids through salt-stratified water [65]. The turbulent environment of the ocean and other related settings is believed to have the same effect on such flows. Thermal convection and similar configurations have also been used for gaining a deeper understanding of heat transfer in combination with stratification [2]. Although the phenomenon has not been proved empirically, it has been assumed that a turbulent field is the primary ingredient for accessing the layering problem; in other words, the process of layering offers a number of insights into the stratification process. Based on the same premise, a number of mathematicians and researchers have tried to produce simple models for evaluating turbulent stratified flows, though most of these models rely on simplistic and empirical parameterizations of turbulent transport. The prevailing notion in this regard is that the stratification is unstable wherever the flux decreases as the gradient increases. It has also been reported that staircases can be observed at lower values of the Reynolds number. An analysis of this process gives rise to a number of key questions for studying stratification in k-flows, and it calls for more detailed analytical exploration using the principles of fluid mechanics [2].
The instabilities catalyzed by different viscosities have also been investigated in a number of other contexts, and researchers have been tempted to rationalize the prevailing instabilities by analogy. Furthermore, a number of indications have been reported in the current theories and principles used for understanding stratification. A recent paper by Ponetti et al. [66] describes the transitions in a stratified Kolmogorov flow; the researchers performed a numerical study to examine the transitions that lead the flow to chaotic states, identifying that the flow reaches chaotic configurations through two different routes: one involving drifting states and one involving a gluing bifurcation.
Hydrodynamic Fluctuations in Kolmogorov Flow
Bena et al. studied the hydrodynamic fluctuations in the k-flow in both the linear [71] and nonlinear [72] regimes. Their work was based on Landau-Lifshitz [73] fluctuating hydrodynamics, mainly, as the authors state, because of its relative simplicity. The main purpose of the articles was the study of the statistical properties of the k-flow via numerical calculations. The first study [71], focusing on the linear regime, showed that the incompressibility assumption is not strictly valid and leads to unsatisfactory results; however, the problem becomes too complex if compressible hydrodynamic equations are used, due to the boundary value problem. Exploiting the relative simplicity of the k-flow, the authors managed to show that, in the long-time limit and for the linearized fluctuating hydrodynamic equations, the flow behaves as an incompressible fluid irrespective of the Reynolds number. In their second paper [72], which explored the nonlinear regime, the authors verified that the incompressibility assumption leads to a wrong form of the static correlation functions, except near the instability threshold. They used a perturbation technique to find the limits within which the macroscopic behavior of the fluid is not affected, showing that, close to the instability threshold, the stochastic dynamics of the system is governed by two coupled nonlinear Langevin equations in Fourier space.
Mansour et al. [74] also performed particle simulations of the k-flow and analyzed them using Landau-Lifshitz fluctuating hydrodynamics. The authors concluded that a spurious diffusion of the center of mass has no effect on the average macroscopic behavior of the system, yet it corrupts the statistical properties of the flow and is an issue for microscopic simulations. Their study provides an analytical expression for the corresponding diffusion coefficient. Several years later, a molecular dynamics simulation of spheres was performed in order to study the behavior of k-flows in granular matter [75]. The spheres interacted via elastic collisions and a force mimicking the effect of capillary bridges. It is noted that the instability of the flow is present even in dry granular matter, where particle interactions are limited to inelastic collisions.
The advection of passive particles in the k-flow has been studied by Beyer and Benkadda for two different regimes of the flow [76]. The regimes were cross-checked based on the same parameters used in Platt's study of chaotic k-flow [57]. Their study shows that the advection of particles differs between regimes, even though the asymptotic diffusion remains normal in all cases. The authors concluded that time characteristics alone are inadequate to define anomalous transport, which requires both time and space characteristics to be considered simultaneously. Mitchell and Grigoriev [77] presented a numerical study investigating the change of the mixing properties associated with the transition from the laminar to the turbulent regime in a two-dimensional k-flow. The authors concluded that the mixing efficiency improves as the forcing is increased, i.e. steady flows are the worst mixers and turbulent flows the best. However, neither the complexity of the flow nor the mixing efficiency increases monotonically. It was noted that the mixed area fraction of a class of time-periodic and quasi-periodic flows can be accurately described by a perturbative approach, although the flows considered in the study cannot be regarded as weakly perturbed.
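The short sketch below illustrates, in schematic form, what is meant by the advection of passive tracers in a Kolmogorov-type velocity field. The stream function, perturbation amplitude and time stepping are arbitrary illustrative choices and are not the flow fields or parameters analysed in [76] or [77]:

```python
import numpy as np

# Stream function psi = -U0*cos(y) + eps*sin(x)*cos(omega*t), giving the
# divergence-free velocity u = d(psi)/dy = U0*sin(y),
#                          v = -d(psi)/dx = -eps*cos(x)*cos(omega*t).
U0, eps, omega, dt, nsteps = 1.0, 0.2, 0.7, 0.01, 5000

def velocity(x, y, t):
    return U0 * np.sin(y), -eps * np.cos(x) * np.cos(omega * t)

rng = np.random.default_rng(0)
x0 = rng.uniform(0.0, 2.0 * np.pi, 200)   # initial tracer positions
y0 = rng.uniform(0.0, 2.0 * np.pi, 200)
x, y = x0.copy(), y0.copy()

# Midpoint (RK2) time stepping of the tracer positions
for n in range(nsteps):
    t = n * dt
    u1, v1 = velocity(x, y, t)
    u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1, t + 0.5 * dt)
    x, y = x + dt * u2, y + dt * v2

# Crude single-particle dispersion diagnostic
print("mean squared displacement:", np.mean((x - x0) ** 2 + (y - y0) ** 2))
```

Diagnostics such as the mean squared displacement, computed over many tracers, are the kind of time characteristic discussed above; the point made in [76] is that such time statistics alone do not suffice to characterize anomalous transport.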
Recent Numerical and Experimental Research on Kolmogorov Flow
A number of experimental studies and investigations have been carried out in order to understand the nature and functions of k-flow. We will begin this section by presenting the most widely accepted and used modeling and simulation methods, highlighting their primary advantages and limitations. The section will continue with the presentation of the experimental studies that have been performed on Kolmogorov flows, summarizing their purpose and outcomes.
Modeling and Simulation Methods
Mathematical and other forms of modeling are useful methods for the analysis of complex systems. Modeling is also used when it is not feasible to conduct experiments with the real systems. Provided that the models offer adequate descriptions of the relevant causal relationships, conducting computer-aided experiments with them is one of the most effective and efficient approaches. For k-flows, there are a number of modeling and simulation techniques that can be used, including conceptual, functional, constraint, declarative, and multi-model designs [78].
Conceptual Models
Conceptual models are used for making systems that are easier to assess and use. However, the techniques used to design these models might not be sufficient to capture the complexities of the k-flow. A detailed conceptualization process can formulate and evaluate the model, and new principles and concepts are expected to emerge during this process. The techniques used in this regard have primarily been formalized for the most simplistic models. One of the main advantages of this design is that it helps in formulating systems that are simple to use; however, this can also be seen as a disadvantage, because the process limits the functions of the resulting systems [79].
Declarative Models
A model can be termed declarative if it succeeds in determining and analyzing the actions of different agents and the manner in which their states can be changed. More specifically, these models are used for specifying reactions to states. Moreover, these models allow qualitative facets and considerations to be added without compromising accuracy. It can also be said that declarative modeling is based on developing a model in a diagrammatic manner. It should also be noted that the term points towards the use of a procedural approach, and the model is represented in the form of facts that are true; in other words, these facts are used for defining the model [79].
Functional Models
Functional models help to determine the modes of the functions and operations of a system.
Constraint Models
The different entities in constraint models are defined in terms of constraints that determine their nature as well as their relationships. The program code is preferably kept separate from the specific modeling description that has been used.
In order to develop a model from the constraints, it is imperative to modify its description using the natural language. The success of the model is based on using the right constraints and modeling techniques [80].
Multi-Models
This form of modeling can be seen as an extension of object-oriented designs and techniques. One of the major contributions of these modeling techniques is that they allow for different forms of mapping between the real and digital worlds, and this mapping allows for a more realistic view of the design. The processes of generalization and aggregation are used to form hierarchical structures; more specifically, these models are created by constructing a number of objects and then connecting them. The resulting models share the advantages of the different modeling procedures and designs; some levels might be functional or declarative, while others might possess the features of other modeling designs [79]. One of the first numerical simulations related to k-flows was performed to investigate the stochasticity properties of dynamical systems [81]. Although stochasticity was defined in a qualitative way, the proposed method allowed for the definition of a quantitative parameter, an "entropy-like quantity", which is related to the Kolmogorov entropy of the associated flow.
Simulation Techniques for Kolmogorov Flows
The primary purpose of a simulation is to gather the maximum possible information about a system using the most convenient measures and strategies [79]. However, the use of different models and simulation techniques for understanding Kolmogorov flows has been subject to a number of controversies. Although a wide range of studies has been conducted in this regard, there are a number of drawbacks and complexities that still need to be addressed.
Kalis and Kolesnikov [82] performed a numerical study of the k-flow in a strong magnetic field. The authors proposed that instead of two linear electrodes located perpendicular to the field, the linear electrodes should be positioned periodically along the x axis, through which a DC current I is periodically applied. It has been suggested that this approach allows for the formation of k-flow in a channel with non-conductive walls. In 1997, Posch and Hoover [83] suggested an alternative method for the simulation of hydrodynamic flows, which has been applied on a two-dimensional k-flow. The method was based on Smooth Particle Applied Mechanics and, thus, the authors baptized it SPAM. The two-dimensional k-flow that the method was applied on had the fluid at the top and bottom half of a tube accelerated in opposite directions. They concluded that their method reproduces the transition from a laminar to secondary stationary flow, albeit qualitatively. If the Reynolds number is increased and the secondary flow is no longer characterized by an array of stationary vortices, SPAM can determine the transition to fully developed turbulence. However, the authors could not determine the Reynolds number in the unstable flow regimes.
As mentioned earlier, k-flow arises when a fluid is subject to an artificial sinusoidal force. The resultant flow is periodic and similar to common shear flows encountered in different forms of modeling and simulation. A number of researchers and mathematicians have studied multidimensional simulations of k-flows. The flow was first analyzed in two-dimensional numerical simulations. Shebalin and Woodruff performed a three-dimensional simulation of the flow using different forms of viscous stress [84], and three-dimensional simulations using hyperviscosity were carried out by Borue and Orszag [85]. The question of alternative grid-size dependencies of the Smagorinsky model can be approached using two different methods. In the first, a detailed a-priori analysis is performed on data from direct numerical simulation (DNS): the resulting velocity field is filtered with a number of filter widths and, for each filter width, the Reynolds stress as well as the Smagorinsky formula are evaluated. The second approach relies on LES experimentation: for different numerical resolutions, a number of simulations are performed using different values of the Smagorinsky constant [86].
The value that leads to the best resolution helps in determining the grid-size dependence. It is important to note that both of these approaches lead to similar conclusions in terms of grid-size dependence. An intermediate range of wave numbers has also been reported for grid-size dependencies, and an enhanced grid-size dependence is required where low wave numbers are observed. At the same time, satisfactory and reliable results have also been achieved at low resolutions, especially when an appropriate grid size was used; however, a deterioration was seen in the predictions of the turbulent shear stress. A number of papers confirm the idea of altering the grid-size dependence in order to compensate for limited under-resolution, but much more needs to be done to reduce the problems and complexities encountered in simulation and modeling. The goal can be achieved more efficiently using the specific Smagorinsky constant that provides the best results, and the most appropriate value of the Smagorinsky constant may be assessed by a number of sophisticated approaches. Another major trend in this regard is measuring and assessing the usefulness of LES [86].
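To make the a-priori approach described above more concrete, the sketch below evaluates the Smagorinsky eddy viscosity ν_t = (C_s Δ)² |S| on a synthetic, divergence-free two-dimensional field; the field, resolution and constant are illustrative stand-ins, not the DNS data sets or parameter choices of [86]:

```python
import numpy as np

# Schematic a-priori evaluation of the Smagorinsky eddy viscosity
# nu_t = (Cs * Delta)^2 * |S| on a synthetic, divergence-free 2D field.
N, L, Cs = 128, 2.0 * np.pi, 0.17          # grid size, domain length, nominal constant
dx = L / N
xg = np.arange(N) * dx
X, Y = np.meshgrid(xg, xg, indexing="ij")
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")

def ddx(f, K):
    """Spectral derivative of a periodic field along the direction of wavenumbers K."""
    return np.real(np.fft.ifft2(1j * K * np.fft.fft2(f)))

# Stream function: Kolmogorov base flow plus a few perturbing modes
psi = np.cos(Y) + 0.3 * np.sin(2.0 * X) * np.cos(3.0 * Y) + 0.2 * np.cos(X + 2.0 * Y)
u = ddx(psi, KY)        # u =  d(psi)/dy
v = -ddx(psi, KX)       # v = -d(psi)/dx  (divergence-free by construction)

# Resolved strain-rate tensor and its magnitude |S| = sqrt(2 S_ij S_ij)
Sxx, Syy = ddx(u, KX), ddx(v, KY)
Sxy = 0.5 * (ddx(u, KY) + ddx(v, KX))
S_mag = np.sqrt(2.0 * (Sxx ** 2 + Syy ** 2 + 2.0 * Sxy ** 2))

# Smagorinsky eddy viscosity for a filter width equal to the grid spacing
nu_t = (Cs * dx) ** 2 * S_mag
print("mean Smagorinsky eddy viscosity:", nu_t.mean())
```

In an actual a-priori test the synthetic field would be replaced by filtered DNS data, and the modeled stress would be correlated against the exact subgrid stress for several filter widths.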
The results of the study conducted by Woodruff, Seiner et al. [87] indicate that most of the low-resolution simulations are able to reproduce the resulting kinetic energy, as well as almost all of the statistical quantities; however, the low resolutions failed to capture the correlation coefficient Cxz. Low-resolution simulations can therefore be recommended under similar circumstances in order to provide reliable results for most quantities. The results also showed no sensitivity of the correlation coefficient to changing values of the Smagorinsky constant. Thus, the simulations performed with the Smagorinsky model failed to determine the correct value of the correlation coefficient, and no simple tuning of the constant is sufficient to recover it. In simple words, a more drastic adjustment and restructuring of the model is needed for a more appropriate calculation of this coefficient [86].
Simulation of turbulent flows is also attracting a great deal of attention these days. Modeling and simulation of k-flow can be analyzed through large-eddy simulations (LES). The core idea behind any large-eddy simulation is that the largest turbulent scales are resolved numerically, while only the smallest, self-similar scales need to be modeled. If one could resolve all the large scales together with the small scales of the inertial range, the required model would be relatively basic and effectively computable. One such model is the Smagorinsky model which, despite its faults and drawbacks, has been used successfully for carrying out large-eddy simulations. In reality, however, this concept can be applied with complete accuracy only to the simplest flows; for more complex flows, the available computational techniques and resources only allow for a resolution into two transitional regions: the inertial and energy-containing ranges [87].
Even the most complex and fastest computational machines can only make a small dent in the problem. We can use these computational devices and methodologies for the solution of complex flows, but an important question is what the consequences of using such inadequate resolutions might be, and what measures should be taken to ameliorate them. It is also evident that there is a need to look for alternative and more effective simulation and modeling techniques, and to ask whether an alternative, non-grid-size-dependent measure could be used for improving LES of k-flow. The possibility of alternative grid-size dependencies has been raised by a number of researchers and experts. With this said, it is possible to make use of alternative grid-size dependencies of the Smagorinsky model in LES of k-flow [87].
In addition to having a number of computational and numerical advantages, the k-flow also offers a mechanism for testing different turbulence and simulation models for non-equilibrium turbulent flows. Mainly due to the explosive rate at which computing power increased, most of the numerical simulation studies took place during the last decade, and the exponentially increasing processing power of modern computers was also the herald of more detailed, complex numerical models. In order to formulate more detailed and comprehensive models of turbulence, accurate predictions and estimates of the dissipation are needed, and the relevant values of the Reynolds number also need to be evaluated. The study of turbulence driven by Kolmogorov forces with a single-wave-number profile was conducted by Borue and Orszag [85]. Since then, a number of attempts have been made to understand the dependence of the turbulence on the shape of the forcing [9].
By using direct numerical simulations, Schaefer et al. [88] tested model equations for the mean dissipation using a k-flow. The authors compared the standard model [89] and a transformed Menter k-φ model [90], which included cross-diffusion and second production terms, against the results of direct numerical simulations. Due to the second production term, the transformed model displayed superior behavior to the standard model; however, the cross-diffusion term was of no importance for obtaining a steady solution. Zhang and Fan [91] simulated two-dimensional k-flow via the direct simulation Monte Carlo method. Their simulations were performed for a Knudsen number of 0.005, and the authors observed two main regimes, each corresponding to a different range of the Reynolds number. The results of their simulations were consistent with those obtained by solving the incompressible viscous Navier-Stokes equations.
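For orientation, the kind of two-equation closure being tested above is usually written in the following generic textbook form (quoted as the standard k-ε model, not necessarily the exact variant examined in [88] or [89]):

$$\nu_t = C_\mu \frac{k^2}{\varepsilon},$$
$$\frac{\partial k}{\partial t} + U_j \frac{\partial k}{\partial x_j} = P_k - \varepsilon + \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right],$$
$$\frac{\partial \varepsilon}{\partial t} + U_j \frac{\partial \varepsilon}{\partial x_j} = C_{\varepsilon 1}\frac{\varepsilon}{k}P_k - C_{\varepsilon 2}\frac{\varepsilon^2}{k} + \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right],$$

with P_k the production of turbulent kinetic energy, ν_t the eddy viscosity, and (C_μ, C_{ε1}, C_{ε2}, σ_k, σ_ε) the usual model constants; transformed variants such as Menter's model add cross-diffusion and additional production terms to the second equation.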
Sarris et al. [92] studied the Kolmogorov flow generated by a stationary one-dimensional forcing varying sinusoidally in space using direct numerical simulations with periodic boundary conditions. The study aimed to display the effect that computational box size has on the calculation of the properties of the Kolmogorov flow. The study concluded that turbulence statistics are heavily dependent on the boundary conditions that have been chosen, observing that some symmetries compatible with both the boundary condition and the forcing are broken in the statistical sense. The authors suggested that Kolmogorov flow could thus be considered as an appropriate test case for assessing large-eddy simulation of inhomogeneous, anisotropic, and sheared turbulent flow, without having to deal with the problem of wall modeling.
The energy-enstrophy method, a nonlinear stability method, was introduced in 2008 by Tsang and Young [93]. The proposed method was specialized for two-dimensional hydrodynamics and developed from a nonlinear stability analysis of the k-flow. The study focused on the limit in which drag is much stronger than viscosity, motivated by the possibility of applying the technique of Doering and Constantin [10] [94] to two-dimensional turbulence, where it is essential to take enstrophy conservation into account. In an attempt to reduce the great computing power required to model large complex flow systems, Kramar et al. [95] presented an analysis of Kolmogorov flow and Rayleigh-Benard convection using persistent homology as a data reduction method. Two Kolmogorov flow regimes were studied: chaotic dynamics arising from the appearance of an unstable fixed point, and a periodic flow that displays drift in the direction of symmetry. The authors took a general approach in order to maximize the applicability of their method to open problems that exhibit complex spatiotemporal behavior. According to the results of the study, persistent homology is an effective method both for quotienting out symmetries in families of solutions and for identifying multiscale recurrent dynamics. The authors concluded that persistent homology is a method robust to noise and sensitive to complicated dynamics, appropriate for studying experimentally acquired data sets.
Experimental Study of Kolmogorov Flows Using a Cylindrical Surface
In this experiment, a laboratory model of the k-flow was investigated using a cylindrical surface. The number of half-periods of the external force was varied from 2 to 22. It was shown that the specific type of secondary flow is determined by the number of half-periods of the basic flow; the observed regimes include a traveling wave for odd numbers of periods, a self-oscillating regime and a quasi-steady vortex structure. The theoretical analysis was based on a Galerkin approximation, and the resulting system of equations was solved numerically in direct conjunction with the analysis [96]. The experiments carried out in laboratory settings show that there is a specific interval of super-criticality for the secondary flows. One reason can be the friction of the fluid against the channel bottom, which modifies the stability curve; the confinement of the flow by the sidewalls leads to similar results [90]. The results of the study showed that the most dangerous disturbances have wave numbers close to 0.3l, indicating that the number of vortices formed along the x-axis is independent of the channel width. At the same time, the nature of the fluid motion in the supercritical regime is determined by the imaginary part. In the supercritical regime, three different solutions were observed, and the behavior of the observed flow in this regime was confirmed by numerical integration of the system being investigated [96].
An experimental apparatus with mechanical periodic (but not sinusoidal) forcing has been presented, which the authors used to investigate the instability of k-flows in a soap film [42]. The results of the experimental study were used for comparison against the numerical models of older studies, which displayed virtually no convergence with the experimental results.
For the purpose of an undergraduate laboratory experiment, Kelley and Ouellette constructed an apparatus to create a quasi-two-dimensional flow, using an electromagnetically driven thin-layer flow [97]. The authors noted that this approach was selected over soap films because of its simplicity, as the students are expected to set up the experiment themselves. The paper summarizes the most important basic theory regarding the k-flow, focuses on the experimental setup and data acquisition procedures and, finally, discusses the pedagogical aspects of the project.
By using the experimental setup suggested by Rivera and Ecke [98], Suri et al. [21] investigated the velocity profiles in two-dimensional k-flow. The authors confined their comparisons between theoretical and experimental results to the laminar flow because, as they proclaim, they sought closed form expressions for the coefficients in the 2D vorticity equation, in order to gain insight into how they depend on various experimental parameters. Within these parameters, their study displayed excellent agreement between experimental measurements and analytical predictions, while the authors also concluded that increasing the viscosity of the electrolyte relative to that of the dielectric would improve the uniformity of the flow.
Lamination and Mixing in Laminar Flows Using Lorentz Body Forces
In this experimental investigation, a relatively new approach was demonstrated for the design of different mixtures. The approach is centered on using a sequence of tailored flows in combination with a new procedure for quantifying the varying levels of striation. The process, referred to as lamination, can be seen as translating into the specific distance over which molecular diffusion needs to act. In situ, mixing was also achieved using a tailored sequencing of different flows. The observed degree of mixing showed exponential growth before saturation was reached; this saturation occurs when the thickness of the striations becomes smaller than the relevant length scale [99].
It should be noted that, without molecular diffusion, the thickness of the striations would have become smaller than the size of an atom. The results of the study showed that 3 minutes are enough for mixing species with low diffusivities. Moreover, the stretching, as well as the lamination, showed exponential growth: for each forcing period, the length of a material line, together with its lamination, was multiplied by 23. After three minutes, the average lamination was reported to be 3000, with the striation thickness being about 3.3 μm. It was also shown that the in situ mixer did not require a mean flow inside the pipe for efficient mixing; the primary mechanism of the mixing process was the control of two local jets. It should also be considered that such flows can be designed using a number of other devices. The study also pointed out the need for green mixing, so that it might consume lower amounts of energy [99].
Numerical and Experimental Study of a Circular Shear Layer
The experiment by Chomaz et al. [100] showed that the dynamical behavior of the observed flow was dependent on the aspect ratio of the cell. In larger cells, the transition from a mode with a lower number of vortices towards a mode with a higher number of vortices is governed by a number of localized processes. The transition is seen to occur after a series of bifurcations corresponding to the successive breaking of the different symmetries of the flow being studied. The results of the study showed that a two-dimensional simulation of the flow is sufficient for recovering the varying dynamical processes of the experimental flow. Rotational invariance was visible during the different phases of the experiment, while no translational invariance was reported. It was also shown that, for the velocities considered, a number of specific forces could be neglected, so that rotating frames were indistinguishable from Galilean frames. A distinction between convective and absolute instabilities was also observed; this case is important as it concerns a spatially periodic flow, as reported by the researchers who performed the experiment. The numerical simulations were also similar to simulations of linear shear flows. The researchers reported that the experiment was lacking in some dimensions and domains; however, the study managed to provide a number of new insights about the experimental investigation of k-flow [100].
An Analysis of Forced Periodic Flows and Their Spatio-Temporal Dynamics
The work reported in [101] showed that spatiotemporal chaos can be caused by the competition between different unstable modes. The instabilities and dynamics of a specific localized vortex were analyzed using different experimental procedures. A double bifurcation was observed, in addition to a new periodic state. The results showed that the instability threshold was in close accordance with the experimental one. The findings of this experiment were closely related to previous studies and investigations carried out in this regard. Moreover, this study pointed out the need for future studies to evaluate forced periodic flows in more detail [101].
Experimental Investigation of Quasi-Two-Dimensional Shear Flows
In the experimental study reported in [102], forced shear flows were investigated in a thin fluid layer. In order to obtain the stream function of the observed flow, a number of streak photographs were taken, and different flow characteristics were determined from them. For the purpose of evaluation, the experimental flow was observed using an MHD apparatus: a magnetic field was created using circular magnets, and a number of cylindrical electrodes were utilized in order to generate the shear flow. The Kolmogorov flow was thus generated using a number of different devices and instruments [102].
The mean velocity, vorticity and Reynolds stress were also measured in [102], and a harmonic analysis of the resulting disturbances was performed with the dynamics of the system in view. The conclusions of the experiment verified the quasi-two-dimensional approximation for thin-layer fluids, and the results also gave an indication of the applicability of Q2D approximations. At the same time, the possibility of reconstructing Q2D shear motions was investigated, and correct behavior was determined for both of the profiles being investigated; the force profiles were analyzed in relation to these approximations. Another important result of the experiment was that Q2D flows could also be applied to varying atmospheric flows. However, the specific method proposed here cannot be applied directly to the atmosphere, the main reason being the need for a complete resolution of the observed vertical structure. Given data obtained from horizontal fields at varying altitude levels, this specific procedure can be generalized to different forms of reconstruction using the vorticity transformation equation reported in [102].
Turbulence of Shallow Water Flows Modeling
The most recent modeling study using the Kolmogorov approach was performed by Pu [103], who explored the turbulence of shallow water flows. To that end, the author combined a model of the shallow water equations with Kolmogorov's k-ε turbulence model and verified the simulations by comparing them to experimental data. He also compared the results of both the newly developed model and the validation experiments to previous studies, mainly focusing on comparisons with the Boussinesq model [104]. According to the author's conclusion, the newly developed model reproduced the flow characteristics of multiple-obstruction-induced flow reasonably well. The author also notes that the Kolmogorov scaling model should be given more attention by future studies as an achievable approach to resolve computationally demanding flow turbulence.
Simulations on Turbulent Kolmogorov Flow without Boundaries
Musacchio and Boffetta published the results of numerical simulations of turbulent Kolmogorov flow without boundaries [105]. The main aim of the study was to examine the dependence of turbulent drag on the Reynolds number, but the researchers also presented a detailed analysis of the scale-by-scale energy balance that shows how the kinetic energy is redistributed among different regions and scales. The study derives a prediction for the spatial transport of kinetic energy, describing how it is redistributed among different regions of the flow. The authors conclude that the Kolmogorov flow is the ideal framework to investigate the properties of spatial transfer of kinetic energy in nonhomogeneous, turbulent sheared flows.
Spatiotemporal Dynamics in Two-Dimensional Kolmogorov Flow
Lucas and Kerswell [106] studied the spatiotemporal dynamics in two-dimensional Kolmogorov flow over large domains. The numerical study aimed to examine the 2D Kolmogorov flow over an extended domain that would display spatially localized chaotic flows, i.e. states that approach 2D turbulence.
The results displayed rich spatiotemporal behavior once larger domains are considered, focusing on the existence of localized flow structures. However, the authors conclude that the disparity between the large domains used for the means of their study and the spatial extent of the localized chaos that exists is a major challenge, requiring the development of more efficient recurrent flow analysis strategies.
Transition to Turbulence in the Three-Dimensional Kolmogorov Flow
One of the few studies that examine three-dimensional Kolmogorov flows is the recently published paper of Veen and Goto [107], in which they examine the transition from a three-dimensional Kolmogorov flow to turbulence via numerical simulations. The authors study the subcritical transition process assuming the "simplest possible circumstances" of a flow on a triply periodic domain with aspect ratios equal to unity and forcing with the smallest wave number in one direction only. Their work reveals the presence of an equilibrium state close to the laminar flow with no drift in the streamwise or spanwise directions.
Conclusions
The research works of Andrei Nikolaevich Kolmogorov have aided science in getting answers and solutions for some of the most perplexing phenomena, including turbulence, shear flows, fluid behaviors, and probabilities. The study of two-dimensional flows that was initiated by Kolmogorov was continued by later researchers and mathematicians, and the analysis of magnetohydrodynamics and the mathematics behind it clearly indicates that Kolmogorov flows (k-flows) are a major subject of investigation. Numerical simulations and investigations have helped a great deal in advancing our understanding of these flows.
As specified in this review paper, it is evident that a number of efforts have been made towards understanding the laboratory measurements and realizations of these flows. The contributions of Kolmogorov to the field of fluid dynamics cannot be overstated. It is due to his works that we are now able to better understand the velocity fields at intermediate scales, chaotic flows, and the inertial shear range. In addition, his 5/3 law has remained a major landmark in this field, on which a number of researchers and scientists have built revolutionary theories of fluid dynamics.
It can be concluded that the study of these flows has helped in assessing the stability of viscous shear flows and the different behaviors exhibited by fluids. At the same time, the phenomenon of turbulence has been studied in detail using the principles and concepts put forward by Kolmogorov. It is also evident that we are still far from grasping a number of the random and chaotic behaviors exhibited by fluids. The Reynolds number has also been critical in these investigations. All the studies and researchers cited here seek to use Kolmogorov's works in ways that will help science. The applications of k-flows in engineering and related fields are numerous, which is why special attention should be paid to understanding them in detail. The contributions of Kolmogorov are not limited to fluid dynamics, mechanics, magnetohydrodynamics and mathematics; one of his major interests was to apply statistical theories and principles to real-life settings. For those who do not know, k-flow has also been extensively applied in the fields of economics [108], biology [109] and even data encryption [110]. More specifically, some studies are concerned with the applications of his equations in the simulation of financial activities; in simple words, his works have been applied in financial simulation modeling, and his equations have been used for formalizing comprehensive mathematical models. The behavior of turbulent eddies is now more thoroughly understood, and the credit for this goes directly to the works of Kolmogorov. Modern computational methods and techniques are also partly based on the insights and solutions derived from the works of Kolmogorov. Whether it is turbulence, heat transfer, stability of shear flows, bifurcation, stratification or simulation, the applications of k-flow can be seen in all these domains.
As a recommendation, it is suggested that future studies should be dedicated to reducing the intricacies and complexities of turbulence and shear flows. These studies should not only focus on understanding the theory put forward by Kolmogorov, but also on its applications in different fields. In addition, more detailed and accurate turbulence models can be formulated using the work of Kolmogorov. Scientists and mathematicians have been interested in applying the concepts of turbulence to complex flows, and Kolmogorov's ideas have also been used in the formulation of algebraic turbulence models. In short, the field of magnetohydrodynamics can advance through understanding and applying the works and studies of Kolmogorov.
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper.
"Physics"
] |
IoT Network Attack Detection and Mitigation
Cyberattacks on the Internet of Things (IoT) can cause major economic and physical damage: they can disrupt production lines, manufacturing processes and supply chains, compromise the physical safety of vehicles, and harm human health. We therefore describe and evaluate a distributed and robust attack detection and mitigation system for network environments, in which communicating decision agents use Graph Neural Networks to provide attack alerts. We also present an attack mitigation system that uses a Reinforcement Learning driven Software Defined Network to process the alerts generated by the attack detection system, together with Quality of Service measurements, so as to re-route sensitive traffic away from compromised network paths. Experimental results illustrate both the detection and the re-routing schemes.
I. INTRODUCTION
The IoT [1] has the potential to improve the critical processes that are at the heart of our socio-economic systems [2], [3]. However, it raises risks that go well beyond those of the individual technologies involved, such as the Internet, wireless networks and machine-to-machine systems [4], [5]. In addition to risks related to system malfunctions [6], quality of service (QoS) failures, and excessive energy consumption, the theft and tampering of data, conventional network attacks and attacks that deplete the energy of autonomous sensors and actuators also need to be considered [7]- [13]. Since IoT devices can carry out real-time measurements and controls much faster than human reaction times, we must design IoT networks that both detect and mitigate security risks automatically and adaptively, while preserving Quality of Service (QoS) and energy efficiency [6], [14]. Thus we propose an autonomic [15] scheme offering (a) distributed attack detection based on deep learning (DL) and graph neural networks to achieve high detection probabilities with low false alarm rates [16], [17], and (b) mitigation that exploits network Self-Awareness [18], [19] centered on Software Defined Networks [20] to achieve secure QoS-based routing of traffic flows using machine learning and adaptivity [21], [22].
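To illustrate how detection output and QoS measurements can jointly drive re-routing, the sketch below scores candidate paths by a weighted combination of total delay and the probability that at least one node on the path is compromised. This is a schematic stand-in for the RL-driven SDN routing described in Section III, not the actual algorithm; the node names, probabilities, delays and the security weight are assumptions made for the example:

```python
# Illustrative combination of detection output and QoS cost for path selection.
p_compromise = {"a": 0.02, "b": 0.65, "c": 0.05, "d": 0.01}           # detector output per node
delay_ms = {("a", "b"): 2.0, ("b", "d"): 2.5, ("a", "c"): 4.0, ("c", "d"): 3.5}

def path_cost(path, security_weight=10.0):
    # QoS part: total delay along the path
    qos = sum(delay_ms[(u, v)] for u, v in zip(path, path[1:]))
    # Security part: probability that at least one node on the path is compromised
    safe = 1.0
    for node in path:
        safe *= (1.0 - p_compromise[node])
    return qos + security_weight * (1.0 - safe)

candidate_paths = [("a", "b", "d"), ("a", "c", "d")]
best = min(candidate_paths, key=path_cost)
print("selected path:", best, "cost:", round(path_cost(best), 2))
```

With these example values, the longer but safer path through node "c" is preferred over the shorter path through the likely-compromised node "b", which is the qualitative behavior the mitigation scheme aims for.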
Thus Section II discusses a multi-agent system (MAS) for network attack detection, and summarizes its performance. The overall system architecture for attack detection and mitigation is presented in Section III. The node attack detection probability estimated by MAS is used to compute safer paths in the network using reinforcement learning as described in Sections III-A and III-B. Experimental results are presented in Section III-C, and Section IV presents conclusions and future work.
II. DISTRIBUTED ATTACK DETECTION
IoT systems are distributed and have a heterogeneous structure, which is an additional challenge for real-time anomaly detection [23], [24]. The distributed MAS for detecting attacks therefore monitors the network traffic in a distributed manner, and feeds its output to the novel routing system described in Section III, which mitigates attacks with an SDN-based routing engine. The MAS's multiple, mutually communicating agents improve its robustness by incorporating redundancy in the detection algorithm [25]. The MAS also offers scalability, since its modularity allows new agents to be added if the IoT network grows, and agents exchange information [26] in a structure inspired by Graph Neural Networks [16], [17].
The structure of the IoT network is reflected by the graph G(V, E), where V corresponds to the set of nodes of the IoT network, and E ⊂ V × V is the set of edges representing the nodes which communicate (directly or indirectly) with each other through the IoT network. The nodes can represent sensors or actuators, edge nodes, servers or routers in the IoT network. We associate a real-valued feature vector x_i ∈ R^{N_V} with each i ∈ V, where N_V is its length. Similarly, we associate the real-valued feature vector e_{ij} ∈ R^{N_E} of length N_E with each edge (i, j) ∈ E. An example of the features for the nodes and edges is given in Table I. Measurements that collect the feature vector parameters are taken in the IoT network during successive time slots [(t − 1)T, tT), where T is the slot length and t is the slot index. The slots are long enough to provide representative data, but short enough to reflect time variations in the system. Thus all feature vectors are also associated with individual slots and successive values: x_i^{t,k} is the k-th successive value of x_i within the t-th slot, while e_{ij}^{t,k} is the k-th successive value of e_{ij} in the t-th slot. We denote by e_{ij}^t and x_i^t, respectively, the feature vector values at the end of the t-th slot, while e_{ij}^0 and x_i^0 are their values when the measurement system starts to operate and the first slot begins. The MAS uses four Deep Neural Networks (DNNs): • The EDNN (edge DNN), which undertakes the edge update: the EDNN uses an edge's current features, and the features of the two nodes at its endpoints, to update the edge's features.
• The NDNN (node DNN), which undertakes the node update: it updates a given node's features using the average value of the features of the nodes with which it communicates and of the related edges, where m_j = |{i s.t. (i, j) ∈ E}| is the number of neighbours of j ∈ V, and s.t. stands for "such that".
• The third DNN, CLN, builds p_{N_i}^{k,t}, the probability that node i is compromised, using only its own feature vector. Finally, the fourth DNN, CLNEI, builds p_{N_ij}^{k,t}, the probability that i determines that its neighbour j is compromised. These four networks constitute the node agent; they are duplicated in each node and can be trained off-line. They operate in each node separately and asynchronously. Starting from feature vectors from data gathered during the previous time slot, they update the decision probabilities and communicate their updated feature values and decision probabilities to their neighbours. The multiple iterations of these operations, represented by the integer k, allow nodes and edges to update and exchange information several times within each time interval, as shown schematically in Figure 2. The entire network of nodes is trained in a supervised manner using the back-propagation algorithm with cross-entropy as the cost function for classification.
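To make the data flow between these four networks concrete, the following is a minimal Python sketch. It is not the paper's implementation: the four DNNs are replaced by hypothetical random stand-ins, the feature lengths N_V and N_E and the function names (make_mlp, slot_update) are illustrative choices made here, and only the message-passing pattern described above is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
N_V, N_E = 8, 4                      # node / edge feature lengths (illustrative)

def make_mlp(n_in, n_out):
    """Stand-in for a trained DNN: a fixed random affine map plus tanh."""
    W = rng.normal(scale=0.1, size=(n_out, n_in))
    return lambda v: np.tanh(W @ v)

EDNN  = make_mlp(N_E + 2 * N_V, N_E)       # edge update
NDNN  = make_mlp(2 * N_V + N_E, N_V)       # node update
CLN   = make_mlp(N_V, 1)                   # p(node i is compromised)
CLNEI = make_mlp(N_V + N_E, 1)             # p(i decides neighbour j is compromised)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def slot_update(x, e, K=3):
    """One time slot: K message-passing iterations over nodes and edges.
    x: dict node -> feature vector, e: dict (i, j) -> edge feature vector."""
    for _ in range(K):
        # EDNN: each edge is updated from its own features and its two endpoints
        e = {(i, j): EDNN(np.concatenate([e[(i, j)], x[i], x[j]]))
             for (i, j) in e}
        # NDNN: each node is updated from the average neighbour/edge features
        new_x = {}
        for j in x:
            nbrs = [i for (i, jj) in e if jj == j]
            xbar = np.mean([x[i] for i in nbrs], axis=0) if nbrs else np.zeros(N_V)
            ebar = np.mean([e[(i, j)] for i in nbrs], axis=0) if nbrs else np.zeros(N_E)
            new_x[j] = NDNN(np.concatenate([x[j], xbar, ebar]))
        x = new_x
    # Decision probabilities at the end of the slot
    p_node = {i: float(sigmoid(CLN(x[i]))[0]) for i in x}
    p_pair = {(i, j): float(sigmoid(CLNEI(np.concatenate([x[i], e[(i, j)]])))[0])
              for (i, j) in e}
    return x, e, p_node, p_pair

# Tiny example graph with three nodes and two (directed) edges
x0 = {n: rng.normal(size=N_V) for n in (1, 2, 3)}
e0 = {(1, 2): rng.normal(size=N_E), (2, 3): rng.normal(size=N_E)}
x1, e1, p_node, p_pair = slot_update(x0, e0)
```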
In the system that we have described, each node agent can perform anomaly detection not only on itself, but also with regard to its neighbouring nodes. This redundancy improves the algorithm's robustness in cases where some agents may fail, since the agents at neighbouring nodes may still detect anomalies that occur at their neighbours. Finally, to combine the overlapping decisions of different agents into a single decision for each node in the IoT network, we use a simple aggregation method, where a node is considered anomalous if at least one agent has reported it as being anomalous. More sophisticated aggregation schemes can be considered in future work.
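A small sketch of this "at least one agent" rule, with hypothetical node names and probability values:

```python
def aggregate_decisions(reports, threshold=0.5):
    """A node is flagged anomalous if at least one observing agent reports it.
    'reports' maps each node to the probabilities produced for it by the agents
    that observe it (its own output and its neighbours' outputs)."""
    return {node: any(p >= threshold for p in probs)
            for node, probs in reports.items()}

# Node 'b' is flagged because a single neighbour reports p = 0.9
print(aggregate_decisions({"a": [0.10, 0.20], "b": [0.30, 0.90]}))
```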
To evaluate the proposed approach, we have used a simulated infiltration attack, where the attacker tries to infiltrate the network by scanning a range of IP addresses to discover running services, and performs a dictionary attack in order to find vulnerable IoT devices. The resulting Receiver Operating Characteristic (ROC) is shown in Figure 3. The overall results are summarized in Table II. The metrics used for the evaluation are the Area Under the Curve (AUC) score, the detection accuracy, the utilized Bandwidth, and the Power consumption [27]. For the first two metrics, which measure detection efficiency, the proposed approach outperforms all other methods we have tested for anomaly detection, achieving an AUC score and accuracy of 0.99, compared to 0.97 for the second (random forest) and third (decision tree) classifiers. With respect to the last two metrics, i.e. Bandwidth and Power consumption, the proposed decentralized approach greatly reduces the bandwidth required for monitoring, which in turn reduces the power that is consumed. However, the execution time and the power consumed at each node will determine the energy consumed by our approach, so that a slower low power approach may consume more energy than a very fast method that uses higher power.
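For reference, the two detection metrics named above can be computed from per-node labels and detector scores as in the following sketch; the scikit-learn calls are standard, but the labels and scores below are made up for illustration and are not the paper's data.

```python
from sklearn.metrics import roc_auc_score, accuracy_score

# y_true: ground truth (1 = node under attack), y_score: detector probability
y_true  = [0, 0, 1, 1, 0, 1]
y_score = [0.05, 0.20, 0.85, 0.95, 0.10, 0.70]

auc = roc_auc_score(y_true, y_score)
acc = accuracy_score(y_true, [int(s >= 0.5) for s in y_score])
print(f"AUC = {auc:.2f}, accuracy = {acc:.2f}")
```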
III. SYSTEM ARCHITECTURE AND ROUTING ENGINE
The Architecture of the SerIoT system is shown in Figure 4, with interconnected smart forwarding engines (SFE) that are connected to fixed or mobile IoT devices, IoT gateways, and to Cloud Servers which may also be Fog servers. SFEs may be connected to Honeypots (H) whose role is to attract and interpret attacks. Specific software at IoT devices and gateways may be used to detect attacks [28], but here we use the decisions provided by the distributed attack detector of Section II. QoS, Energy and Security are monitored and forwarded to "smart controllers and routing engines" (SRE) which operate as OpenFlow SDN Controllers to choose paths and download them to the SFEs [29], [30].
The SRE uses the Cognitive Packet Network (CPN) routing algorithm [31], implemented with the Random Neural Network (RNN) and Reinforcement Learning [32], which has also attracted interest from industry [33]. It extends a standard SDN network with the SRE; with SFEs, which are extensions of SDN forwarders; with the Monitoring and Anomaly Detection (MAD) module, which detects potential threats from data collected by SerCPN; with Active Honeypots that attract attacks, deflect them to safe IP locations, and inform the SRE; and with local attack detectors at nodes and gateways [28].
Each SFE, shown schematically in Figure 5, switches SDN flows according to the OpenFlow protocol. In addition to payload traffic, SFEs also forward smart packets (SPs) which gather security, QoS and energy usage data from the SFEs, IoT devices and gateways. Each SFE has a Cognitive Packet Agent (CPA) that unpacks the SP, adds its own data to the list stored inside, packs it again and forwards it to the next SFE. SPs travel over paths, carrying information provided by the SFEs on the path. When a CPA recognizes that a SP has reached the end of its path, it encapsulates it and forwards it to the corresponding SRE, where its data is unloaded into the local Network State Database (NetStatDB). SFEs can also forward data that is monitored (such as packet counters or byte counters) to the MAD at the SRE.
Each SRE is based on ONOS [34] and its software is implemented as an ONOS application with the three main modules shown in Figure 6. The heart of the system is the Cognitive Routing Module (CRM) that implements decisions taken by an RNN [35] with Reinforcement Learning for path selection based on QoS, security or energy consumption in the network. The MAD detects attacks at nodes using the MAS of Section II. Other attack detection methods will also be considered in future work [28].
The SRE selects paths based on a Goal Function G(f, P) which has non-negative real values and which must be minimized, where f denotes the packet flow to or from an IoT device or end-user software, and P denotes a path travelled by f. The MAS of Section II provides the probability p_i that node i is under attack. For an SFE or network node i, the Trust Level T(f, i) is a non-negative number that is high when i is deemed secure enough to convey the flow f. Also, S(f, i) is defined as the sensitivity of f to attacks at node i.
A. Linking T (f, i) to the p i from MAS
Let A > 0 be a large positive constant used so that T(f, i) may take values comparable to QoS values such as the delay of links, and let p_i be the probability that an attack is detected at node i by the MAS. Then T(f, i) = A·(1 − p_i) is the security level of f related to node i. Let S(f, i) be the sensitivity of f to the security of node i. The Insecurity Factor I(f, i) is then used to "separate" i and f, where we use the notation [X]+ = X if X > 0, and [X]+ = 0 if X ≤ 0. If we take S(f, i) = A, then I(f, i) = A·p_i, and we see that as p_i increases, the "security cost" incurred by f as it travels through i increases. The "Insecurity Factor" that relates flows to paths is built from the node values I(f, i) for i ∈ P; when less attention is paid to security, we may take the smaller value I(f, P) = max_{i∈P} I(f, i).
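The defining equation of the Insecurity Factor is not reproduced in the extracted text above, but the special case quoted (S(f, i) = A giving I(f, i) = A·p_i) is consistent with I(f, i) = [S(f, i) − T(f, i)]+, and the sketch below assumes that form. The default path aggregation (a sum over the nodes of the path) is also an assumption, chosen only because the text calls the max-based form "the smaller value".

```python
def pos(x):                            # the [X]+ notation from the text
    return x if x > 0 else 0.0

def trust(p_i, A=100.0):               # T(f, i) = A * (1 - p_i)
    return A * (1.0 - p_i)

def insecurity_node(p_i, S, A=100.0):
    # Assumed form, consistent with S(f, i) = A  =>  I(f, i) = A * p_i
    return pos(S - trust(p_i, A))

def insecurity_path(p_on_path, S, A=100.0, strict=True):
    """Combine node values over a path P; strict=False uses the smaller,
    max-based variant mentioned in the text."""
    vals = [insecurity_node(p, S, A) for p in p_on_path]
    return sum(vals) if strict else max(vals)

# Example: three nodes with attack probabilities 0.1, 0.8, 0.0 and S = A
print(insecurity_path([0.1, 0.8, 0.0], S=100.0))                 # 90.0
print(insecurity_path([0.1, 0.8, 0.0], S=100.0, strict=False))   # 80.0
```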
Let L(f, P) be the packet loss ratio and D(f, P) be the forwarding delay for a packet of f on path P, while J_i is the energy consumption per packet at node i. The packet retransmissions due to packet losses [31], [36] enter the goal function, where θ ≥ 0 is a security threshold that can be chosen based on the importance of security considerations for this system. G(f, i) or G(f, P) are quantities to be minimized, but Reinforcement Learning (RL) requires a "reward" R(f, i) that should be maximized, where R(f, i) = 1/G(f, i) and R(f, P) = 1/G(f, P).
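The exact expression of G(f, P) is not reproduced above; the sketch below is only illustrative, combining the named ingredients (loss L, delay D, energy J, insecurity I, threshold θ and weights α, β, γ) in one plausible way. The use of D/(1 − L) to account for retransmissions, and the weighting itself, are assumptions rather than the paper's formula.

```python
def goal(metrics, alpha=1.0, beta=1.0, gamma=1.0, theta=0.0):
    """Illustrative goal function G(f, P) built from the quantities named in
    the text; the precise combination is an assumption for this sketch."""
    L, D, J, I = metrics["loss"], metrics["delay"], metrics["energy"], metrics["insec"]
    effective_delay = D / (1.0 - L)            # retransmissions inflate delay
    return alpha * effective_delay + beta * J + gamma * max(I - theta, 0.0)

def reward(metrics, **kw):
    return 1.0 / goal(metrics, **kw)           # R(f, P) = 1 / G(f, P)

# Example path metrics (made-up numbers)
m = {"loss": 0.02, "delay": 12.0, "energy": 3.0, "insec": 80.0}
print(goal(m, theta=50.0), reward(m, theta=50.0))
```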
B. Reinforcement Learning
The metrics that feed into the quantity R(f, i) are collected via measurements, except those that are initially fixed, namely θ, S(f, i) and the parameters such as α, β, γ describing the relative importance of different factors. Therefore the RL based routing scheme to improve network security, QoS and energy consumption collects at each node i the quantity G(f, i), and hence R(f, i), at successive arrivals of a SP packet to an SDN controller. The SP will bring back the relevant data for R(f, i) concerning each node i that the SP has visited, to the SDN router that exploits the RL algorithm to compute a "next hop" for SPs. Let the integer l refer to the l-th value of the reward R_l(i, f) computed by the SDN router for the node i and flow f. The RL algorithm will first compute a quantity T_l that describes the historical behaviour of the reward, and tells how well the network has been doing. The RL algorithm will then compute a set of RNN [35] weights as follows.
For an N node RNN, where N is the number of outgoing links for node i, we associate with each outgoing link a neuron whose state is represented by the "excitation probability" q_i of the RNN. The RNN weights are real numbers W+_{ij}, W−_{ij} ≥ 0 for i, j ∈ {1, ... , N}. From RNN theory [35] we know how the q_j are computed, where r_j is the "total firing rate" of the neuron j. λ+_j, λ−_j are, respectively, the arrival rates of excitatory and inhibitory spikes to neuron j from outside the RNN, which are set so that when all connection weights are equal, all neurons in the network have an excitation probability of q_j = 0.5.
Let k be the index of the neuron for which, after the (v − 1)-th update of the RNN, we have q_k = max{q_1, ... , q_N}. Also save the current value r_j ← Σ_{l=1}^{N} [W+_{jl} + W−_{jl}]. Note that the node from which a SP entered the node where the next-hop decision is being taken will not be used as the next-hop, so that the decision at a given node will select one outgoing link among N − 1. The RNN's weights are updated as follows: if R_l ≥ T_{l−1}, the update applies ∀ j ≠ k, j ≠ i(Previous); if R_l < T_{l−1}, the update applies ∀ j ≠ k, j ≠ i(Previous). Here we divide by N − 2 since we are excluding i(Previous), from which the SP initially arrived, and we do not increase the inhibitory weights of the winner node when R_l ≥ T_{l−1}, nor the excitatory weights of the loser node when R_l < T_{l−1}. We then also renormalize the weights. Finally we calculate all the q_j from equation (12), and select the new output link for flow f at node i as the link k* with q_{k*} = max{q_j : j ≠ i(Previous), 1 ≤ j ≤ N}.
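The following sketch puts the pieces of this RL step together: an exponentially smoothed threshold T_l (the smoothing constant is an assumption; the elided expression is described only as the "historical behaviour of the reward"), the fixed-point computation of the RNN excitation probabilities, and the next-hop selection. The weight-update equations themselves are not reproduced, and all numerical values are illustrative.

```python
import numpy as np

def update_threshold(T_prev, R_l, a=0.8):
    """Smoothed historical reward; the smoothing constant a is assumed."""
    return a * T_prev + (1.0 - a) * R_l

def rnn_excitation(Wp, Wm, lam_p, lam_m, iters=200):
    """Fixed-point computation of the excitation probabilities q_j.
    Wp, Wm: non-negative excitatory / inhibitory weights (N x N);
    lam_p, lam_m: external excitatory / inhibitory spike arrival rates."""
    r = (Wp + Wm).sum(axis=1)            # total firing rate of each neuron
    q = np.full(Wp.shape[0], 0.5)
    for _ in range(iters):
        num = lam_p + q @ Wp             # incoming excitatory rate
        den = r + lam_m + q @ Wm         # firing rate plus incoming inhibition
        q = np.minimum(num / den, 1.0)
    return q

def next_hop(q, previous_hop):
    """Choose the most excited outgoing link, excluding the arrival link."""
    return max((j for j in range(len(q)) if j != previous_hop),
               key=lambda j: q[j])

# Illustrative node with N = 4 outgoing links
rng = np.random.default_rng(1)
Wp, Wm = rng.uniform(0, 1, (4, 4)), rng.uniform(0, 1, (4, 4))
lam = np.full(4, 0.5)
q = rnn_excitation(Wp, Wm, lam, lam)
print(q, next_hop(q, previous_hop=2))
```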
C. Experimental Results
Experiments were run on a network with several SFEs composed of Linux boxes with ARMv8 processors (1.4 GHz, 4 GBit Ethernet, 2.4 GHz and 5 GHz 802.11b/g/n/ac WiFi interface). They were configured to use Ethernet as the data plane interface shown in Figure 7, and WiFi for management, monitoring and for communications with the SRE. In the figure s1, ... , s7 denote SFEs, and h1, ... , h4 are IoT devices, each with a 633 MHz MIPS processor, a 100 Mb/s Ethernet port, and a 2.4 GHz WiFi connection used as a management port; the SRE is a workstation connected by WiFi to the test-bed, and the MAD is installed on a separate workstation connected by Ethernet to the SRE, and by WiFi to the test-bed. The type of experiments we run are represented by the measured event trace shown in Figure 8.
In our experiments, every distinct pair of IoT devices in {h1, ... , h4} forward 20 packets/sec or roughly 20 − 40 Kb/sec with 12 ongoing connections so that each packet rate is compatible with IoT connections monitoring temperature or water flow in pipes, etc. SPs are generated by every edge SFE at 10 packets/sec. SRE management traffic includes OpenFlow commands, link and topology discovery packets, and traffic statistics. Management packet traffic through SFEs measured using the Wireshark packet analyzer was four to five times higher than SP traffic.
The experiments illustrate the system's ability to be Self-Aware and to adapt, and we measure the SRE's reaction time to abrupt changes in the security conditions expressed by the trust level for connections, and track changes to the parameters R_l and T_{l−1} given in (11) for the Reinforcement Learning Algorithm's successive steps l. The SRE was programmed to change network paths every 5 seconds, so that the experimental results we present are limited by this constraint, which has been placed to avoid frequent changes that may increase system overheads. The effect of changing the trust TF(., .) is shown in Figure 8. The quantity that is plotted is the time it takes the SRE to respond to a large increase of 100 in the value of TF(f, i) for a node i on the path that is currently used. We see that the reaction time is on average around 1 second, with a maximum value of around 2 seconds.
IV. CONCLUSIONS AND FUTURE WORK
In this paper we have described a system that detects node attacks in an IoT network using a deep learning based Multiple Agent System, and exploits attack detection in order to automatically mitigate the attacks by re-routing sensitive traffic using Reinforcement Learning, while also taking into consideration the QoS of different network paths. We have also provided a preliminary evaluation of the performance of both the attack detection and mitigation systems. In future research, additional measurements, fine tuning of parameters, and experiments will be conducted to better evaluate the interaction of QoS and security in complex adaptive IoT networks. Using methods from diffusion processes [37], [38], we will investigate the transients due to SDN based frequent route updates in response to potential attacks and changes in QoS. We will also test locally operating anomaly and attack detection software at nodes to reduce computation times for anomaly detection and improve the response of the system to QoS and security changes, while possibly reducing the accuracy offered by the proposed network-wide anomaly detection scheme.
"Computer Science"
] |
Vocal Communication in Androgynous Territorial Defense by Migratory Birds
Many temperate zone breeding birds spend their non-breeding period in the tropics where they defend individual territories. Unlike tropical birds, which use song for both breeding and non-breeding territorial defense, migrants differ strikingly in vocal defense between breeding and non-breeding territories. Song, restricted to males, is used during defense of breeding territories, but callnotes are used to defend non-breeding territories. To explain why callnotes and not songs predominate in the non-breeding context, we present an empirical model based upon predictions from motivational/structural rules, ranging theory and latitudinal differences in extra-pair mating systems. Due to sex role divergence during breeding that favors singing in males, but not females, females may be unable to range male song. Ranging requires a signal to be in both the sender's and receiver's repertoire to allow the distance between them to be assessed (ranged). Non-breeding territories of migrants are defended by both males and females as exclusive individual (androgynous) territories. Ranging theory predicts that callnotes, being shared by both males and females, can in turn be ranged by both, so they are effective in androgynous territoriality. Where songs are used for non-breeding territorial defense, both sexes sing, supporting the evolutionary significance of shared vocalizations in androgynous territorial defense.
Introduction
A continuing challenge in evolution is to identify the sources of selection acting on signal structure in vocal communication. Research has recently shifted from an emphasis on information transfer, wherein the sender's goals determine signal structure, to the receiver's goals (e.g., [1]). Under the receiver control model, receiver assessment of signals feeds back to the sender by influencing what the sender's signals have accomplished [2][3][4]. Assessment insures that signals are honest and allows us to identify results of sexual conflict (sensu [5]). Here, we provide an example of how signal assessment can produce seasonal signal structure changes in the same individuals in the context of territorial defense.
Many migrant birds are territorial in temperate-zone breeding areas and resume territorial behavior when they reach nonbreeding areas. They are, in essence, territorial throughout the year, but breeding and nonbreeding territories are widely separated, often thousands of kilometers apart, in space as well as in time [6].
There are major changes in territorial behavior between breeding and nonbreeding periods that affect the context of communication. While breeding, the roles of the sexes of most migrant species are usually highly differentiated, with males singing and defending territories while females build nests and incubate eggs. In the migrant's tropical nonbreeding period, females and males that have nonbreeding territories defend them against any and all conspecifics regardless of gender (androgynous territoriality) [6,7]. Female mate selection and male territoriality often result in males being larger than females; thus, sexual selection during the breeding period often induces sexual inequality in the nonbreeding period when it comes to defending territories [8][9][10]. Both periods of territoriality interact with territorial habitat quality in the long nonbreeding period to produce important carry over effects on reproductive success [11,12]. The evolutionary context of communication changes from one dominated by sexual selection to one dominated by natural selection and sexual conflict. How have these changes affected vocal communication involved in territorial defense?
Here, we describe the changes in signals used in breeding and androgynous territoriality and in species that switch from breeding territoriality to nonterritorial nonbreeding social systems. To explain the evolution of signal changes in the two contexts, it is necessary to approach the question from two aspects, although they interact. One is concerned with the changes in signal categories used in breeding versus androgynous territoriality, and the second concerns the physical structure of these signals. For signal categories, we apply ranging theory and for signal structure, we apply motivation-structural rules. Ranging describes selection on long-distance signals derived from perception of the distance and location of a sound's source [13]. Motivation-structural rules predict signal structures in a two-dimensional mosaic in a motivational gradient from highly aggressive to fearful (or friendly) [3,14].
A Priori Predictions from Two Models
2.1. Ranging Theory Predictions. After a sound leaves a bird's mouth, it begins to change in the relative amplitudes of its mix of sound frequencies, reverberation, and amplitude. When a bird hears a sound, it perceives these features of acoustical degradation and uses them to estimate its distance to the sound's origin. It can do this because the amount of degradation correlates with distance ([13,15], reviewed in [16]). Distance assessment compares the degraded signal with one in the receiver's own neural song control system (memory) [13,[17][18][19], perhaps via mirror neurons in the HVC song center [20].
How does a bird use this distance assessment mechanism to defend its territory? If the assessor can range the signal, it knows how close it is to the defender. A signal that is simply detectable can be ignored, but one that "sounds close" cannot be ignored. For a signal to sound close, it must be in the assessor's memory so that the defender can use the assessor's own distance assessment mechanism to threaten it [13]. The widespread use of matched countersinging is a familiar example of territory defenders using the ranging ability of intruders to threaten them [20,21].
Singing is a male-only trait in most temperate zone passerines [22], used for repelling other males and attracting females as mates and extra pair partners (reviewed in [23]). In contrast, when these same birds defend their tropical androgynous territories they exclude all conspecifics. Song is not efficient in androgynous territorial defense because the nonsinging female gender may be incapable of ranging song [13]. Furthermore, males from diverse areas in the breeding range have different songs. When males converge in the tropics, a given male's song may not be in the repertoire of other males he is attempting to repel. As songs are perceived categorically, parts of songs or phrases that are shared between dialects probably would not permit ranging [24]. Therefore, ranging theory predicts that song will not be favored for androgynous territorial defense because many intruders will not be able to range songs and so cannot be threatened by singing. Ranging predicts that vocalizations shared among all members of a species would be favored.
Motivation-Structural Rules
Predictions. Ranging of songs has been studied, but call notes are used in a great many contexts, for example, in mobbing, alarming, or in contact, so they vary greatly in structure. If ranging involves shared sound types, how should call notes used as territorial vocalizations be structured, given their great variation in contextual usage? The motivation/structural rules (M-S) model [3,14] provides a framework. M-S rules tie the sender's motivation to the physical structure of calls. To briefly summarize this relationship, an aggressive bird uses low-pitched and harsh sounds whereas friendly, fearful, or appeasing birds use high-pitched and tonal sounds. This dichotomy represents motivational "endpoints", used when communication is about to end and the animal is on the verge of fight or flight. Most communication events lie somewhere between these endpoints and most vocalizations do too. Sounds between the endpoints contain a mix of both: they rise and fall in pitch and appear chevron-like in spectrograms [3,25].
A common avian call note has this intermediate structure and is often described, onomatopoetically, as chip. We call them "barks" here, as a general term, to exemplify their similarity in function and structure to a dog's bark. In androgynous territorial defense, barks fulfill all the requirements for ranging that song does not. They are shared by all conspecifics and, as a consequence, males and females from all populations may be able to range distance when hearing them. Barks, often used periodically with no need for exogenous stimulation, are a form of tonic communication [26] that reduces intrusions and saves energy in territory defense [27]. Barks can be motivationally "neutral" or tend towards motivational endpoints by changing in frequency and sound quality to signify changes in aggressiveness [3]. Barks are also rife with degradation cues for ranging because their frequency sweeps are ideal for producing reverberation [28]. Barks can be produced with high source amplitude (loudly) and bark amplitude may be useful for ranging [29]. The predictions from ranging theory and M-S rules coincide in predicting that vocal defense of androgynous territories is mediated through call notes, particularly motivationally "neutral" barks.
Methods
We documented vocalizations used by migratory passerine birds during the nonbreeding season in the course of field work in Panama, Mexico, Cuba, Venezuela, and Colombia over the last 40 years. We also queried colleagues who were familiar with certain species and consulted field guides to use their standardized onomatopoetic renditions of call notes or song that we documented were used for nonbreeding territorial defense. We classified species into the general social categories of territorial or flocking, with a few sharing both attributes due to changes in behavior coincident with tropical dry or wet seasons [7]. Territorial species were defined as those whose members defend an area for 2 mo or more during the nonbreeding period. "Flocking" species refer to those occurring in conspecific social groups, to distinguish these from species that, although they join mixed species flocks, exhibit territorial defense because they defend these flocks against joining by other conspecific individuals. We defined three categories of vocalizations, song, bark, and growl, based upon their physical structure as defined by M-S rules discussed above. Song was defined as a longer, mainly tonal, vocalization [30]. We compiled our observations for four speciose families of New World passerines (tyrannids, turdids, vireonids, and parulids) and two species of thraupids (recently considered to be cardinals by [31]) using the avian classification in Howard and Moore [32].
Results
Our data contained 18 New World flycatcher species (tyrannids), six thrushes (turdids), nine vireos (vireonids), 39 New World warblers (parulids), and two species of cardinals in the genus Piranga. Both songs and call notes were used. Most of the flycatchers (64%), all of the thrushes and warblers (100%), and none of the vireos (0%) used barks for defending territories (Table 1). The data are not phylogenetically independent, so we compare within and between genera in each family. Of the five genera representing the flycatchers, all members of one (Contopus, three species) used song, the eight Empidonax all used barks, one of the two Sayornis used a bark but the other used song, the single member of Myiarchus used song, and all four Tyrannus used barks. The flycatchers were territorial except for the four Tyrannus, which occurred in flocks. Species in the two thrush genera used barks, as did all members of the 12 warbler genera represented in the sample (Lovette et al. [33] revised warbler genera, but our conclusions are unaffected). Most of the vireos used growls, but one used song. The territorial vireos used sequences of growls, sometimes species-specific, but the three flocking vireos used single growls. The territorial summer tanager (Piranga rubra) used barks, but the flocking species is quiet, according to two field biologists familiar with the scarlet tanager (P. olivacea) in its South American nonbreeding area.
Barks are the most common vocalizations used in long-distance territorial defense (Table 1). An example of a bark is shown in Figure 1. Acadian flycatchers (Empidonax virescens) repeat these call notes about 25 times per minute on their tropical territories in Panama, particularly at dawn and dusk (unpublished data).
Discussion
The ranging model succeeded in predicting that vocalizations used in territorial defense would be shared by both sexes during androgynous territoriality in birds. Sharing by both sexes was the major generality, for there was much variation in type of vocalization. Most species used shared barks, but some used song or growls, also shared by both sexes, and there was a definite influence of phylogeny (Table 2).
Barking Vocalizations.
The motivation-structural rules model predicted that call note structures would be motivationally neutral barks or growls. A territory holder could repel intruders simply by making its presence known by using any species-specific vocalization. Barks used to defend territories, like song, seem to occur either endogenously or as a specific response to an intruder. Why are barks, rather than song, the predominant form of vocalization used in territorial defense?
Perhaps an answer is that, unlike most song, which is tied to a mate choice context, barks are freed to vary in structure to symbolize motivation. They may lower in pitch or be uttered more rapidly (e.g., Kentucky warbler (Geothlypis formosus), [34], pers. obs.) (Figure 2). On their tropical nonbreeding territories, hooded warblers (Wilsonia citrina) bark in a regular cadence, but barks given during border confrontations are delivered more rapidly, culminating in rapidly delivered chippity-chups or "stuttering" barks, so rapid that it appears as though more than a single bird is giving them (Figure 3). If this fails, and they are face to face, the defender utters low and harsh growls, zrrr, the aggressive endpoint predicted by M/S rules, and attacks [35]. The chevron structure of barks makes the compass direction of the caller known to listeners [36]. But all of these useful attributes are found in other vocalizations, such as song, so other causes for the abundant use of barks by migrant birds must be at work. We believe the evidence supports the idea that vocalizations found in all individuals, including the variation in call notes described above, can be ranged by all individuals and so are efficacious in defending nonbreeding territories. The widespread use of barks is predicted by motivation-structural rules because such "motivationally neutral" territorial signals can be endogenously produced to defend territories. Such production repels potential intruders, reducing the frequency of actual intrusion and saving energy and foraging time.
Growling
Vocalizations. Not all groups of birds use barks for nonbreeding territory defense. Barks are not used by vireos, breeding or nonbreeding. Instead, vireos produce a series of growl-like calls used all year. In yellow-throated vireos (Vireo flavifrons), a series of growls is given, as stereotyped in delivery as songs, with a descending pitch and slowing cadence (cha-cha-cha-cha cha chaa..chaa chaaa). Both sexes use this "chatter growl" to repel conspecifics from the interspecific canopy flocks they defend during the nonbreeding period. Blue-headed vireos (Vireo solitarius) also growl, but theirs lack a stereotyped sequence. Instead, they are variable in pitch and repetition rate.
We point out the great contrast between the vireos that defend winter territories and those that do not. The red-eyed vireo (Vireo olivaceus) and Philadelphia vireo (V. philadelphicus) tend to join mixed species flocks composed of highly frugivorous species. Neither species appears to defend a flock against conspecifics. These vireos are silent, except for occasional alarm notes, nyaah in the case of the red-eyed. Philadelphia vireos are partial to joining greenlet flocks (Hylophilus minor) in Panama, and it is difficult for an observer to tell vireos and greenlets apart. The Piranga cardinals exhibit similar adaptations in nonbreeding vocalizations. The summer tanager (P. rubra) maintains territories defended with barks, whereas the social, flocking scarlet tanager (P. olivacea) remains silent (Table 1).
Singing in Overwinter Territorial
Defense. As an exception to the "bark, do not sing" rule, many territorial vireos sing, as well as growl, during the nonbreeding season. Because vireo sexes are indistinguishable in the field, perhaps this nonbreeding singing is performed only by males. However, for one species, the white-eyed vireo (Vireo griseus), females are known to sing in defense of winter territories [37]. Female white-eyed vireos do not sing during the breeding season. Typically, their winter songs begin with barks. These barks are all mimetic, derived from thrushes, flycatchers and other species that use barks for nonbreeding territorial defense (pers. obs.). Male breeding songs do not incorporate mimetic barks. Perhaps these borrowed barks are useful in repelling other species from the Bursera fruit this species depends upon for winter survival [38]. In one of the few systematic playback studies of overwintering birds, Greenberg et al. [38] found that white-eyed vireos overwintering on the Yucatan Peninsula and resident mangrove vireos (Vireo pallens) respond more to playbacks of one another's chatter growls than to playbacks of songs, which differ greatly in the two species. Perhaps these growls, because they are closer to the aggressive endpoint of M-S rules, are less species-specific than barks and, therefore, evoke more interspecific responses, useful in defense of fruit resources against other vireo species.
Vireos that sing in nonbreeding territorial defense, the first seven vireo species in Table 1, compose a subgenus whose members share the morphological patterns of eye rings and wing bars [39]. Only males sing during the breeding season but, hormonally, males have low testosterone levels throughout the breeding season and prolactin levels equivalent to those of females, at least in blue-headed vireos [40]. Sex roles during reproduction have converged relative to most temperate zone breeding songbirds, with males building nests and incubating eggs, coupled with genetic monogamy [18]. Females choose mates that invest heavily in parental care and then abandon care of fledged young to these parentally-oriented males [41]. We suggest that the hormonal convergence that underlies their unusually similar sex roles (for temperate zone breeding species) is related to the use of song by both genders during androgynous territoriality. Of course, we offer this as a stimulus for further research.
Other Trends in Overwintering Communication.
Resident and migratory yellow warblers (Dendroica petechia) illustrate divergence in territorial vocalizations. Overwintering migrant yellow warblers are highly territorial both intra- and inter-specifically [42,43]. Intraspecifically, the resident "mangrove" forms of the yellow warbler have dealt with the territorial birds from northern breeding populations in two ways. Their bark has diverged from the northern birds' sharp chip! to become a soft chup. Such chip divergence is found in other subdominant warbler species. Magnolia warbler (Dendroica magnolia) barks sound like quince quince, a bark that sounds to the human ear very different from the sharp bark of the yellow warbler, and would probably not evoke aggression in dominant territorial yellow warblers. While migrant birds often rely on barks for defense of nonbreeding territories, there are many exceptions (Table 1). We have suggested a model, a combination of ranging theory and M-S rules, to predict and to explain why call notes and not songs are used in the non-breeding context for most species. The empirical data would seem to support the predictions from these models, but there is much variability in the types of vocalizations used in androgynous territoriality that remains to be described. The efficacy of the ranging and motivation-structural rules models to predict androgynous territorial vocalizations can and should be tested with species in other temperate/tropical migratory bird systems.
Figure 1:
Figure 1: Territorial call note of an Acadian flycatcher (Empidonax virescens) from Panama showing the chevron shape characteristic of barks. Note the "mossy" appearance due to reverberations arriving at the microphone slightly later than the directly transmitted sound.
Figure 2:
Figure 2: Three consecutive barks from a Kentucky warbler (Oporornis formosus) showing changes in frequency range. In response to playback or territorial intrusion, this species uses a lower-pitch range of call notes, as predicted by motivation-structural rules (see text).
Figure 3:
Figure 3: Hooded warblers (Wilsonia citrina) use a rich variety of call notes to defend nonbreeding territories. In the upper row, the left hand bark (a) shows reverberation. The upper central figure (b) shows the reverse chevron (down then up), a species-specific characteristic of barks given spontaneously in territorial maintenance. This is followed by a series of rapid call notes (chippity chups; (c)), highly variable in structure, used by aggressive individuals before attacking specific rivals. The four lower call notes (d) are from breeding individuals. Barks are used in defense of nestlings, and the high-pitch chevron note on the far right shows an elevated frequency typically given by adults when a predator is near the nest.
Table 1:
Nonbreeding vocalizations in five families of Nearctic-Neotropical migrant passerines a.
a Data for both sexes, collected by authors unless otherwise noted. b See text for description of these vocal categories. c I. Bisson, pers. com. d J. Townsend, pers. com. e R. Greenberg and J. Ahern, pers. com.
Table 2:
Frequency of use of barks by both sexes of species in four taxonomic groups that defend nonbreeding territories.
"Biology"
] |
Electrospun Zein/PCL Fibrous Matrices Release Tetracycline in a Controlled Manner, Killing Staphylococcus aureus Both in Biofilms and Ex Vivo on Pig Skin, and are Compatible with Human Skin Cells
Purpose To investigate the destruction of clinically-relevant bacteria within biofilms via the sustained release of the antibiotic tetracycline from zein-based electrospun polymeric fibrous matrices and to demonstrate the compatibility of such wound dressing matrices with human skin cells. Methods Zein/PCL triple layered fibrous dressings with entrapped tetracycline were electrospun. The successful entrapment of tetracycline in these dressings was validated. The successful release of bioactive tetracycline, the destruction of preformed biofilms, and the viability of fibroblast (FEK4) cells were investigated. Results The sustained release of tetracycline from these matrices led to the efficient destruction of preformed biofilms from Staphylococcus aureus MRSA252 in vitro, and of MRSA252 and ATCC 25923 bacteria in an ex vivo pig skin model using 1 × 1 cm square matrices containing tetracycline (30 μg). Human FEK4 cells grew normally in the presence of these matrices. Conclusions The ability of the zein-based matrices to destroy bacteria within increasingly complex in vitro biofilm models was clearly established. An ex vivo pig skin assay showed that these matrices, with entrapped tetracycline, efficiently kill bacteria and this, combined with their compatibility with a human skin cell line suggest these matrices are well suited for applications in wound healing and infection control.
INTRODUCTION
Electrospinning is an established technique for the fabrication of nanoscale fibres (1)(2)(3). It continues to be studied extensively due to its various advantages such as high surface-to-volume ratio, tuneable porosity, and ease of surface functionalization. Indeed, the resulting fibres are extremely useful for applications in tissue engineering, drug delivery, and wound dressings. This is an area with huge potential for controlled release research. As electrospun fibres mimic the extracellular matrix (ECM) of tissues in terms of scale and morphology, there is the potential for them to be used as scaffolds. Taken together with their physical and chemical properties, electrospun scaffolds are being evaluated in various cellular studies, in sustained drug delivery, and as potential wound dressings (1)(2)(3). The localized and controlled drug delivery that might be achieved from electrospun micro/nanofibres could be applied in new treatments for burns or biomedical applications related to chronic wounds (1)(2)(3).
Burns and chronic wounds, such as diabetic ulcers, are hard to heal and require prolonged treatment due to a number of clinical complications (4,5). There is an increasing interest in the topical application of antimicrobials to overcome the problems associated with the low levels of antibiotic in the granulating tissue (5). In future wound treatments, there may also be a desire to leave dressings on for extended periods, as this will minimise damage to newly formed tissue. However, current application of topical antimicrobials requires daily or twice-daily changes of the dressing, leading to patient discomfort as well as being time consuming and costly. There is therefore a considerable interest in dressings that allow the controlled release of antibiotics (6).
Biofilms are prevalent in nature and appear to be associated with the majority of infections, e.g. wound infections, catheter-linked infections, endocarditis, dental caries, and cystic fibrosis (7). Microbial communities infect chronic wounds and they often involve biofilms rather than planktonic cells (8)(9)(10). A particular problem of bacteria within biofilms is that they are significantly more resistant to antibiotics compared to their free-floating planktonic counterparts (11). Indeed, as a result of this increased resistance of biofilms to treatment by antibacterial agents, it is important to test the controlled release of an antibiotic matrix against bacterial biofilms. In wounds that require antibiotic treatment, localized antibiotic delivery systems may overcome the problems associated with the low antibiotic levels in the granulating tissue (5). We have recently developed formulations in which tetracycline (Tet) hydrochloride has been successfully incorporated in multi-layered electrospun micro/nanofibre matrices of zein and poly-ε-caprolactone (PCL) (12). We now report assays designed to test antibacterial activity achieved in a series of models of selected wound-associated biofilms of increasing complexity.
Alpha-zein is a corn (maize) protein containing a high percentage of non-polar amino acids with the ability to form aggregates and entrap solutes, such as drugs and amino acids. However, there are only a few papers, and those all recently published (12)(13)(14)(15)(16), reporting the incorporation of antibiotics within electrospun zein. Antimicrobial (chitosan) electrospun zein fibre structures provide a new strong antimicrobial ultrathin-structured system (13). Also, in order to develop biocompatible nanofibrous membranes for wound healing, the coelectrospinning of two proteins, zein and collagen, in aqueous acetic acid solution, was investigated where the combination with zein improved the electrospinning of collagen. The drug berberine was then incorporated in situ into the electrospun nanofibrous membrane, with little effect on fibre morphology and cell viability, in order to investigate its controlled release and antibacterial activity. Wound healing by these berberine releasing nanofibre membranes was examined in vivo using female Sprague Dawley rats and histology (14).
Luzardo-Alvarez and co-workers have used NMR spectroscopy to detect binding interactions and measure affinity between zein and three different drugs: indomethacin and the antibiotics amoxicillin and Tet. Such protein-drug interactions show that zein is promising for the rational design of drug delivery vehicles (15). Luzardo-Alvarez et al. have also reported that treatment with Tet antibiotics within the periodontal pocket against Staphylococcus aureus bacterial infections represents a useful and adjunctive tool to conventional therapy for healing and teeth preservation. Thus, a two-polymer system of zein and PLGA has been developed as a biodegradable implant (16). Sustained release of Tet was obtained, and the proportion of zein in the inserts had a significant impact on the antibiotic release. Indeed, an effective release of Tet from the inserts against S. aureus was achieved over 30 days of controlled delivery, and hence this may be suitable for the intra-pocket delivery of antimicrobial agents in the treatment of periodontitis (16).
In this paper, we report the incorporation of Tet in electrospun micro/nanofibre zein/PCL triple layers (3L) and its controlled release from these matrices (12). We show excellent antibiotic activity resulting in the destruction of different clinically-relevant S. aureus bacterial strains that are efficient biofilm formers, including activity against MRSA252, a representative of a lineage (EMRSA-16) that is endemic in UK hospitals (17). In particular, we investigate the biological activity of sustained release Tet in an ex vivo pig skin model relevant for wound dressing research and, for the first time, we report on the compatibility of such zein/PCL wound dressing matrices with human fibroblast (FEK4) skin cells.
Preparation and Characterisation of Electrospun Matrices of Zein or Zein/PCL
Triple-layered (3L) matrices were prepared as recently reported (12). Zein solution was prepared at 30% (w/v) in a 1:1 (v/v) mixture of 2,2,2-trifluoroethanol (TFE):dichloromethane (DCM). Tet was dissolved in TFE at 5% (w/w) of the weight of zein. For blended matrices of zein and PCL, zein was dissolved at 20% and PCL at 10% in 1:1 (v/v) TFE:DCM, resulting in solutions with a total polymer concentration of 30% (w/v) with Tet (dissolved in TFE) again incorporated at 5% of the weight of the polymer. The polymer solution was loaded into a syringe and electrospun at 18 kV and a flow rate of 0.75 mL/h, with a distance between the tip of the needle and the collector of 13 cm. The flow rate was controlled by a syringe infusion pump (Cole Parmer, 230 VAC). The collector was constructed of two parallel metal electrodes covered with aluminium foil. 3L matrices consist of outer layers free of drug and an inner layer with Tet at 5% of the weight of the polymer. To make these matrices, each polymer solution was electrospun using a fixed volume for each layer (1 mL for the outer layers and 0.5 mL for the inner layer) in a layer-by-layer manner. Two matrices were produced: triple-layered zein with Tet in the middle layer (zein 3L) and triple-layered zein/PCL with Tet in the middle layer (zein/PCL 3L). Three replicates of each formulation were fabricated. The viscosity of the electrospinning solutions was measured using a Bohlin high resolution C-VOR 200 Rheometer equipped with a plate accessory using the spindle type CP4/40 maintained at 25°C. The shear rate was 50-100 Pa for the zein solution and 100-500 Pa for the zein/PCL solution. The surface morphology of electrospun matrices was observed by scanning electron microscopy (SEM) on matrices cut into small cm²-sized pieces (12).
In Vitro Drug Release
The electrospun matrices were cut into 1.2×1.2 cm squares and, to minimise the effect of drug release from the edges, the samples were adhered to plastic coverslips using a gene frame to give an available release surface of 1 cm 2 . Samples were placed under sink conditions in plastic vials containing phosphate buffered saline (PBS, 5 mL, pH 7.4) and incubated at 37°C. At set time-points, the PBS was replaced and Tet release was determined in the sampled buffer by measuring its UV absorbance at λ=360 nm against a standard curve. Cumulative Tet release was determined by comparing the mass released at each time point with the theoretical mass of Tet encapsulated in each sample. Triplicate samples were examined for each formulation and the experiment was performed three times with independently electrospun mats (12).
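A short sketch of how the cumulative release profile can be computed from the sampled absorbances; the standard-curve slope and intercept, and the numbers in the example, are placeholders rather than measured values.

```python
def cumulative_release(absorbances, slope, intercept, volume_ml, loaded_ug):
    """Cumulative % release under sink conditions with full buffer replacement
    at every time point; concentration is read off the 360 nm standard curve."""
    released_ug, profile = 0.0, []
    for A in absorbances:
        conc_ug_per_ml = (A - intercept) / slope   # standard curve: A = slope*c + intercept
        released_ug += conc_ug_per_ml * volume_ml  # buffer fully replaced each time
        profile.append(100.0 * released_ug / loaded_ug)
    return profile

# Placeholder example: three sampling points from a mat loaded with 30 ug Tet
print(cumulative_release([0.06, 0.03, 0.015], slope=0.03, intercept=0.0,
                         volume_ml=5, loaded_ug=30))
```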
MIC Planktonic Bacteria Assay
Minimum inhibitory concentration (MIC) values against planktonic bacterial cells were determined with a microdilution broth method using MH Broth as previously described (18).
96-Well Microtiter Plate (MTP) Biofilm Assay
The effect of antibiotics on biofilms of S. aureus MRSA252 was tested as we have recently described (19)(20)(21), with minor modifications. Briefly, these bacterial cells were cultured (15 h) in TSB containing 0.5% glucose and 3% NaCl (TSB-GN) and diluted 20-fold. The diluted bacterial suspension (200 μL/well) was dispensed into a polystyrene 96-well plate, and the plates were incubated for 24 h at 37°C on a 3-dimensional plate rotator (40 rpm). The cell suspension was removed and the biofilms were washed carefully with sterile PBS (200 μL).
The effectiveness across a panel of antibiotics was then tested on preformed biofilms on the same plate, therefore on the same day and under the same conditions. To test the antibiotics, 200 μL fresh TSB-GN (control wells) or TSB-GN containing the appropriate concentration of antibiotic (50 μg/mL) was added to preformed (24 h) biofilms, and the plates were then incubated for a further 24 h. After this, non-adherent cells were removed, and the biofilms were washed with sterile PBS (3×200 μL/well). The plates were dried (1 h at 20°C) and biofilms were then stained with crystal violet solution (1% w/v). After 15 min, the excess of crystal violet was removed, plates were washed briefly with water (4×250 mL), and the crystal violet was dissolved in aqueous acetic acid (30% v/v in distilled water). The absorbance, representative of the amount of biofilm remaining after treatment, was measured at λ=595 nm (A595) using a FLUOstar Omega spectrophotometer (BMG LABTECH, UK).
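For reference, the amount of biofilm remaining after treatment can be expressed relative to the untreated control wells; the blank subtraction in this sketch is an assumption, and the absorbance values are placeholders.

```python
def percent_biofilm_remaining(a595_treated, a595_untreated, a595_blank=0.0):
    """Biofilm remaining after treatment, from crystal violet A595 readings."""
    return 100.0 * (a595_treated - a595_blank) / (a595_untreated - a595_blank)

# Placeholder readings: treated wells retain about half the biofilm
print(percent_biofilm_remaining(a595_treated=0.55, a595_untreated=1.10))  # 50.0
```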
To test the effectiveness of Tet loaded electrospun matrices, fresh TSB-GN (200 μL) was added to preformed (24 h) biofilms of S. aureus MRSA252, and then matrices (6 mm diameter) or Tet solution were added to each well in 7 groups, with 3 replicates: matrices containing 30 μg Tet (zein 3L, zein/PCL 3L), matrices containing no Tet (single layer zein no Tet, zein/PCL no Tet), Tet solution (30 μg/well), a negative control (media+Tet solution 30 μg/well, in wells without biofilm), and a positive control (wells with biofilm, only fresh medium added). Plates were incubated again for 24 h at 37°C. The next day, the same discs were transferred to newly preformed biofilms, incubated for 24 h at 37°C, and this was repeated a third time. After each treatment, the bacterial suspension was removed and the biofilms were washed, stained, and analysed as above (19)(20)(21).
Colony Biofilm Model (CBM)
The effect of Tet loaded electrospun matrices on biofilms was tested in the CBM as recently described (20,22), with minor modifications. S. aureus MRSA252 was cultured (15 h) in TSB-GN, and diluted to an OD at 600 nm of 0.4-0.6. Three sterile 13 mm polycarbonate discs were placed on the surface of TSB agar plates and aliquots (50 μL) from the diluted bacterial suspension were spotted on to each disc. Inoculated discs were incubated for 72 h at 37°C to allow formation of the biofilm. The polycarbonate discs were carefully transferred to new agar plates with sterile forceps on a daily basis. On the fourth day, the discs were covered with zein 3L or zein/PCL 3L (13 mm diameter). The third polycarbonate disc was left without any treatment. Before covering the discs with the 3L matrices, the surface of the biofilms was wetted with TSB (10 μL) and, after coverage, another 20 μL of broth was applied on top of the matrices. The plates were then incubated again at 37°C for 24 h. Each disc was then transferred to a tube containing TSB (5 mL) and kept cool on ice during the experiment. The tubes were vortexed extensively (~5 min) in order to disrupt mechanically the biofilms and detach the bacteria from the discs. Suspended cells were then serially diluted to 10 −7 in broth, and aliquots (10 μL) of 10 −4 , 10 −5 , 10 −6 , and 10 −7 dilutions were spotted on TSB agar plates, to determine the colony forming units (CFU)/mL using the Miles-Misra method (23). The plates were incubated at 37°C for 24 h and the numbers of CFU were counted. The number of CFU/disc was determined using the following formula: CFU/disc = CFU counted × dilution factor × 100×5; this was then normalised by the number of CFU/biofilm disc.
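As a worked example of the CFU/disc formula (the factor 100 converts the 10 μL spot to a per-mL count and the factor 5 accounts for the 5 mL of broth), consider a hypothetical count of 23 colonies at the 10⁻⁵ dilution:

```python
def cfu_per_disc(cfu_counted, dilution_factor, spot_ul=10, broth_ml=5):
    """CFU/disc = CFU counted x dilution factor x (1000 / spot volume in uL) x broth volume.
    With a 10 uL spot and 5 mL of broth this is the text's 'x 100 x 5'."""
    return cfu_counted * dilution_factor * (1000 / spot_ul) * broth_ml

# Hypothetical count: 23 colonies from the 1e-5 dilution spot
print(cfu_per_disc(23, 1e5))   # 1.15e9 CFU per disc
```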
Ex Vivo Pig Skin Infection Model
The routine preparation of the pig skin followed this procedure. Thawed pig skin (ex-abattoir, stored for weeks frozen) was epilated with a dry razor, stripped with tape 10 times to remove the upper dead layers, and finally cut into 1×1 cm² pieces. The pieces were sterilised firstly by immersing in 70% ethanol for 20 min, dried for 20 min in a biosafety cabinet, followed by soaking in a solution of antibiotics for 16 h (kanamycin sulfate 20 μg/mL and ampicillin 50 μg/mL). On the day of the experiment, a cut 0.9 cm long was created along the skin to reach the epidermal layer using a sharp knife. The cut pig skin squares were then placed on TSB agar plates prepared with antibiotics (kanamycin sulfate 20 μg/mL and ampicillin 50 μg/mL).
S. aureus MRSA252 or ATCC 25923 were cultured (15 h) in TSB-GN. Inoculation was with an aliquot (20 μL) of bacterial suspension applied to the epidermal side of the skin and spread uniformly. The inoculated skin pieces were incubated for 5 days at 37°C in a humidified chamber. Every day the pieces were transferred to a new agar plate. Four pig skin pieces were used in each repeat. On day 5, the pig skin pieces were covered with zein 3L, zein/PCL 3L (1 cm 2 ), or a filter paper loaded with 30 μg Tet. A fourth pig skin piece was left untreated. Before covering the pig skin pieces with the 3L matrices, the surface of the biofilms was wetted with TSB-GN (10 μL) and, after coverage, another 20 μL of broth was applied on top of the matrices. The plates were then incubated again at 37°C for 24 h. Each skin piece was then transferred to a tube containing MH broth (5 mL) and kept cool on ice during the experiment. The tubes were vortexed extensively (~5 min) in order to detach mechanically the bacteria from the skin. Suspended cells were then serially diluted to 10 −7 in broth, and aliquots (10 μL) of 10 −4 , 10 −5 , 10 −6 , and 10 −7 dilutions were spotted on TSB-GN agar plates, to determine the colony forming units (CFU)/mL using the Miles-Misra method (23). The plates were incubated at 37°C for 24 h and the numbers of CFU were counted. The number of CFU/skin piece was determined using the above formula; this was then normalised by the number of CFU/untreated skin piece.
Cell Viability Test (MTS Assay)
Single layer zein and zein/PCL electrospun matrices containing 5% Tet were punched into 6 mm diameter discs and fixed to the bottom of the wells of 96-well plates using pieces of gene frame. FEK4 cells are passage-dependent human primary foreskin fibroblasts. They were routinely cultured in Earle's modified minimum essential medium supplemented with 10% fetal calf serum (heat-inactivated at 56°C for 45 min before use) and 50 IU/mL of penicillin and streptomycin, and maintained at 37°C in a humidified incubator with 5% CO2. FEK4 cells between passage 11 and 17 were seeded in each well at 750 cells/100 μL and incubated for 3 days before the relative cell number was assessed using the MTS assay. On the day of the assay, MTS reagent (20 μL) was added to each well and incubated for another 4 h at 37°C. After incubation, culture medium (50 μL) was then transferred from each well to a new 96-well plate and the absorbance of the solutions measured at 490 nm (n=3, in triplicate). Tet solution alone and cells seeded directly on the tissue culture plastic wells were used as controls.
Student's t-tests were performed using Microsoft Excel to determine any statistically significant differences between the formulations and the commercially available Tet filter discs; p<0.05 is considered to be significant.
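A minimal equivalent of the significance test described above (a two-sample Student's t-test with p < 0.05 as the criterion); the readings are hypothetical stand-ins for a matrix formulation versus the commercial tetracycline disc.

```python
from scipy import stats

# Hypothetical triplicate readings for a matrix formulation and the Tet filter disc
formulation = [18.2, 19.1, 18.7]
tet_disc    = [17.5, 17.9, 18.1]

t_stat, p_value = stats.ttest_ind(formulation, tet_disc)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, significant: {p_value < 0.05}")
```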
96-Well Microtiter Plate (MTP) Assay of Antibiotic Activity in Destroying MRSA Biofilms
A now well-established consequence of the formation and maturation of microbial biofilms is increased resistance to antibiotics and hence their failure in therapy (24). In part, this occurs by preventing access of the antibiotics to their sites of action, together with other mechanisms still not completely understood (24). We have tested a range of clinically important antibiotics against preformed MRSA252 biofilms using the Microtitre Plate (MTP) assay model.
The data show (Fig. 1) that vancomycin, the glycopeptide often named as the antibiotic of last resort for certain multidrug resistant Gram-positive infections, and gentamicin are only weakly active against this S. aureus strain once biofilms have formed. However, tetracycline (Tet), used for the treatment of skin infections, and chloramphenicol, less widely used except in over the counter (OTC) eye-drops against conjunctivitis, were active, removing ~50% of the biofilms (Fig. 1). In addition, we have also assayed the MIC values of these antibiotics against planktonic MRSA252 bacterial cells. The MIC values are: gentamicin 1 μg/mL; Tet 0.25 μg/mL; vancomycin 0.25 μg/mL; chloramphenicol 16 μg/mL (18). These MIC values are considerably (200-fold for Tet) lower than the concentrations used (Fig. 1) in destroying MRSA biofilms. Gentamicin is known to be less active against intracellular bacteria (25,26), but that is not relevant in the MTP assay where the bacteria are adhered to polystyrene wells. The activity of these antibiotics against planktonic cells is significantly higher, and the lower activity is due to the biofilm state of the bacteria. Lack of biological activity against biofilms is due to a number of factors, including penetration of the antibiotic into the biofilm, and the low rates of cell growth in a biofilm as many cells in a biofilm are in a semi-dormant state (27).
With regard to potential wound dressings, we have therefore focussed our research on the sustained and controlled release of Tet in in vitro and ex vivo models of increasing complexity with bacterial targets where biofilms are known to be a clinical problem. Interestingly, Tet is normally considered bacteriostatic, yet at the concentrations used here it results in the removal of the biofilms. However, it should be noted that at concentrations > 0.3 μg/mL Tet was shown to be bactericidal for S. aureus (28). The concentrations used here are significantly above that value, and thus it is likely that Tet kills cells in the biofilms, albeit rather inefficiently. We have therefore designed and used a variety of assays in order to evaluate the antibacterial efficacy of electrospun Tet-loaded matrices including: a 96-well MTP biofilm assay, a colony biofilm model, and an ex vivo pig skin model.
Characterization of Electrospun Matrices of Zein or Zein/PCL
The diameter of electrospun zein fibres (in triple layers, 3L) was 0.99±0.36 μm (12). Electrospun zein/PCL fibres were thicker than zein fibres and showed more variability, with a diameter of 1.51±0.65 μm for zein/PCL (20:10) 3L. This can be explained, in part, by the difference in viscosities displayed by the polymeric solutions. The zein solution viscosity was 0.186±0.012 Pa·s, whereas the zein/PCL solution viscosity was much higher at 1.84±0.24 Pa·s. The increase in viscosity may indicate greater polymer chain entanglement in the solution. Applying the same voltage to the more viscous zein/PCL solution led to less jet stretching during the electrospinning process, resulting in a larger fibre diameter.
Tet Release from Triple-Layered Matrices
Prior to release studies, the encapsulation efficiency of each matrix formulation was determined with mats cut into small discs (~6 mm diameter), weighed and then dissolved in methanol:DCM (1:1 v/v, 10 mL). From the UV absorbance (λ=360 nm, subtracting for zein absorbance at this wavelength), the amount of Tet HCl in the fibres was then calculated using a Tet HCl calibration curve and subsequently compared to the theoretical value (5%). Within experimental error, encapsulation was quantitative for all but the zein/PCL (20:10) 3L samples which had an encapsulation efficiency of 71±11% (12).
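A minimal sketch of this encapsulation-efficiency calculation is shown below, assuming a linear UV calibration curve at 360 nm with zein background already subtracted; all standard concentrations, absorbances and mat masses are hypothetical.

```python
"""Minimal sketch, assuming hypothetical calibration standards and disc data."""

import numpy as np

# Calibration standards: Tet HCl concentration (ug/mL) vs background-corrected A360
std_conc = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
std_a360 = np.array([0.062, 0.151, 0.305, 0.612, 1.220])
slope, intercept = np.polyfit(std_conc, std_a360, 1)   # linear calibration line

def encapsulation_efficiency(a360_sample, volume_ml, mat_mass_mg, theoretical_loading=0.05):
    conc_ug_ml = (a360_sample - intercept) / slope      # concentration from calibration
    tet_mg = conc_ug_ml * volume_ml / 1000.0            # total Tet in the extract
    expected_mg = mat_mass_mg * theoretical_loading     # 5% w/w nominal loading
    return 100.0 * tet_mg / expected_mg

# Hypothetical disc: 4.1 mg of mat dissolved in 10 mL methanol:DCM, A360 = 0.44
print(f"Encapsulation efficiency = {encapsulation_efficiency(0.44, 10.0, 4.1):.0f}%")
```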
It is well known that light, temperature, moisture, and duration of storage influence the stability of Tet, leading to a decrease in its microbiological activity and an increase in its toxicity (29). The physical and chemical stability of Tet in these electrospun zein matrices has therefore been examined in detail (12). Following our recent precedent with electrospun PCL matrices (29), we have applied ¹H NMR spectroscopy (Fig. 2), showing that the chemical integrity of loaded Tet HCl was maintained after the electrospinning process, as all the signals of Tet HCl could be observed, comparable with the literature data (29). We have also demonstrated spectroscopic (UV) and spectrometric (MS) stability of electrospun Tet, and employed Raman microscopy to demonstrate the even distribution of Tet in the electrospun fibres (12). Significantly, we also show, vide infra, that the electrospun Tet is still biologically active, working equally effectively in comparison with commercial Tet. In our previous in vitro study using electrospun zein (12), a fast release of Tet from electrospun triple-layered zein (3L) was observed within the first 3 h (47% of the encapsulated drug). We have also previously shown that zein matrices shrink in aqueous media and lose their fibrous structure, becoming a film (12) (Fig. 3). In order to overcome this, we blended PCL with zein. This successfully maintained the fibrous structure on contact with water and stopped the shrinkage (12) (Fig. 3). In addition, zein/PCL 3L showed a more gradual release of Tet, as only 19% was released within the first 3 h. However, both formulations liberated 50% of the encapsulated Tet after 24 h (Fig. 3). In the following days, zein 3L sustained the release of Tet for up to 20 days, during which a further 27% of encapsulated Tet was released, whereas zein/PCL 3L released only a further 10% of Tet up to day 15, when the release plateaued. In the first few hours, zein/PCL 3L therefore showed a more gradual release than zein 3L; however, in the following days it released Tet much more slowly than zein 3L. It is possible that the addition of PCL to the formulation increased the hydrophobicity of the layered matrix and thus restricted water access to the drug encapsulated within the polymeric blended fibres.
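As one possible way of summarising such cumulative release profiles, the sketch below fits a simple first-order release model; the model choice and all time points and release values are illustrative assumptions and are not taken from the study.

```python
"""Minimal sketch, assuming a first-order release model
M(t)/M_inf = F_max * (1 - exp(-k t)) and hypothetical release data."""

import numpy as np
from scipy.optimize import curve_fit

def first_order(t, f_max, k):
    return f_max * (1.0 - np.exp(-k * t))

t_h = np.array([1, 3, 6, 24, 72, 168, 360], dtype=float)        # hours
release_zein = np.array([30, 47, 50, 52, 60, 68, 77], dtype=float)   # % released
release_blend = np.array([10, 19, 30, 50, 55, 58, 60], dtype=float)  # % released

for name, y in (("zein 3L", release_zein), ("zein/PCL 3L", release_blend)):
    (f_max, k), _ = curve_fit(first_order, t_h, y, p0=(70.0, 0.1))
    print(f"{name:12s} F_max = {f_max:.0f}%, k = {k:.3f} per h")
```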
Effect of Matrices on Biofilms Measured in the MTP Assay
The MTP assay has the advantage of being a simple, rapid, and inexpensive screen of anti-biofilm agents using crystal violet (19,30). We modified the design of the assay in order to investigate the antibacterial effect of sustained Tet release and observed that 3L matrices significantly decreased the biomass of biofilms formed by S. aureus MRSA252, and remained bioactive when re-used on fresh biofilms (for three consecutive days; more than 90% decrease, p<0.001; Fig. 4). Compared with the 3L matrices, the Tet control decreased the absorbance more in the first 48 h (p<0.05), while there was no significant difference after 72 h (p>0.05). This is because a significant proportion of the Tet load in the 3L matrices was encapsulated in the polymeric fibrous matrices and not rapidly released.
Colony Biofilm Model (CBM)
In this model, biofilms were grown on a polycarbonate membrane that sat on top of an agar plate in a system that mimics biofilms growing in a wound (20,22). The low fluid shear and the proximity to an air interface provided by this model simulate the wound environment (31). Additionally, the nutrient flow is similar to that of biofilms in a wound, with carbon and nitrogen sources (usually from the host tissue in vivo) coming from the agar, and the oxygen diffusing into the biofilm from the air interface on the opposite side of the biofilm (20,22,31). Figure 5 shows the antibacterial effectiveness of the formulations. Compared with the control biofilms, the 3L matrices significantly reduced the number of living cells in the biofilms on the polycarbonate discs. The CFU/disc was reduced from 100% for untreated biofilms to ~35% for 3L matrices (p<0.01). There was no significant difference in CFU/disc achieved with zein 3L compared to zein/PCL 3L (p>0.05).
Ex Vivo Pig Skin Infection Model
Pig skin and human skin share many physiological and anatomical similarities. For example, both pig and man have a thick epidermis (50 to 120 μm in humans compared to 30 to 140 μm in pigs). Both have well-developed rete-ridges, papillary bodies, and abundant subdermal adipose tissue. They also demonstrate a similar size, orientation, and distribution of blood vessels, adnexal structures, type of keratinous proteins, collagen, body hair and lipid composition of the stratum corneum (32). Here, an ex vivo pig skin model was used to evaluate the antimicrobial efficacy of the 3L electrospun matrices against two bacterial strains, MRSA252 and ATCC 25923. Figure 6 shows the antibacterial effectiveness of the formulations against MRSA252 when grown on pig skin for 5 days ex vivo. Compared with the untreated pig skin samples (as a normalised control), the Tet loaded 3L matrices significantly reduced the number of living cells; the CFU/skin sample was reduced from 100% for untreated samples to 22% for commercial Tet impregnated filter discs, 19% for zein/PCL 3L matrices, and 10% for zein 3L matrices (all p<0.01). We did note, however, that S. aureus MRSA252 did not grow particularly well on pig skin, and for that reason a second, meticillin-sensitive (MSSA) S. aureus strain (ATCC 25923) was also used. This strain grew significantly better on pig skin and, as for MRSA252, treating the infected pig skin with Tet loaded zein/PCL 3L matrices also led to a significant reduction in the number of CFU of ATCC 25923 (from 100±39% for the untreated sample to 27±4% for the treated sample, p<0.01). Thus, these Tet loaded 3L matrices efficiently killed both S. aureus MRSA252 and MSSA ATCC 25923 grown for 5 days on pig skin ex vivo and then treated with the matrix formulation for 24 h.
Cell Viability Test (MTS Assay)
Having established in three different models of bacterial biofilm formation that these electrospun fibrous 3L matrices release bioactive Tet in a sustained manner, we wanted to show that this formulation is not toxic to human skin cells. The 3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium (MTS) assay was therefore used to demonstrate that there was no adverse effect on the growth of human primary skin fibroblast (FEK4) cells seeded on zein/PCL+Tet electrospun fibrous matrices (single layer). Commercially available Tet impregnated filter discs were used as a control to investigate whether Tet is toxic to these fibroblast cells. Figure 7 shows that the metabolic activity of the cells 72 h after seeding on the zein/PCL+Tet electrospun mats was similar to that of cells seeded on the tissue culture plastic with or without the commercial Tet filter discs (p>0.05), demonstrating that the zein/PCL blend, as a biomaterial, supported fibroblast adhesion and growth, and that the release of Tet from this formulation had no detrimental effect. This biocompatibility of soluble Tet with these cells was confirmed by the observation that commercially available Tet filter discs did not affect the growth of FEK4 cells on tissue culture plastic (Fig. 7). These results, demonstrating the cellular compatibility of zein, are similar to those of a recent study examining the growth of murine fibroblasts on zein nanofibres containing different concentrations of curcumin (33). Therefore, these zein/PCL electrospun matrices provide an attractive structure for the attachment and growth of fibroblasts as cell culture surfaces, and so they are a suitable candidate for further applications in drug delivery systems. Indeed, a related application on the release of the antibiotic amoxicillin from electrospun fibrous wound dressing patches has recently been reported in this Journal (34).
CONCLUSIONS
The potential and feasibility of a blend of zein and PCL as a drug delivery vehicle using designed and engineered electrospun matrices has been investigated. As electrospun Tet encapsulated zein fibres (i.e. zein without PCL) shrank significantly in aqueous media, Tet encapsulated micro/nanofibre zein/PCL 3L electrospun matrices were prepared, where blending with PCL stopped the shrinkage (12). Tet release was controlled for over 10 days, demonstrating a potential for application in wound treatments, e.g. as dressings or even as implants. In this paper, we demonstrate that this sustained-release Tet showed excellent antibiotic activity in destroying preformed biofilms of S. aureus MRSA252 in models resembling the situation found in wounds. In addition, applying these Tet loaded matrices, for the first time, to ex vivo pig skin models, growing either MRSA252 or ATCC 25923, significantly reduced the bioburden of these clinically relevant bacteria. These zein/PCL electrospun matrices were also shown to be compatible with human fibroblast FEK4 skin cells. Taken together, the clearly established ability of these Tet loaded zein-based matrices to destroy bacteria within increasingly complex in vitro and ex vivo biofilm models and their biocompatibility with human skin cells lead to the conclusion that these matrices could be well suited for applications in wound healing and infection control.
"Materials Science",
"Medicine"
] |
Benzoxathiol derivative BOT-4-one suppresses L540 lymphoma cell survival and proliferation via inhibition of JAK3/STAT3 signaling
The persistently activated JAK/STAT3 signaling pathway plays a pivotal role in various human cancers, including major carcinomas and hematologic tumors, and is implicated in cancer cell survival and proliferation. Therefore, inhibition of JAK/STAT3 signaling may have clinical application in cancer therapy. Here, we report that 2-cyclohexylimino-6-methyl-6,7-dihydro-5H-benzo[1,3]oxathiol-4-one (BOT-4-one), a small molecule inhibitor of JAK/STAT3 signaling, induces apoptosis through inhibition of STAT3 activation. BOT-4-one suppressed cytokine (upd)-induced tyrosine phosphorylation and transcriptional activity of STAT92E, the sole Drosophila STAT homolog. Consistently, BOT-4-one significantly inhibited STAT3 tyrosine phosphorylation and expression of the STAT3 downstream target gene SOCS3 in various human cancer cell lines, and its effect was more potent in the JAK3-activated Hodgkin's lymphoma cell line than in the JAK2-activated breast and prostate cancer cell lines. In addition, BOT-4-one-treated Hodgkin's lymphoma cells showed decreased survival and proliferation owing to the induction of apoptosis through down-regulation of STAT3 downstream anti-apoptotic gene expression. These results suggest that BOT-4-one is a novel small molecule inhibitor of JAK3/STAT3 signaling and may have therapeutic potential in the treatment of human cancers harboring aberrant JAK3/STAT3 signaling, specifically Hodgkin's lymphoma.
Introduction
The JAK/STAT signaling cascade was originally characterized during interferon-mediated signal transduction studies in the early 1990s (Shuai et al., 1992; Müller et al., 1993; Watling et al., 1993). JAKs belong to a family of non-receptor tyrosine kinases, and STATs are latent cytosolic transcription factors that relay signals from the cell membrane to the nucleus. The JAK and STAT protein families are composed of four and seven members in mammals, respectively, and JAK/STAT pathways are up-regulated by more than fifty different cytokines and growth factors (Schindler and Plumlee, 2008). The binding of cytokines and growth factors to their corresponding transmembrane receptors subsequently activates membrane-associated JAK and STAT proteins by phosphorylation of specific tyrosine residues. STAT proteins are a family of cytosolic transcription factors that have dual functions: they transduce signals through the cytoplasm and act as transcription factors in the nucleus (Takeda and Akira, 2000).
[Displaced Figure 1 legend: (B) BOT-4-one inhibited cytokine (upd)-induced tyrosine phosphorylation of STAT92E. S2-NP cells transiently transfected with an expression plasmid for STAT92E-HA were co-cultured with upd-producing cells for 24 h in the presence of either vehicle (DMSO) alone or BOT-4-one; immunoblot analysis was performed with phospho-STAT92E and HA antibodies, with STAT92E-HA as a loading control. (C) BOT-4-one inhibited STAT92E transcriptional activity. Drosophila S2-NP-STAT92E reporter cells were co-cultured with upd-producing cells for 24 h in the presence of BOT-4-one; firefly luciferase activity was normalized to Renilla luciferase activity. Mean of three independent experiments ± SD; *P < 0.001 vs. control.]
The JAK/STAT-mediated signaling cascade plays essential roles in proliferation, differentiation, development, hematopoiesis, and immune responses (Park et al., 1995; Meraz et al., 1996; Darnell, 1997; Neubauer et al., 1998). However, recent studies showed that persistently activated JAK/STAT signaling correlates with tumorigenesis and cancer progression through its intimate connection to growth factor signaling, and it is observed at high frequency in human cancers. Numerous studies have shown that constitutively activated JAK kinases are found in a variety of cancer patients with lymphoblastic leukemia, myeloproliferative diseases, acute megakaryoblastic leukemia, and acute lymphoblastic leukemia (James et al., 2005; Walters et al., 2006; Bercovich et al., 2008; Flex et al., 2008; Mullighan et al., 2009; Oh et al., 2010). In addition, STAT3, and in part STAT5 and STAT6, is also constitutively activated in multiple human cancers as well as in various hematopoietic malignancies (Klampfer, 2006; Yu et al., 2009; Haftchenary et al., 2011). Therefore, regulation of inappropriately activated JAK and/or STAT signaling is a valuable therapeutic target for the treatment of human cancers. Several JAK/STAT inhibitors have been developed and are in clinical trials for cancer treatment (O'Shea et al., 2004; Atallah and Verstovsek, 2009; Fletcher et al., 2009; Haftchenary et al., 2011).
Benzoxathiol derivatives, especially 6-hydroxy-1,3-benzoxathiol-2-one (also called tioxolone), have been used in the local therapy of psoriasis vulgaris and acne, and have also been reported to have anti-bacterial, anti-mycotic, and cytostatic properties (Goeth and Wildfeuer, 1969; Wildfeuer, 1970; Lius and Sennerfeldt, 1979). A recent report showed that benzoxathiol derivatives have anti-inflammatory and anti-tumorigenic effects through inhibition of NF-κB and STAT3 activation (Kim et al., 2008a, 2008c). We herein identify 2-cyclohexylimino-6-methyl-6,7-dihydro-5H-benzo[1,3]oxathiol-4-one (BOT-4-one) as having potent anti-cancer activity via inhibition of JAK/STAT3 signaling in both Drosophila and human cancer cells. BOT-4-one inhibited the proliferation and survival of persistently activated cancer cells through induction of apoptosis by down-regulation of anti-apoptotic gene expression; these genes are known STAT3 downstream target molecules. BOT-4-one predominantly induced cell death in Hodgkin's lymphoma L540 cells, in which JAK3/STAT3 signaling is aberrantly activated.
[Displaced Figure 2 legend: Human cancer cell lines that express constitutively-active STAT3 were incubated with either vehicle (DMSO) alone or BOT-4-one (30 μM) for 24 h; total RNA was extracted and subjected to quantitative real-time PCR. BOT-4-one inhibited STAT3 expression in all cell lines, whereas inhibition of STAT1 and STAT5 expression was cell type-dependent. Mean of three independent experiments ± SD; *P < 0.001 vs. control. (B) Whole cell extracts were prepared from L540 cells after treatment for 24 h with either vehicle (DMSO) alone or BOT-4-one, and immunoblot analysis was performed with antibodies specific for the molecules indicated. BOT-4-one inhibited tyrosine phosphorylation of STATs, and its inhibitory effect on STAT3 and STAT5 was much greater than on STAT1; GAPDH served as a loading control. (C) Cytosolic and nuclear fractions were extracted from L540 cells after treatment for 24 h with either vehicle (DMSO) alone or BOT-4-one, and immunoblot analysis was performed with phospho-STAT3 and STAT3 antibodies. BOT-4-one inhibited STAT3 activation. C, cytosolic; N, nuclear.]
BOT-4-one inhibits STAT92E activation in Drosophila cells
Drosophila cells have a single JAK and a single STAT protein, called Hop and STAT92E respectively, in contrast to the multiple family members of mammalian cells (Hou and Perrimon, 1997). To identify small molecules that are potential inhibitors of JAK/STAT signaling, we performed a cell-based high throughput screen using a Drosophila cell line as previously described (Kim et al., 2008b, 2010a) and identified 2-cyclohexylimino-6-methyl-6,7-dihydro-5H-benzo[1,3]oxathiol-4-one (BOT-4-one; Figure 1A) as a potential inhibitor of STAT92E signaling. Cytokine (upd)-induced STAT92E transcriptional activity was increased more than 21-fold compared to that of vehicle treatment, and BOT-4-one was found to inhibit STAT92E transcriptional activity in a dose-dependent manner (Figure 1B). Cytokine-induced phosphorylation of tyrosine residues is a key step in STAT activation. To determine whether BOT-4-one could affect tyrosine phosphorylation, we examined the tyrosine phosphorylation level of STAT92E following cytokine treatment. Treatment with 30 μM BOT-4-one almost completely suppressed STAT92E phosphorylation (Figure 1C). These results indicate that BOT-4-one is a small molecule inhibitor of STAT92E signaling in Drosophila cells.
BOT-4-one inhibits STAT3 activation in human cancer cell lines
We next examined the effect of BOT-4-one on the expression levels of STATs in various human cancer cell lines. Treatment with 30 μM BOT-4-one reduced STAT3 expression in all cancer cell lines, and the effect was much stronger in L540 cells than in MDA-MB-468 and DU145 cells (Figure 2A). STAT1 and STAT5 expression levels were also decreased in L540 cells by BOT-4-one; however, their expression was not affected in MDA-MB-468 and DU145 cells. We therefore examined the dose effect of BOT-4-one on STAT phosphorylation in L540 cells. BOT-4-one decreased STAT3 and STAT5 phosphorylation significantly more than that of STAT1 (Figure 2B), indicating that BOT-4-one selectively inhibits STAT3 and STAT5 phosphorylation in L540 cells. STAT proteins phosphorylated on tyrosine residues undergo dimerization and translocation to the nucleus, where they initiate transcription; nuclear STAT3 is subsequently dephosphorylated and exported back to the cytosol through the nuclear pore complex by nucleocytoplasmic shuttling (Herrmann et al., 2007). We next examined whether BOT-4-one could reduce the tyrosine phosphorylation status of STAT3. BOT-4-one inhibited phosphorylation of STAT3 in both the cytosolic and nuclear fractions, but STAT3 expression was not altered in either fraction (Figure 2C). In L540 cells, JAK3 is constitutively activated, and JAK family kinases are upstream regulators of STAT activation. These results therefore suggest that BOT-4-one inhibits STAT3 activation, but not STAT3 expression, and that this may occur through inhibition of JAK3 activity in L540 cells.
[Displaced Figure 3 legend (beginning truncated): ... and DU145 (C) cells were incubated with either vehicle (DMSO) alone or BOT-4-one for 24 h. Whole cell extracts were processed for immunoblot analysis using antibodies specific for the molecules indicated. BOT-4-one inhibited constitutively-active STAT3 in all cell lines. Interestingly, BOT-4-one predominantly inhibited JAK3-mediated STAT3 phosphorylation and expression of the STAT3 downstream target SOCS3 rather than JAK2-mediated signaling. However, BOT-4-one inhibited ERK1/2 phosphorylation in a cell type-dependent manner, and had a weak effect on Src family kinases such as Src and Lyn in all cell lines. GAPDH served as a loading control.]
BOT-4-one predominantly inhibits JAK3/STAT3 signaling
In L540 cells, the JAK3/STAT3 pathway is persistently activated, whereas MDA-MB-468 and DU145 cells have persistently activated JAK1/STAT3 and JAK2/STAT3 pathways (Kim et al., 2010b). BOT-4-one decreased STAT3 phosphorylation, and in part STAT5 phosphorylation, more strongly in L540 cells than in MDA-MB-468 and DU145 cells. To assess the specificity of BOT-4-one for JAK3, we examined its effect on the phosphorylation of JAK2, JAK3 and Src family kinases, as well as on ERK signaling. Reduction of JAK3/STAT3 activation by BOT-4-one in L540 cells was stronger than the reduction of JAK2/STAT3 activation in MDA-MB-468 and DU145 cells (Figures 3A-C). In addition, expression of the STAT3 target protein SOCS3 was also inhibited, and the effect paralleled JAK/STAT3 inhibition in these cells. However, phosphorylation of Src family tyrosine kinases such as Lyn and Src was only weakly affected by 30 μM BOT-4-one in all cell lines, and ERK phosphorylation was inhibited only in MDA-MB-468 and DU145 cells, not in L540 cells. These results suggest that BOT-4-one inhibits STAT3 activation through somewhat different pathways in the various cancer cell lines.
BOT-4-one inhibits cancer cell survival
A number of studies have reported that inhibition of STAT3 signaling reduces cancer cell survival (Al Zaid Siddiquee and Turkson, 2008). We next examined whether BOT-4-one reduces cancer cell survival by down-regulation of STAT3 activation. For the assay, L540 or DG-75 cells were treated with either vehicle alone or various concentrations of BOT-4-one. We found that the viability and proliferation of L540 cells were significantly decreased by BOT-4-one in a dose- and time-dependent manner (Figures 4A and C). However, the viability and proliferation of DG-75 cells, in which the STAT3 pathway is not activated (Kim et al., 2008b), were not affected by BOT-4-one. IL-6 activates the JAK/STAT signaling pathway by binding to IL-6R/gp130, and increases cancer cell survival and proliferation. To determine whether BOT-4-one could affect exogenous cytokine-induced cancer cell survival, we cultured L540 cells with IL-6 and measured cell viability. BOT-4-one inhibited IL-6-induced cancer cell survival in a dose-dependent manner, and the effect was slightly weaker than without IL-6 treatment (Supplementary Figure S1). Together, these results suggest that BOT-4-one reduces cancer cell survival by down-regulation of JAK/STAT3 signaling.
BOT-4-one induces apoptosis through down-regulation of anti-apoptotic gene expression
To determine whether the inhibition of cell survival in BOT-4-one-treated L540 cells resulted from the induction of apoptosis, we performed a TUNEL assay. Treatment with BOT-4-one increased TUNEL-positive cells about 6-fold compared to control cells (Figure 5A). Cleavage of poly(ADP-ribose) polymerase (PARP) and caspase-3 are known hallmarks of apoptosis (Decker et al., 2000). BOT-4-one increased the cleaved fragments of both PARP and caspase-3 in a dose-dependent manner (Figure 5B). Inhibition of STAT3 signaling has also been reported to induce apoptosis by down-regulation of anti-apoptotic gene expression (Iwamaru et al., 2007; Al Zaid Siddiquee and Turkson, 2008). To elucidate the molecular mechanism of BOT-4-one-induced apoptosis, we examined the expression levels of anti-apoptotic proteins that are known STAT3 targets. BOT-4-one decreased the expression of anti-apoptotic mRNAs and proteins, including Bcl-2, Bcl-xL, Mcl-1, and survivin, in a dose-dependent manner (Figures 5C-E). These results indicate that BOT-4-one decreases cancer cell survival by inducing apoptosis through down-regulation of anti-apoptotic genes.
Discussion
Although benzoxathiol derivatives have been used in the treatment of psoriasis and acne, and have been reported to have anti-bacterial and cytostatic properties (Goeth and Wildfeuer, 1969; Wildfeuer, 1970; Lius and Sennerfeldt, 1979), the molecular basis of these pharmacological properties has not yet been defined. Psoriasis and acne are common inflammatory skin diseases that involve immune responses. Recent reports showed that the anti-inflammatory effect of benzoxathiol derivatives is due to inhibition of NF-κB activation by targeting IKK, as well as inhibition of STAT1 phosphorylation (Kim et al., 2008a, 2008d; Chung et al., 2009). NF-κB is one of the transcription factors implicated in inflammatory diseases. Therefore, blockade of NF-κB activation by benzoxathiol derivatives was expected to be beneficial in the treatment of psoriasis. JAK/STAT3 signaling is also activated in psoriasis, and inhibition of this signaling may be a therapeutic strategy for the treatment of the disease (Chang et al., 2009; Miyoshi et al., 2011).
Persistent activation of JAK/STAT signaling, especially JAK/STAT3, is observed in various types of human cancers and contributes to tumorigenesis and cancer progression. The activation of STAT3 in cancers is linked to phosphorylation by JAK and Src family kinases (Niu et al., 2002; Klampfer, 2006; Yu et al., 2009; Hazan-Halevy et al., 2010). Accumulated results imply that the development of new drugs that regulate constitutively activated JAK/STAT3 signaling is a valuable therapeutic strategy for cancer treatment. We identified the small molecule BOT-4-one as a potential inhibitor of JAK/STAT signaling using a cell-based high throughput screen in a Drosophila cell line (Figure 1). The fruit fly Drosophila has only one JAK and one STAT (Hou and Perrimon, 1997). Despite the simplicity of the Drosophila JAK/STAT pathway, its mode of action is similar to that of mammals (Bach et al., 2003). Therefore, Drosophila can serve as an excellent model organism for identifying small molecule inhibitors of JAK/STAT signaling (Arbouzova and Zeidler, 2006). BOT-4-one effectively inhibited cytokine-induced STAT92E transcriptional activity and phosphorylation. Our previous results showed that small molecule inhibitors of STAT92E activity identified using the Drosophila model performed comparably in human cell lines (Kim et al., 2008b, 2010a). In fact, the benzoxathiol derivatives were synthesized for the development of anti-cancer drugs targeting the NF-κB signaling pathway, and BOT-4-one has anti-cancer and anti-inflammatory effects through inhibition of these pathways (unpublished data). BOT-4-one decreased the mRNA expression level of STAT3 in different types of human cancer cell lines, but the effect was stronger in the Hodgkin's lymphoma cell line L540 than in the breast cancer cell line MDA-MB-468 and the prostate cancer cell line DU145 (Figure 2A). In addition, the compound inhibited the mRNA expression and phosphorylation of STAT3 and STAT5 more strongly than those of STAT1 in L540 cells (Figure 2). However, the mRNA expression levels of STAT1 and STAT5 in MDA-MB-468 and DU145 cells were not affected by BOT-4-one (Figure 2A). These results reveal that BOT-4-one has a differential effect on the inhibition of STAT activation. As evidence for this hypothesis, BOT-4-one showed differential inhibition of JAK2 and JAK3 phosphorylation, and this effect paralleled the inhibition of STAT3 phosphorylation and of STAT3 target protein SOCS3 expression. Non-receptor Src family kinases and the ERK pathway can also regulate STAT3 phosphorylation (Garcia et al., 2001; Steelman et al., 2004). BOT-4-one strongly inhibited ERK1/2 phosphorylation in MDA-MB-468 and DU145 cells, but not in L540 cells, and showed only a weak effect on the activation of Src family kinases (Figure 3). We previously showed that JAK3 is important for STAT3-mediated signaling in L540 cells, whereas JAK1 and JAK2 are important in MDA-MB-468 and DU145 cells (Kim et al., 2008b, 2010a, 2010b). Together, our results suggest that BOT-4-one has greater selectivity for the regulation of JAK3/STAT3 signaling in L540 cells than for JAK1/STAT3 and JAK2/STAT3 signaling in MDA-MB-468 and DU145 cells. This conclusion was further supported by the reduction of cell survival and the induction of apoptosis through down-regulation of the expression of anti-apoptotic genes such as Bcl-2, Bcl-xL, Mcl-1, and survivin, which are known STAT3 downstream targets (Figures 4 and 5).
In summary, we identified a small molecule inhibitor of JAK/STAT signaling, especially JAK3/STAT3 signaling, using Drosophila and human cancer cell lines. Inhibition of JAK3/STAT3 signaling by BOT-4-one decreased cancer cell survival and induced apoptosis by down-regulation of anti-apoptotic gene expression in L540 cells. Therefore, BOT-4-one can be used as a lead compound to develop a new group of anti-cancer drugs targeting cancer cells harboring aberrant JAK3/STAT3 signaling.
Drosophila cell line, transfection and reporter assay
Maintenance of parental macrophage-like Drosophila Schneider (S2-NP) cells and the reporter assay were conducted as previously described (Kim et al., 2008b, 2010a). Briefly, cells were cultured in Schneider's Drosophila medium containing 10% FBS and antibiotics (Invitrogen, Carlsbad, CA) in an incubator at 25°C. S2-NP-STAT92E cells that stably express both the 10×STAT92E-firefly luciferase reporter gene and the PolIII-Renilla luciferase gene were grown in the same medium supplemented with 500 μg/ml G418. To assay STAT92E transcriptional activity, parental S2-NP cells were transiently transfected with Actin promoter-driven upd using Effectene Transfection Reagent (Qiagen, Valencia, CA) according to the manufacturer's protocol, and the cells were co-cultured with S2-NP-STAT92E cells for 24 h in the presence of BOT-4-one at various concentrations. The reporter activity was quantified by measuring relative luciferase units (RLU), and firefly luciferase activity was normalized to Renilla luciferase activity.
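For illustration, the sketch below shows one way such dual-luciferase readings can be normalised and expressed as fold induction over the vehicle control; all RLU values and conditions are hypothetical placeholders, not the study's data.

```python
"""Minimal sketch, assuming triplicate (firefly, Renilla) RLU readings per condition."""

import statistics

readings = {
    "DMSO":                   [(1200, 9800), (1350, 10100), (1280, 9900)],
    "upd":                    [(26500, 9700), (28900, 10300), (27400, 10000)],
    "upd + BOT-4-one 30 uM":  [(3100, 9500), (2800, 9900), (3300, 10200)],
}

# Normalise firefly to Renilla, then express as fold over the DMSO baseline
norm = {cond: [f / r for f, r in vals] for cond, vals in readings.items()}
baseline = statistics.mean(norm["DMSO"])

for condition, values in norm.items():
    fold = [v / baseline for v in values]
    print(f"{condition:24s} {statistics.mean(fold):5.1f} ± {statistics.stdev(fold):.1f} fold")
```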
Human cancer cell lines
The Hodgkin's lymphoma cell line L540 and the Burkitt's lymphoma cell line DG-75 were purchased from the German Collection of Microorganisms and Cell Cultures (DSMZ, Braunschweig, Germany), and cultured in RPMI 1640 supplemented with 20% FBS and antibiotics. The breast cancer cell line MDA-MB-468 and the prostate cancer cell line DU145 were purchased from the American Type Culture Collection (Manassas, VA), and cultured in DMEM supplemented with 10% FBS and antibiotics. Cells were cultured in a 37°C humidified incubator containing a mixture of 95% air and 5% CO₂. DMEM, RPMI 1640, fetal bovine serum (FBS), and antibiotics (penicillin/streptomycin) were obtained from Invitrogen (Carlsbad, CA).
Cell viability, proliferation and FACS analysis
L540 cells (5 × 10⁴ cells/ml) were treated with either vehicle (DMSO) alone or various concentrations of BOT-4-one in the presence or absence of IL-6 and incubated for the indicated time periods. A trypan blue exclusion assay was performed to count total and viable cells. The apoptosis assay was conducted using the Terminal Transferase dUTP Nick End Labeling (TUNEL) assay system as previously described (Kim et al., 2008b, 2010a). Briefly, L540 cells (1.0 × 10⁶ cells/ml) were treated with either vehicle (DMSO) alone or BOT-4-one (30 μM) for 48 h. Cells were harvested, stained using an APO-BRDU kit (Phoenix Flow Systems, Inc., San Diego, CA), and subsequently subjected to Elite ESP flow cytometry (Coulter Inc., Miami, FL).
RNA isolation and quantitative real-time PCR
Total RNA was isolated from human cancer cell lines treated with either vehicle (DMSO) alone or BOT-4-one for 24 or 48 h. For real-time PCR analysis, cDNA was synthesized from 1 μg of total RNA by reverse transcription using the QuantiTect Reverse Transcription Kit (Qiagen), and real-time PCR was performed using the KAPA SYBR FAST qPCR Kit (KAPA Biosystems, Woburn, MA). Primers were purchased from Qiagen.
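The text does not state how the qPCR data were analysed; purely as an illustration, the sketch below applies the widely used 2^-ΔΔCt method with GAPDH as a hypothetical reference gene and hypothetical Ct values.

```python
"""Minimal sketch, assuming the 2^-ddCt method, GAPDH as reference gene,
and hypothetical Ct values (none of these are stated in the text)."""

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Expression relative to the vehicle (DMSO) control by 2^-ddCt."""
    d_ct_treated = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_treated - d_ct_control)

# Hypothetical Ct values for STAT3 / GAPDH in L540 cells
fold = relative_expression(ct_target=26.8, ct_ref=18.1,
                           ct_target_ctrl=24.2, ct_ref_ctrl=18.0)
print(f"STAT3, BOT-4-one vs DMSO: {fold:.2f}-fold")
```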
Statistical analysis
Data obtained from independent experiments are represented as means ± SD. Statistical analysis was performed using a two-tailed Student's t test. P values were considered to be statistically significant at *P < 0.001 or **P < 0.05.
Supplemental data
Supplemental data include a figure and can be found with this article online at http://e-emm.or.kr/article/article_files/SP-43-5-07.pdf.
"Biology",
"Chemistry"
] |
Activation of Bicyclic Nitro-drugs by a Novel Nitroreductase (NTR2) in Leishmania
Drug discovery pipelines for the “neglected diseases” are now heavily populated with nitroheterocyclic compounds. Recently, the bicyclic nitro-compounds (R)-PA-824, DNDI-VL-2098 and delamanid have been identified as potential candidates for the treatment of visceral leishmaniasis. Using a combination of quantitative proteomics and whole genome sequencing of susceptible and drug-resistant parasites, we identified a putative NAD(P)H oxidase as the activating nitroreductase (NTR2). Whole genome sequencing revealed the deletion of a single cytosine in the gene for NTR2 that is likely to result in the expression of a non-functional truncated protein. Susceptibility of Leishmania was restored by reintroduction of the wild-type gene into the resistant line, which was accompanied by the ability to metabolise these compounds. Overexpression of NTR2 in wild-type parasites rendered cells hyper-sensitive to bicyclic nitro-compounds, but only marginally so to the monocyclic nitro-drugs nifurtimox and fexinidazole sulfone, which are known to be activated by a mitochondrial oxygen-insensitive nitroreductase (NTR1). Conversely, a double knockout NTR2 null cell line was completely resistant to bicyclic nitro-compounds and only marginally resistant to nifurtimox. Sensitivity was fully restored on expression of NTR2 in the null background. Thus, NTR2 is necessary and sufficient for activation of these bicyclic nitro-drugs. Recombinant NTR2 was capable of reducing bicyclic nitro-compounds in the same rank order as drug sensitivity in vitro. These findings may aid the future development of better, novel anti-leishmanial drugs. Moreover, the discovery of anti-leishmanial nitro-drugs with independent modes of activation and independent mechanisms of resistance alleviates many of the concerns over the continued development of these compound series.
Introduction
New, safer and more effective treatments are required for visceral leishmaniasis (VL), a disease endemic in parts of Asia, Africa and South America. VL results from infection with the protozoan parasites Leishmania donovani or L. infantum and is responsible for ~50,000 deaths per annum, with the number of cases estimated between 200,000 and 400,000 [1]. In 95% of cases, death can be prevented by timely and appropriate drug therapy [2]; however, current treatment options are far from ideal [3]. At present, miltefosine and liposomal amphotericin B are considered the front-line therapies and, while both drugs are considerably more effective than previous treatment options, they have their limitations. The principal drawbacks of amphotericin B include high treatment costs, the requirement of a cold chain for distribution and storage, an intravenous route of administration and unresponsiveness in some Sudanese VL patients [4]. Problems associated with miltefosine, the only oral drug, are its teratogenicity and high potential to develop resistance [5]. Thus, there is a pressing need for better, safer, efficacious drugs that are fit-for-purpose in resource-poor settings.
In the search for more effective drugs for VL and other "neglected tropical diseases", researchers have reassessed the therapeutic value of nitroheterocyclic compounds. Previously avoided in drug discovery programs due to potential mutagenicity and carcinogenicity issues, a nitro-drug is now being successfully used as part of a combination therapy for human African trypanosomiasis (HAT). Nifurtimox-eflornithine combination therapy (NECT) consists of oral treatment with the nitrofuran nifurtimox alongside infusions of eflornithine and has resulted in cure rates of around 97% for the Gambian form of the disease [6]. The 2-substituted 5-nitroimidazole fexinidazole is now in clinical trials for use in the treatment of both HAT and VL [7] (www.dndi.org), and has shown potential for the treatment of Chagas disease [8]. In addition, until recently DNDi had a nitroimidazole compound (DNDI-VL-2098) [9] and nitroimidazole back-up compounds at an advanced stage of pre-clinical development for use in the treatment of VL (www.dndi.org). Thus, nitroheterocyclic compounds look set to play an important role in the future treatment of these diseases.
Given the new found prominence of nitroheterocyclic drugs, concerted efforts are now being made to elucidate their mechanisms of action. The mode of action of nifurtimox in the trypanosomatids involves reductive activation via a NADH-dependent, type I bacterial-like nitroreductase (NTR, LinJ.05.0660) resulting in the generation of a cytotoxic, unsaturated open-chain nitrile derivative [10]. NTR has also been implicated in the bio-activation of fexinidazole and its sulfonic metabolite, with overexpression of the leishmanial homolog in L. donovani found to increase sensitivity to fexinidazole sulfone by 15-fold [7]. Indeed, modulation of the NTR levels within the trypanosomatids has been shown to directly affect sensitivity to several nitroheterocyclic compounds in vitro, with reduced enzyme activity leading to drug resistance [11][12][13]. The potential for NTR-related cross-resistance brings into question the rationale of developing further NTR-activated nitro-compounds for the treatment of the trypanosomatid-related diseases. Therefore, it is crucial to determine if any new anti-trypanosomatid nitroaromatics are bio-activated by the NTR at an early stage in development.
Recently, we established that the novel nitroimidazo-oxazine (R)-PA-824 and the nitroimidazo-oxazole delamanid (Deltyba, OPC-67683) have potential as effective anti-leishmanial drugs [14,15]. Delamanid is an approved drug for the treatment of multi-drug resistant tuberculosis and (R)-PA-824 is the opposite enantiomer of (S)-PA-824 (pretomanid) currently in Phase II trials for tuberculosis. The mechanism of action of these bicyclic nitro-compounds does not involve bio-activation via NTR [14,15]. The des-nitro forms of both compounds were inactive against L. donovani, suggesting that the nitro-group plays a key role in the anti-leishmanial activity of this compound series. This raises the possibility that bio-reduction of (R)-PA-824 and delamanid may be mediated by an as yet unidentified nitroreductase within L. donovani. Here, we describe the identification and characterisation of an FMN dependent NADH oxidoreductase (NTR2) in L. donovani which is responsible for the bio-activation of (R)-PA-824, delamanid and other bicyclic nitro-drugs including DNDI-VL-2098. The broad implications of a novel mechanism for the activation of anti-leishmanial nitroheterocyclic compounds are discussed.
(R)-PA-824-resistant Leishmania
To investigate the NTR-independent mechanism of action of (R)-PA-824, leishmania parasites were selected for resistance against this nitroimidazo-oxazine. Starting with a clonal line of drug-susceptible L. donovani, promastigotes (the insect stage of the life-cycle) were cultured in the continuous presence of (R)-PA-824 for a total of 80 days. Starting at 300 nM (3 × EC50, Table 1), three independent cultures were exposed to increasing concentrations of drug until they were routinely growing in 10 μM (R)-PA-824 (Fig 1A). Following drug selection, resistant parasites were cloned by limiting dilution. The susceptibility of each cloned cell line to (R)-PA-824 was determined and compared to that of wild-type parasites (Fig 1B). All three cloned cell lines were found to be completely refractory to (R)-PA-824 at concentrations up to and including 100 μM. Resistance to (R)-PA-824 in clone RES III was found to be stable over 45 passages in culture in the absence of drug. In addition, resistance to (R)-PA-824 was retained by RES III in the intra-macrophage amastigote stage of the parasite (S1 Fig).
At this point clone RES III was chosen for in-depth study. Although RES III was completely resistant to (R)-PA-824, this clone remained sensitive to drugs from other chemical classes: miltefosine and amphotericin B were equally potent against resistant and parental wild-type cells, with EC50 values of 6.1 ± 0.3 and 5.9 ± 0.5 μM for miltefosine and 320 ± 23 and 390 ± 32 nM for amphotericin B against WT and RES III, respectively. However, RES III promastigotes showed marked cross-resistance to a number of structurally-related bicyclic nitroimidazo-oxazine compounds, including delamanid (>3,200-fold) and the preclinical candidate DNDI-VL-2098 (>1,600-fold). In contrast, RES III showed little or no sign of cross-resistance to the monocyclic nitro-drugs fexinidazole sulfone (1.1-fold) and nifurtimox (3.3-fold) (Table 1). Structures of the compounds tested are shown in Fig 2.
Proteomic analysis of (R)-PA-824 resistant promastigotes
Bio-activation of nifurtimox and fexinidazole sulfone is catalysed by an oxygen-insensitive nitroreductase (NTR) in both T. brucei [10] and Leishmania [7], and loss of, or mutations in, this enzyme has been shown to play a key role in drug resistance mechanisms in the trypanosomatids [16]. Therefore, we hypothesised that, should an alternative nitroreductase be involved in the bio-activation of (R)-PA-824 in Leishmania, changes in this enzyme might be evident in parasites resistant to (R)-PA-824. Thus, a comparative proteomic analysis of drug-resistant and WT promastigotes was conducted using stable isotope labelling by amino acids in cell culture (SILAC). WT parasites were grown in modified SDM-79 medium in the presence of normal L-arginine and L-lysine (R0K0) and mixed 1:1 with RES III cells grown for at least 6 cell divisions in the presence of stable isotopes of L-arginine and L-lysine (R6K4) to achieve uniform labelling. Additionally, a label-swap experiment, in which the 'heavy' and 'light' culture conditions were reversed between the two cell lines, was performed. In the combined proteomic dataset, 2119 proteins were identified by at least one uniquely mapped peptide, prior to filtering the datasets and combining the label-swap experiments. This resulted in the identification of 1472 proteins with a quantifiable expression change between the parental and the RES III cell line. Statistical significance was assessed using significance B, leading to the identification of 38 proteins with significantly altered expression levels compared to the wild type (S1 Table). The most striking change was for a hypothetical NADH:FMN-dependent oxidoreductase (Uniprot: E9AGH7; GeneDB: LinJ.12.0730), identified to be ~16-fold less abundant in RES III parasites (Fig 3A).
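As an illustration of how label-swap SILAC ratios can be combined and screened for strongly changed proteins, the simplified sketch below uses a robust z-score as a stand-in for the significance B statistic actually used (computed in MaxQuant/Perseus); protein names other than the two GeneDB identifiers, and all ratios, are hypothetical.

```python
"""Minimal, simplified sketch; not a re-implementation of significance B."""

import numpy as np

# log2(RES III / WT) per protein: forward experiment and label-swap
# (the swap experiment is inverted so both columns share the same orientation)
proteins = {
    "LinJ.12.0730 (NTR2 candidate)": (-4.1, -3.9),
    "LinJ.05.0660 (NTR1)":           (0.1, -0.2),
    "hypothetical protein A":        (0.3, 0.4),
    "hypothetical protein B":        (-0.2, 0.1),
}

names = list(proteins)
log2_ratios = np.array([np.mean(v) for v in proteins.values()])

# Robust z-score against the bulk of unchanged proteins
median = np.median(log2_ratios)
mad = np.median(np.abs(log2_ratios - median)) or 1.0
z = (log2_ratios - median) / (1.4826 * mad)

for name, ratio, zi in zip(names, log2_ratios, z):
    flag = "  <-- candidate" if abs(zi) > 3 else ""
    print(f"{name:32s} log2(RES/WT) = {ratio:+.1f}  z = {zi:+.1f}{flag}")
```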
Role of a putative FMN-dependent NADH oxidase in the bio-activation of (R)-PA-824
To determine if this putative FMN-dependent NADH oxidoreductase had any role in the bio-activation of (R)-PA-824, the open reading frame for LinJ.12.0730 was transfected into WT L. donovani promastigotes. Overexpression of the enzyme was verified by western blotting (S2 Fig). Parasites overexpressing the enzyme were ~40-fold more susceptible to (R)-PA-824 (EC50 = 3.5 nM) than WT promastigotes (EC50 = 140 nM) (Fig 3B). To further verify the role of this hypothetical oxidoreductase in bio-activation, the enzyme was overexpressed in our drug-resistant cell line RES III (Fig 3C). Overexpressing the oxidoreductase in RES III promastigotes fully restored sensitivity to (R)-PA-824 (EC50 = 1.0 nM). These data provide compelling evidence that the hypothetical FMN-dependent NADH oxidoreductase identified in our SILAC studies is involved in the bio-activation of (R)-PA-824 in L. donovani. Henceforth, this enzyme is referred to as NTR2 and the previously identified type I oxygen-insensitive nitroreductase as NTR1 (LinJ.05.0660).
Genomic analysis of NTR2 in (R)-PA-824-resistant parasites
In an attempt to understand the mechanisms involved in the depletion of NTR2 from our drug resistant cell lines and also to identify additional factors that may be involved in resistance, the complete genomes of each independently derived resistant clone (RES I, II and III) were sequenced. Surprisingly, first-pass analysis revealed only 12 single nucleotide polymorphisms (SNPs) resulting in nonsynonymous changes in 3 ORFs in the drug-resistant clones (S2 Table). At this point in our studies, SILAC analysis focused our attention on the role of NTR2 in nitroheterocyclic drug activation and resistance. Complementing this finding, closer examination of NTR2 and its flanking sequences identified the deletion of a single cytosine (genomic position 483544 on chromosome 12 in LdBPK; C457 in the open reading frame) within NTR2 that results in a frame shift and premature termination of NTR2 translation ( Fig 3D). Despite the fact that each clone appeared to be genetically distinct (S2 Table), this deletion was identified in both allelic copies of NTR2 in all 3 independently generated resistant clones. The reason for this unusual finding is not clear. Nonetheless, these data, alongside our failure to detect full length NTR2 in RES III parasites (S2 Fig), confirm that each (R)-PA-824-resistant clone is effectively NTR2 null and further strengthen our hypothesis that NTR2 is principally responsible for (R)-PA-824 bio-activation. A comprehensive analysis of the sequencing data from our (R)-PA-824-resistant parasites will be reported in a subsequent publication.
Can NTR2 activate other nitroheterocyclic drugs?
Having established its role in the bio-activation of (R)-PA-824, we assessed the possible role of NTR2 in the activation of other Leishmania-active nitroheterocyclic compounds (Table 1). The potencies of these compounds were determined against WT promastigotes and WT transgenic parasites overexpressing NTR2, where hypersensitivity in transgenic parasites is indicative of a compound activated via NTR2. As expected, the (S)-enantiomer of PA-824, an anti-tubercular clinical candidate [17], was 27-fold more potent against parasites with elevated levels of NTR2 than against WT. Structurally-related compounds including delamanid, CGI-17341 and DNDI-VL-2098 are also activated by NTR2. NTR2-overexpressing parasites showed a <2-fold increase in susceptibility to fexinidazole sulfone, suggesting that this nitroimidazole, previously shown to be activated by NTR1, is unlikely to be an efficient substrate for NTR2. However, another known substrate of NTR1, nifurtimox, was 15-fold more potent against NTR2-overexpressing parasites, suggesting that this compound may be a substrate of both enzymes in L. donovani.
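Fold shifts of this kind come from comparing fitted EC50 values between the two lines. Purely as an illustration of that step, the sketch below fits a Hill-type dose-response model to simulated data; the simulated points are hypothetical and only loosely anchored to the EC50 values quoted above.

```python
"""Minimal sketch, assuming a simple Hill dose-response model and simulated data."""

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, slope):
    """Fractional growth relative to the untreated control."""
    return 1.0 / (1.0 + (conc / ec50) ** slope)

conc_nM = np.logspace(-1, 4, 10)   # 0.1 nM to 10 uM

def fit_ec50(true_ec50, seed):
    rng = np.random.default_rng(seed)
    response = hill(conc_nM, true_ec50, 1.5) + rng.normal(0, 0.02, conc_nM.size)
    (ec50, slope), _ = curve_fit(hill, conc_nM, response, p0=(10.0, 1.0),
                                 bounds=([0.1, 0.3], [1e5, 5.0]))
    return ec50

ec50_wt = fit_ec50(140.0, 1)   # wild-type, ~140 nM
ec50_oe = fit_ec50(3.5, 2)     # NTR2 overexpressor, ~3.5 nM
print(f"EC50 WT = {ec50_wt:.0f} nM, NTR2-OE = {ec50_oe:.1f} nM, "
      f"fold shift = {ec50_wt / ec50_oe:.0f}x")
```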
Metabolism of (R)-PA-824 and DNDI-VL-2098 in L. donovani
Levels of (R)-PA-824 metabolism in WT and NTR2-overexpressing parasites were monitored by UPLC-MS/MS in cultures of promastigotes treated with 160 nM of drug over a 24-h period.
(R)-PA-824 was essentially stable in culture medium alone, with a t1/2 of >24 h (Fig 4A). The addition of L. donovani promastigotes to culture medium resulted in a marked increase in the rate of disappearance of the drug (t1/2 = 14 h), associated with the appearance of several drug metabolites. Metabolism was further increased in cultures of parasites overexpressing NTR2 (t1/2 = 0.5 h), such that drug levels had dropped below the limit of quantification (0.31 nM) by 4 h. Similar rates of drug metabolism were also observed in cultures incubated with both 15 nM delamanid (EC50 value, Fig 4B) and 20 nM DNDI-VL-2098 (10 × EC50 value, Fig 4C). However, the instability of these compounds in medium alone was higher than that seen with (R)-PA-824. This instability can be explained by the fact that delamanid is known to be primarily metabolised in plasma by albumin [18]. Likewise, DNDI-VL-2098 is reportedly unstable in plasma [9], presumably for the same reason.
Enzymatic analysis of recombinant NTR2
To further study the substrate specificity of NTR2, the recombinant enzyme was expressed and purified to homogeneity in three chromatographic steps, with a yield of 15 mg l⁻¹ of a yellow protein (Fig 5A), indicative of a flavoprotein. Using an established spectrophotometric method, FMN was confirmed as the bound co-factor in NTR2 [19]. Analysis of the recombinant protein by size-exclusion chromatography revealed that NTR2 elutes primarily as a monomer at ~40 kDa (Fig 5B), close to the predicted molecular mass of 39.6 kDa. The mass of the recombinant protein was confirmed as 39.4 kDa by MALDI-TOF MS analysis.
Our preliminary studies indicate that this enzyme is able to utilise either NADH or NADPH as a reductant. The ability of NTR2 to reduce a variety of nitroheterocyclic compounds was then assessed in the presence of 100 μM NADPH (Fig 5C). The highest rates of activity were observed with the bicyclic nitro-compounds structurally related to (R)-PA-824. Further, the highest rates of metabolism by NTR2 broadly correlate with the compounds showing the most potent anti-leishmanial activity against NTR2-overexpressing parasites. The half-life of delamanid was 12 h in medium alone and 1.6 h in WT promastigotes. As the disappearance of delamanid in NTR2-overexpressing parasites followed a double exponential decay, two half-lives were calculated: 0.096 h for k1 and 0.64 h for k2.
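As an illustration of how such half-lives are obtained from depletion time courses, the sketch below fits single- and double-exponential decays and converts the rate constants to half-lives (t1/2 = ln2 / k); the time points and concentrations are hypothetical, chosen only to resemble the delamanid trends described above.

```python
"""Minimal sketch, assuming hypothetical noise-free depletion data."""

import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, c0, k):
    return c0 * np.exp(-k * t)

def double_exp(t, a, k1, b, k2):
    return a * np.exp(-k1 * t) + b * np.exp(-k2 * t)

t_h = np.array([0, 0.5, 1, 2, 4, 8, 24], dtype=float)
conc_wt = 15.0 * np.exp(-np.log(2) / 1.6 * t_h)                           # ~1.6 h t1/2
conc_oe = 10.0 * np.exp(-np.log(2) / 0.1 * t_h) + 5.0 * np.exp(-np.log(2) / 0.6 * t_h)

(c0, k), _ = curve_fit(single_exp, t_h, conc_wt, p0=(15.0, 0.5))
print(f"WT: t1/2 = {np.log(2) / k:.2f} h")

(a, k1, b, k2), _ = curve_fit(double_exp, t_h, conc_oe, p0=(8.0, 5.0, 5.0, 0.5))
print(f"NTR2-OE: t1/2(fast) = {np.log(2) / k1:.2f} h, t1/2(slow) = {np.log(2) / k2:.2f} h")
```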
Quantitation of cellular NTR2 levels
Using an NTR2-specific polyclonal antiserum generated against our purified recombinant protein, we were able to confirm that NTR2 is expressed in all developmental stages of the Leishmania parasite by probing an immunoblot of whole cell lysates (Fig 5D). Single bands of approximately 40 kDa, close to the predicted molecular mass of NTR2 (39.6 kDa), were detected in cell lysates of log phase promastigotes (the dividing insect stage), metacyclic promastigotes (the insect stage infective to mammals) and axenic amastigotes (the intracellular mammalian stage). The cellular concentration of NTR2 in each of these parasite stages was determined by densitometry. NTR2 levels in each developmental stage were found to be remarkably similar, with concentrations of 1.70 μM, 1.73 μM and 1.75 μM in promastigotes, metacyclics and amastigotes, respectively.
Cellular localisation of NTR2
Immunofluorescence studies confirm that L. donovani NTR2 localises to the cytosol of mid-log promastigotes (Fig 6). Staining of promastigotes with an anti-NTR2 polyclonal antibody showed extensive and even staining throughout the cells, except for the nucleus and kinetoplast, demonstrating the cytosolic location of this enzyme (Fig 6C and 6D).
Assessing the essentiality of NTR2
The studies described above show that loss of NTR2 is strongly associated with resistance to (R)-PA-824. However, this does not exclude the possibility that additional genes may also be involved. To address this issue, we investigated the impact of NTR2 loss by gene deletion in a WT genetic background. Thus, NTR2 null parasites were generated by classical gene replacement. Both copies of the NTR2 gene were sequentially replaced with hygromycin and puromycin drug resistance genes. Southern blot analysis of genomic DNA from putative double knockout (DKO) cells confirmed that they were NTR2 null (Fig 7A). Loss of both copies of NTR2 had no obvious effects on the viability of these parasites with DKO promastigotes growing at the same rate in culture as WT and achieving similar cell densities. DKO parasites were found to be completely refractory to (R)-PA-824 at concentrations up to and including 100 μM (Fig 7C), as found in our resistant lines obtained by drug selection. Similar results were found for (S)-PA-824, delamanid and DNDI-VL-2098 (Table 1). Susceptibility to fexinidazole sulfone remained unchanged and susceptibility to nifurtimox decreased marginally in good agreement with RES III. Adding back an exogenous copy of NTR2 to DKO null parasites entirely recovered sensitivity to (R)-PA-824 demonstrating that NTR2 is necessary and sufficient for activation of toxicity with compounds such as (R)-PA-824, delamanid or DNDI-VL-2098.
The impact of NTR2 deletion on drug metabolism was determined by measuring the concentration of (R)-PA-824 in cultures of WT and DKO parasites over a 24-h period. Samples of culture were removed at defined intervals and the supernatants analysed by UPLC-MS/MS, as previously described (Fig 7B). WT parasites metabolised (R)-PA-824 at a similar rate to that seen in our earlier study (t1/2 = 12.5 h). In contrast, rates of metabolism in medium alone and in cultures of DKO promastigotes were negligible over the same 24-h period. The addition of an NTR2 add-back to DKO parasites recovered the ability of these cells to metabolise (R)-PA-824 (t1/2 = 0.5 h). These data confirm that NTR2 alone is necessary and sufficient for metabolic conversion of bicyclic nitro-drugs.
Loss of functional NTR2 did not have a material effect on the ability of DKO or RES metacyclic promastigotes to infect peritoneal macrophages, as determined by comparing the mean numbers of amastigotes per infected macrophage to those seen in WT-infected macrophage cultures 24 h following infection (Fig 7D). However, there did appear to be a moderate but statistically significant effect on the ability of NTR2-deficient parasites to replicate within peritoneal macrophages, with mean numbers of amastigotes per infected macrophage considerably lower in DKO and RES cultures at 72 h. The reduced ability of NTR2 null amastigotes to replicate within macrophages was entirely alleviated by the addition of an NTR2 add-back. Collectively, these data suggest that, while NTR2 is not essential for L. donovani survival, null parasites do appear to suffer a moderate but statistically significant loss of "fitness" in macrophage infections that may have implications for the propagation of NTR2-related drug resistance in the field.
Discussion
Drug discovery pipelines for the "neglected diseases" are now heavily populated with nitroheterocyclic compounds. Following the success of nifurtimox as part of NECT [6] and the rediscovery of fexinidazole [7,8,20], researchers have been quick to recognise and exploit the therapeutic potential of these compound classes. However, the development of multiple compounds with a likely shared mode of action for any disease indication is not without significant risk. An over-emphasis on one compound class can leave drug pipelines vulnerable to multiple compound failures associated with a single resistance mechanism. Specifically, it is well established that parasites resistant to one nitro-drug are often cross-resistant to a second [13]; for example nifurtimox-resistant T. brucei are cross-resistant to fexinidazole and vice versa [11,16]. Here, cross-resistance has been largely explained by a common, nitroreductase-related mechanism of drug activation [13,21]. This has made researchers wary of developing further nitro-compounds for the treatment of the trypanosomatid diseases. In this study we have confirmed that several bicyclic nitro-drugs, either in preclinical development or demonstrating promising anti-leishmanial activity, are not activated via NTR1, known to activate monocyclic nitro-compounds nifurtimox, benznidazole and fexinidazole [7,10,12,22]. Importantly, resistance to bicyclic nitro-compounds in Leishmania promastigotes does not result in striking levels of cross-resistance to NTR1-activated compounds. The discovery of anti-leishmanial nitrodrugs with independent modes of activation and independent mechanisms of resistance alleviates many of the concerns over the continued development of these compound series.
It is worth emphasizing the power of using orthogonal approaches such as pharmacology combined with genomics and proteomics to elucidate mechanisms of drug action. Several lines of evidence presented here establish that the primary enzyme target for metabolic activation of bicyclic nitro-compounds in Leishmania is NTR2, an NAD(P)H-dependent flavoprotein. First, whole genome sequencing and SILAC proteomic analysis confirmed that Leishmania promastigotes, resistant to (R)-PA-824 and cross-resistant to a number of bicyclic nitro-drugs, are effectively NTR2 null. Second, re-introduction of NTR2 into (R)-PA-824-resistant parasites restored drug susceptibility while overexpression of NTR2 resulted in hypersensitivity to bicyclic nitro-drugs. Third, and perhaps the most compelling evidence that NTR2 is primarily responsible for metabolic activation of these compounds, is the complete abrogation of susceptibility in NTR2 null parasites.
Increased metabolism of (R)-PA-824, delamanid and DNDI-VL-2098 in promastigotes overexpressing NTR2 and the absence of metabolism in NTR2 DKO cultures suggests that NTR2 catalyses metabolic conversion of these compounds. In Mycobacterium tuberculosis metabolism of (S)-PA-824 is catalyzed by an unusual deazaflavin-dependent nitroreductase (Ddn) [23][24][25], an enzyme which is absent in Leishmania spp. Incubation of (S)-PA-824 with recombinant M. tuberculosis Ddn leads to the formation of three primary metabolites, the most abundant being (S)-des-nitro-PA-824 [25,26]. Des-nitro-formation in this bacterium leads to the concomitant release of reactive nitrogen species, including nitric oxide. Transcriptional profiling suggests that respiratory poisoning by nitric oxide is likely to be central to the anti-mycobacterial action of (S)-PA-824 under hypoxic conditions [27]. Further studies will be required to elucidate the chemical identity of the metabolite(s) resulting from NTR2 bio-activation of bicyclic nitro-drugs and their role(s) in parasite killing.
The endogenous function of NTR2 in Leishmania remains to be determined. BLAST searches of NTR2 revealed high similarity to prokaryotic alkene reductases of the "old yellow enzyme" family. Members of this family of "ene"-reductases catalyze a diverse range of reactions, usually on substrates with an α,β-unsaturated carbonyl group [28][29][30]. The closest orthologue of NTR2 in Trypanosoma cruzi is the enzyme prostaglandin F2α synthase, also known as old yellow enzyme, which shares 44% identity with L. donovani NTR2. This NAD(P)H-dependent oxidoreductase has been implicated in both the mechanisms of action of, and resistance to, benznidazole and nifurtimox in the American trypanosome [31]. There is no enzyme equivalent to NTR2 in the genome of T. brucei, perhaps explaining the conspicuous lack of potency of bicyclic nitro-drugs against these parasites [15].
Our future studies will involve a comprehensive kinetic and structural characterisation of NTR2 and an elucidation of the active drug metabolites responsible for cell death. Understanding the binding mode of compounds in the active site of NTR2 may facilitate the design of improved bicyclic nitro-drugs for the treatment of VL.
Ethics statement
All animal experiments were approved by the Ethical Review Committee at the University of Dundee and performed under the Animals (Scientific Procedures) Act 1986 (UK Home Office Project Licence PPL 70/8274) in accordance with the European Communities Council Directive (86/609/EEC).
Cell lines and culture conditions
The clonal Leishmania donovani cell line LdBOB (derived from MHOM/SD/62/1S-CL2D, originally isolated from a patient in the Sudan in 1962) [32] was grown as promastigotes at 26°C in modified M199 media supplemented with FCS certified free from mycoplasma (6).
Test compounds
(R)-PA-824 and (S)-PA-824 were synthesized in-house following published procedures [7,15]. Fexinidazole sulfone was prepared either by oxidation of fexinidazole [33], or by modification of a published method [34]. DNDI-VL-2098 was prepared by adapting the reported syntheses of related compounds [35]. Methods for the synthesis of delamanid and (S)-OPC-67683 are described in detail in our recent publication [14]. CGI-17341 was prepared using a modification of a previous method [36]. The purity of all synthesized compounds was determined by liquid chromatography-mass spectrometry and found to be >95%. Where appropriate, the optical rotation of chiral compounds was checked against literature values and in all cases was found to be in good agreement. Full experimental details and analytical data for the synthesis of DNDI-VL-2098 and CGI-17341 are reported in S1 Text.
In vitro drug sensitivity assays
Drug sensitivity assays were carried out in triplicate promastigote cultures exactly as previously described [15]. Data were fitted by non-linear regression to a two-parameter EC 50 equation using GraFit version 5.0.13.
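For readers without access to GraFit, the same kind of two-parameter EC50 fit can be reproduced with any non-linear least-squares routine. The sketch below uses Python/SciPy on invented dose-response values; the study itself used GraFit version 5.0.13, and the exact functional form of its two-parameter equation should be checked against the GraFit documentation.

```python
# Minimal sketch of a two-parameter EC50 fit (hypothetical data; the study used GraFit 5.0.13).
# Assumed model: growth (% of untreated control) = 100 / (1 + (c / EC50)**slope).
import numpy as np
from scipy.optimize import curve_fit

def two_param_ec50(c, ec50, slope):
    """Percent of control growth at drug concentration c (same units as ec50)."""
    return 100.0 / (1.0 + (c / ec50) ** slope)

# Hypothetical triplicate-averaged dose-response data (drug in uM, growth in % of control)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
growth = np.array([98.0, 95.0, 85.0, 60.0, 30.0, 10.0, 3.0])

(ec50, slope), _ = curve_fit(two_param_ec50, conc, growth, p0=[0.3, 1.0])
print(f"EC50 = {ec50:.3f} uM, slope = {slope:.2f}")
```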
Generation of drug-resistant parasites
(R)-PA-824 resistant lines were generated by sub-culturing a freshly cloned line of wild-type L. donovani in the continuous presence of this compound. Starting at a sub-lethal concentration of 300 nM (R)-PA-824, the drug concentrations in 3 independent cultures were increased in a step-wise manner, usually by 2-fold. After a total of 80 days in culture, when promastigotes were able to survive and grow in 10 μM (R)-PA-824, the resulting cell lines were cloned by limiting dilution in the absence of (R)-PA-824. One clone (RES III) was selected for further biological studies.
For SILAC labelling of cultures, LdBOB WT or (R)-PA-824 resistant promastigotes, in the log phase of growth, were washed 3 times with phosphate-buffered saline, and resuspended at 1 × 10⁴ cells ml⁻¹ in either SDM-79 SILAC-L or SDM-79 SILAC-H. Cells were passaged every 2 days and grown for a total of 10 days under labelling conditions. Cells (5 × 10⁷) were harvested by centrifugation (10 min, 4°C, 1600 g) and washed twice in PBS prior to being resuspended in Laemmli buffer (Bio-Rad Laboratories) and heated at 95°C for 10 min. The equivalent of 5 × 10⁶ cells of each cell sample (WT and (R)-PA-824 resistant promastigotes) were pooled together and then subjected to electrophoresis on a 4-12% NuPAGE SDS/PAGE gel. When the sample had entered approximately 2 cm into the gel, electrophoresis was stopped and the gel stained with Instant Blue (Expedeon). Sample lanes were excised and subjected to in-gel digestion for 18 h at 37°C with 12.5 μg ml⁻¹ Trypsin Gold (Promega) in 10 mM NH₄HCO₃ and 10% acetonitrile. Tryptic peptides were recovered in 45% acetonitrile, 1% formic acid and lyophilized prior to analysis.
Mass spectrometry data acquisition and processing-SILAC
Liquid chromatography tandem mass spectrometry was performed by the Proteomic Facility at the University of Dundee. Tryptic peptides were separated on a fully automated Ultimate U3000 Nano RSL Cnano system (Thermo Scientific) fitted with a 0.1 × 2 cm PepMap C18 trap column and a 75 μm × 50 cm reverse phase PepMap C18 nanocolumn (Thermo Scientific). Samples were loaded in 0.1% formic acid (buffer A) and separated using a binary gradient consisting of buffer A and buffer B (80% acetonitrile, 0.08% formic acid). Peptides were eluted with a linear gradient from 2 to 40% buffer B over 124 min. The HPLC system was coupled to an LTQ Orbitrap Velos Pro mass spectrometer (Thermo Scientific) equipped with a Proxeon nanospray ion source. The mass spectrometer was operated in data dependent mode to perform a survey scan over a range 335-1800 m/z in the Orbitrap analyzer (R = 60,000), with each MS scan triggering ten MS2 acquisitions of the ten most intense ions. The Orbitrap mass analyzer was internally calibrated on the fly using the lock mass of polydimethylcyclosiloxane at m/z 445.120025.
Data was processed using MaxQuant version 1.5.0 which incorporates the Andromeda search engine [39,40]. Proteins were identified by searching a protein sequence database containing L. infantum annotated proteins (downloaded from UniProt, http://www.uniprot.org/ proteomes/UP000008153) supplemented with frequently observed contaminants (porcine trypsin, bovine serum albumin and human keratins). Initial MS tolerance was set as 4.5 ppm with the MS/MS tolerance set at 0.5 Da. Cysteine carbamidomethylation was set as a fixed modification with protein N-acetylation and methionine oxidation as variable modifications. Peptides were required to be a minimum of 7 amino acids in length with only the uniquely mapped peptides used in the calculation of SILAC ratios. The minimum H/L ratio count was set to be 1 and peptide and protein false discovery rates of 0.01 were calculated by searching a database of reversed sequences. The SILAC ratios of proteins identified in both label swap experiments were averaged, whereas reported H/L ratios identified only in one experiment were included if the H/L percentage variability was <100% [41]. Differential expression was assessed using a significance B test built into Perseus v.1.5.0 with a Benjamini-Hochberg FDR threshold of 0.01 [40].
Generation of overexpression constructs
The primers used to generate constructs for genetic manipulation and protein expression (S3 Table) were designed using the L. infantum genome sequence (tritrypdb.org). Primers were designed against a putative NADH:flavin oxidoreductase/NADH oxidase (LinJ.12.0730). The accuracy of all assembled constructs was verified by sequencing.
LdNTR2 overexpression vectors were generated by amplifying the gene from genomic DNA using the LdNTR2-BamHI sense and antisense primers for cloning into pIR1-SAT and LdNTR2-SmaI sense and LdNTR2-XbaI antisense for cloning into pX63-3HA. PCR products were then cloned into the pCR-Blunt II-TOPO vector (Invitrogen) and sequenced. The pCR-Blunt II-TOPO-gene constructs were then digested with appropriate restriction enzymes and the fragments cloned into either the pIR1SAT or pX63-3HA expression vectors.
Assembly of knockout constructs
NTR2 gene replacement cassettes were generated by amplifying a region of DNA encompassing the 5´-untranslated region (UTR), open reading frame (ORF) and 3´-UTR of LdBOB NTR2 from genomic DNA with primers 5´UTR-NotI_s and 3´UTR-NotI_as, using Pfu polymerase. This sequence was then used as a template for the amplification of the individual regions used in the assembly of replacement cassettes containing the selectable drug resistance genes puromycin N-acetyl transferase (PAC) and hygromycin phosphotransferase (HYG), exactly as previously described [42].
Generation of LdBOB transgenic cell lines
Mid-log-phase L. donovani promastigotes (LdBOB) were transfected with overexpression constructs using the Human T-Cell Nucleofector kit and the Amaxa Nucleofector electroporator (program V-033). Following transfection, cells were allowed to grow for 16-24 h in modified M199 medium [32] with 10% fetal calf serum prior to appropriate drug selection (nourseothricin 100 μg ml⁻¹, hygromycin 50 μg ml⁻¹, puromycin 20 μg ml⁻¹ and G418 100 μg ml⁻¹). Cloned cell lines were generated by limiting dilution, maintained in selective medium, and removed from drug selection for one passage prior to experiments.
Infectivity assays
In-macrophage infectivity assays were carried out using starch-elicited mouse peritoneal macrophages harvested from BALB/c mice [7] and metacyclic promastigotes, as previously described [12].
Metabolism of (R)-PA-824, delamanid and DNDI-VL-2098 in L. donovani promastigotes
Metabolism studies were performed at 160 nM (R)-PA-824, 15 nM delamanid and 20 nM DNDI-VL-2098 in culture medium alone and in the presence of either wild-type, NTR2 null or NTR2 overexpressing L. donovani promastigotes (1 × 10⁷ parasites ml⁻¹). At 0, 0.5, 1, 2, 4, 6, 8 and 24 h aliquots were removed, precipitated by addition of a 2-fold volume of acetonitrile and centrifuged (1,665 × g, 10 min, RT). The supernatant was diluted with water to maintain a final solvent concentration of 50% and stored at -20°C prior to UPLC-MS/MS analysis, as described below. Data were processed using GraFit (version 7.0.2; Erithacus Software) and fitted to a single exponential decay (with the exception of delamanid incubated with NTR2 OE parasites, which was fitted to a double exponential decay), and the half-life (t1/2) was calculated from the elimination rate constant (k) as t1/2 = ln 2 / k.
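As a worked illustration of that calculation, the sketch below fits a single exponential to a hypothetical (R)-PA-824 depletion time course and converts the fitted rate constant into a half-life. The study itself used GraFit 7.0.2, and the concentrations shown are invented for the example.

```python
# Sketch of a single-exponential fit and half-life calculation (hypothetical time course;
# the study used GraFit 7.0.2 on UPLC-MS/MS-measured concentrations).
import numpy as np
from scipy.optimize import curve_fit

def single_exponential(t, c0, k):
    """Drug concentration remaining at time t (h), given starting concentration c0 and rate constant k (per h)."""
    return c0 * np.exp(-k * t)

t = np.array([0, 0.5, 1, 2, 4, 6, 8, 24.0])                 # sampling times (h), as in the assay
conc = np.array([160, 156, 152, 145, 132, 120, 103, 42.0])  # hypothetical (R)-PA-824 concentrations (nM)

(c0, k), _ = curve_fit(single_exponential, t, conc, p0=[160.0, 0.05])
t_half = np.log(2) / k        # t1/2 = ln 2 / k
print(f"k = {k:.3f} per h, t1/2 = {t_half:.1f} h")
```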
Cloning, expression and purification of recombinant NTR2
LdNTR2 was amplified from genomic DNA using the primers LdNTR2-NdeI_s and LdNTR2-BamHI_s (S3 Table). The resulting PCR product was then cloned into the pCR-Blunt II-TOPO vector (Invitrogen) and sequenced. The pCR-Blunt II-TOPO-gene constructs were then digested with appropriate restriction enzymes and the fragment cloned into the multiple cloning site of the pET-15b-TEV expression vector. The resulting pET15b-NTR2 expression construct was transformed into BL21 (DE3)pLysS competent cells and recombinant expression was carried out. Overnight starter cultures were used to inoculate one litre of LB media supplemented with 50 μg ml⁻¹ ampicillin.
NTR2 enzymatic activity
NTR2 activity was measured by following the change in absorbance at 340 nm due to NADPH oxidation. A reaction mixture (1 ml) containing 50 mM HEPES, pH 7.0 and 100 μM NADPH was incubated at 25°C for 1 min. NTR2 was added to a final concentration of 500 nM and the background rate of NADPH oxidation was measured for 1 min. The reaction was initiated by the addition of 100 μM of nitroheterocyclic test compounds and initial rates of NADPH oxidation in the presence of these compounds were measured. Enzyme activity was calculated using ε = 6220 M⁻¹ cm⁻¹ and reported in μmol min⁻¹ mg⁻¹.
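The conversion from a background-corrected absorbance change to a specific activity follows the Beer-Lambert law; a minimal sketch is given below. The absorbance change and the assumed NTR2 molecular mass are placeholders, not measured values.

```python
# Sketch of converting a background-corrected NADPH oxidation rate (delta A340 per min) into a
# specific activity (umol NADPH min^-1 mg^-1), using epsilon(340 nm) = 6220 M^-1 cm^-1 and a
# 1-cm path. The rate and the enzyme molecular mass below are hypothetical placeholders.
EPSILON = 6220.0        # M^-1 cm^-1, NADPH at 340 nm
PATH_CM = 1.0           # cuvette path length (cm)
VOLUME_L = 1.0e-3       # reaction volume (1 ml)

def specific_activity(delta_a_per_min, enzyme_conc_molar, enzyme_mass_da):
    """Return umol NADPH oxidised per min per mg of enzyme."""
    rate_molar_per_min = delta_a_per_min / (EPSILON * PATH_CM)       # M min^-1
    umol_per_min = rate_molar_per_min * VOLUME_L * 1e6               # umol min^-1 in the cuvette
    mg_enzyme = enzyme_conc_molar * VOLUME_L * enzyme_mass_da * 1e3  # mg enzyme in the cuvette
    return umol_per_min / mg_enzyme

# e.g. delta A340 = 0.05 per min with 500 nM enzyme and an assumed mass of 40 kDa
print(round(specific_activity(0.05, 500e-9, 40_000), 3))   # ~0.402 umol min^-1 mg^-1
```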
Immunofluorescence studies
Mid-log L. donovani promastigotes were washed twice in PBS before being fixed in 2% (w/v) paraformaldehyde in PBS (0.15 M NaCl, 5 mM potassium-phosphate buffer, pH 7.4). Fixed parasites were then treated with Triton X-100 (0.1%) for 10 min, prior to the addition of 0.1M glycine for an additional 10 min. Parasites were then washed with PBS and air-dried onto polylysine coated microscope slides. Slides were then blocked by incubation in 50% (v/v) foetal calf serum (FCS), PBS for 10 min prior to incubation in L. donovani NTR2 antiserum diluted 1:50 in PBS containing 5% FCS for 1 h at room temperature. Following washing in PBS, slides were incubated for a further 1 h in fluorescein isothiocyanate-conjugated goat anti-rabbit secondary antibody diluted 1:200 in PBS. Slides were washed again in PBS before being mounted using the SlowFade Light Antifade Kit with 4,6-diamidino-2-phenylindole (DAPI; Molecular Probes), as instructed by the manufacturers.
Whole genome sequencing and genomic variant analysis
Genomic DNA was prepared from L. donovani (R)-PA-824 resistant clones RES I, II and III and the wild-type parental strain. For each sample, 1.5-2 μg of genomic DNA was used to produce amplification-free Illumina libraries of 400-600 base pairs (bp) length [43]. | 7,917 | 2016-11-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Global climate-change trends detected in indicators of ocean ecology
Strong natural variability has been thought to mask possible climate-change-driven trends in phytoplankton populations from Earth-observing satellites. More than 30 years of continuous data were thought to be needed to detect a trend driven by climate change1. Here we show that climate-change trends emerge more rapidly in ocean colour (remote-sensing reflectance, Rrs), because Rrs is multivariate and some wavebands have low interannual variability. We analyse a 20-year Rrs time series from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite, and find significant trends in Rrs for 56% of the global surface ocean, mainly equatorward of 40°. The climate-change signal in Rrs emerges after 20 years in similar regions covering a similar fraction of the ocean in a state-of-the-art ecosystem model2, which suggests that our observed trends indicate shifts in ocean colour—and, by extension, in surface-ocean ecosystems—that are driven by climate change. On the whole, low-latitude oceans have become greener in the past 20 years.
Climate change is causing alterations in marine ecosystems, and is expected to increasingly cause such changes in the future 3 . Surface-ocean ecosystems cover 70% of Earth's surface and are responsible for approximately half of global primary production 4 . Such communities are known to be changing at specific locations for which long-term data are available 5,6 . Detecting climate-change-driven trends in ocean ecosystems on a global scale, however, is challenging because of the difficulties of making oceanographic measurements at sufficiently large spatial and long temporal scales.
Satellite remote sensing is the only means to obtain time series of marine ecosystems on a global scale, because it is the only way to obtain measurements at the required scales. Ocean-colour satellites, which measure the amount of light radiating from the ocean and atmosphere from Earth's surface, have been collecting global measurements for decades. A great deal of research has focused on detecting long-term trends in ocean-colour data, particularly in chlorophyll a (Chl) and primary productivity over large regions [7][8][9][10][11] . However, several studies 1,2,12 have found that more than 30 years of data are required to detect climate-change-driven trends in satellite-derived Chl (μg l −1 ), the most frequently used product derived from ocean colour, even on regional scales. Chl provides information on the abundance of phytoplankton (the photosynthesizing microscopic organisms in the ocean), and can be estimated from empirically derived ratios and/or differences of ocean-colour R rs (ref. 13). Because no single satellite mission has lasted a sufficient duration, and the intercalibration of merged multi-satellite products for robust, quantitative trend detection is challenging 12,14-17 , it has not so far been possible to determine for a given location whether Chl is changing with climate. Advances in statistical methods have allowed the detection of trends in large-scale regional Chl averages 18 , but it is difficult to distinguish for a given location whether Chl is or is not changing, and to determine whether any trends can be attributed to climate change.
That said, the MODIS sensor aboard the Aqua satellite (hereafter, MODIS-Aqua) has far surpassed its originally planned mission duration of 6 years, having just completed 20 full years collecting high-quality global ocean-colour data. The key variable provided by MODIS-Aqua (and any ocean-colour sensor) is R rs , which is the ratio of water-leaving radiance to downward irradiance incident on the ocean surface. R rs is derived from MODIS-Aqua measurements in several wavebands within the visible spectrum, from 412 nm in the blue part of the spectrum to 678 nm in the red. Similarly to Chl, R rs is an indicator of the state of the surface-ocean microbial ecosystem; R rs is therefore considered an 'essential climate variable' by the Global Climate Observing System. Again similarly to Chl, trends in R rs are not trivial to interpret ecologically or biogeochemically [19][20][21][22][23] (Supplementary Information), but do reflect changes in surface-ocean ecology. There are persistent uncertainties in converting R rs to Chl and other ecosystem properties such as phytoplankton carbon. Nonetheless, as R rs does encode combined information about surface ecosystems and dissolved and particulate organic matter, any trend in R rs reveals notable changes in the components of surface-ocean ecology and biogeochemistry with optical signatures. Furthermore, any change in R rs corresponds to changes in the light environment itself, which will affect phytoplankton and thus ultimately lead to ecosystem changes.
Time-series data are the best way to identify long-term changes in an ecosystem 24 . Ocean-colour sensors are known to perform quite differently to each other-even copies of the same sensor on a different satellite platform 16 . Thus, the 20-year MODIS-Aqua record, as the longest single-sensor time series, constitutes a unique dataset. This dataset presents an opportunity to revisit the possibility of detecting trends in ocean colour from satellite data and attributing them to climate change. The principal reasons one might expect this to be possible are, first, that R rs is multivariate, being measured by MODIS-Aqua at several wavebands, whereas Chl is univariate, meaning that R rs potentially encapsulates a stronger signal than Chl (Extended Data Fig. 1); and, second, that some R rs wavebands exhibit lower interannual variability than Chl (ref. 2), meaning that R rs potentially has lower noise. In a model of complex global ocean ecosystems, climate-change-driven trends in R rs have been shown to indicate changes in phytoplankton community structure and become distinguishable from natural variability more rapidly than trends in Chl (ref. 2). However, these multivariate advantages may not be sufficient to permit the detection of trends because R rs is known to be strongly correlated between different wavebands 25 , reducing the effective dimension of the measurement 26 , and autocorrelation in R rs may persist even at the annual timescale, reducing the effective sample size of a given R rs time series. Solutions to both of these issues are possible, however. Multivariate regression allows the trends (and uncertainties in those trends) in multiple variables to be estimated simultaneously, while accounting for correlations between dependent variables 27 . Methods also exist to account for autocorrelation in regression analysis, such as the Cochrane-Orcutt procedure 28 , which estimates and subtracts the autoregressive component. In essence, then, such a regression maximizes the signal (number of simultaneous variables) used to detect a trend while also minimizing the noise (interannual variability in those variables) and accounting for correlations between variables and years.
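To make the Cochrane-Orcutt step concrete, the sketch below applies one iteration to a single synthetic annual series: fit an ordinary least-squares trend, estimate the lag-1 autocorrelation of the residuals, quasi-difference the series, and re-fit. This is a simplified univariate illustration on invented data; the study applies a multivariate version per 2° grid cell in MATLAB 2021b.

```python
# One Cochrane-Orcutt iteration on a single annual time series (illustrative only; the paper
# applies a multivariate, per-grid-cell version in MATLAB).
import numpy as np

def cochrane_orcutt_step(t, y):
    """Fit y ~ a + b*t, estimate the AR(1) coefficient rho of the residuals, and return the
    quasi-differenced series (t*, y*) together with rho."""
    b, a = np.polyfit(t, y, 1)                       # OLS trend and intercept
    resid = y - (a + b * t)
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # lag-1 autocorrelation of the residuals
    y_star = y[1:] - rho * y[:-1]                    # quasi-differencing removes the AR(1) part
    t_star = t[1:] - rho * t[:-1]
    return t_star, y_star, rho

years = np.arange(20, dtype=float)
rng = np.random.default_rng(0)
series = 0.01 * years + rng.normal(scale=0.05, size=20)   # hypothetical Rrs-like annual anomalies
t_star, y_star, rho = cochrane_orcutt_step(years, series)
trend_corrected = np.polyfit(t_star, y_star, 1)[0]        # trend re-estimated after the correction
print(f"rho = {rho:.2f}, corrected trend = {trend_corrected:.4f} per year")
```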
Observations
To investigate possible trends in ocean colour, we performed such an autocorrelation-corrected multivariate regression on the first 20 years of MODIS-Aqua ocean R rs data, spanning July 2002-June 2022 (Methods). We find significant trends, here defined as a signal-to-noise ratio (SNR) higher than two, in 56% of the ocean, primarily equatorward of 40° ( Fig. 1; SNR > 2 corresponds to a confidence level around 95%). By contrast, only a small fraction of this portion of the ocean has significant trends in Chl (12%, black stippling in Fig. 1), such that even if the black stippled areas in Fig. 1 are excluded, 44% of the total ocean area has a significant trend in the R rs product of ocean colour. These results are insensitive to significance level or spatial resolution (Methods).
We also note that these trends are not associated with changes in sea surface temperature (SST (°C)). When the same analysis is performed for MODIS-Aqua-based SST (Methods), we find significant R rs trends in 58% of the ocean with a significant SST trend. Because 56% would be expected if R rs trends were unrelated to SST trends, this suggests that the detected changes in R rs are not related to changes in SST. Instead, changes in R rs might be due to other drivers, such as changing mixed-layer depth or upper-ocean stratification 29 . These drivers are known to affect plankton community structure and biomass, and are expected to change with climate, but are more difficult to detect trends in over shorter time periods (that is, 20 years) than SST because they are measured less precisely.
We thus find that a vast swathe of the ocean has a significant trend in R rs , when considering many wavebands at the same time. Significant trends tend to occur in low-'noise' (that is, weak interannual variability) subtropical and tropical regions, rather than high-'signal' regions (Extended Data Fig. 2). The likelihood of SNR exceeding 2 and a trend being detectable increases with decreasing noise levels, but does not increase with increasing signal levels. Significant trends are also neither spectrally narrow (that is, linked to any particular waveband) nor spectrally flat (that is, lacking a spectral signature) (Extended Data Figs. 3 and 4).
Model
A key question is whether the identified trends are driven by climate change. To test this, we performed the same analysis on MODIS-like R rs data simulated by a numerical model of a complex global ocean ecosystem and associated biogeochemical cycles 2,30 . The model simulates the changes to the marine ecosystem and optics over the course of the twenty-first century under a scenario of high greenhouse-gas emissions (Methods). By also considering a control simulation (that is, without perturbation from increased emissions), we can attribute changes to climate change. We analysed this model in terms of the time of emergence (ToE (years)) 31 , which quantifies how long it takes for the climate-change-driven trend in a simulation with climate change (that is, a forced simulation) to emerge (with a SNR of 2) from the natural variability in a simulation without climate change (that is, a control simulation), both over the period 2000-2105. For the model R rs , the ToE is 20 years or less in 46% of the ocean, a comparable fraction to the 56% of the ocean for which we find a significant trend in MODIS-Aqua R rs (Fig. 2a,b). The (area-weighted) median ToE across the entire model surface ocean is 22 years. By comparison, the ToE is 20 years or less for less than 10% of the ocean for Chl 2 , underscoring that climate-change-driven trends in R rs can emerge much faster than those for Chl, and on a similar timescale to the observational period investigated here. Given the coarse resolution of the model, it only crudely captures some of the features of the physical circulation in the ocean, such as narrow current systems (for example, the Gulf Stream or equatorial currents). As such, direct comparisons of finer-scale features between model and satellite observations should be done with care. Nonetheless, similar broad regions in both cases are responsible for the significant trends after 20 years, notably the North Atlantic and the subtropical Pacific. Although this is, arguably, the only numerical model suitable for such investigations, which limits the strength of any attribution statement that can be made from it, the consistency in the overall extent and the general location of significant trends in the observations and emerged climate-change-driven trends in the model suggest that the observed trends are indeed driven by climate change. In the model, because changes in community structure emerge much faster than those of Chl or other optically relevant properties, the early emergence of R rs trends is linked to phytoplankton community structure, which influences food webs, biogeochemical cycles and marine biodiversity.
Discussion
Changes to the surface-ocean ecosystem will affect R rs (see idealized examples provided in the Supplementary Information). From these considerations, the changes in R rs and the spatial patterns seen in Extended Data Fig. 3 are complex, likely to be multifaceted and defy simple description. In the broadest terms, increases in R rs are more frequent than decreases, and increasingly so for intermediate wavelengths, suggesting that the ocean is on the whole becoming greener. This greening could result for instance from an increase in detrital particles, which would increase backscattering at all wavelengths and absorption at shorter wavelengths. However, it could also result from other possible ecosystem shifts, such as a simultaneous increase in zooplankton and coloured dissolved material. Nonetheless, and regardless of any comparison with model trends, the observed changes in R rs will necessarily have ecological implications. Irrespective of which optical constituent(s) in the surface ecosystem changed to produce a trend in R rs , any such optical change will alter the light environment. Because light is a key driver of phytoplankton communities, any change in the light environment-whether due to changes in in-water optical constituents or changes in light availability entering the ocean-will lead to a change in the surface-ocean ecosystem.
Altogether, these results suggest that the effects of climate change are already being felt in surface marine microbial ecosystems, but have not yet been detected because previous studies have considered Chl or other univariate approaches. R rs facilitates the early detection of climate-change signals by integrating, and being sensitive to, changes in the properties of surface-ocean ecosystems. R rs , and thus surface-ocean ecology, has changed significantly over a large fraction of the ocean in the past 20 years. The changes in R rs that we have identified have potential implications both for the role of plankton in marine biogeochemical cycles and thus ocean carbon storage, and for plankton consumption by higher trophic levels and thus fisheries. Our findings therefore might be of relevance for ocean conservation and governance. For instance, knowledge of where the surface-ocean microbial ecosystem is changing might be useful for identifying regions of the open ocean in which to establish marine protected areas under the United Nations high seas treaty on the biodiversity of areas beyond national jurisdiction. The identified locations with changes in R rs are consistent with where changes are expected in drivers such as upper-ocean stratification, but might be more easily detectable on the global scale-as we have done here-thanks to the multivariate and low-interannual-variability nature of R rs . This highlights the value of long-term satellite missions like MODIS-Aqua and of space agencies maintaining missions for as long as is feasible. That significant trends occur primarily where interannual variability is low means that a similar signal may be expected to emerge in other portions of the ocean in coming years, although the MODIS-Aqua mission is scheduled to end in the near future. Thus for future work, merged multi-satellite products, as well as work that is currently underway to improve them, are essential. Ongoing work 32 interpreting R rs could shed light on what the trends found here indicate about precisely how surface-ocean ecology is changing 33,34 ; we hope that the results presented here will spur further work to this end. Given the key role of plankton ecosystems in marine food webs, global biogeochemical cycles and carbon cycle-climate feedbacks, detecting change in these ecosystems is of great utility.
Online content
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-023-06321-z.
Methods
We generated a 20-year annual time series of MODIS-Aqua R rs and Chl by extracting the monthly level-3, 4-km R rs and Chl values from July 2002 to June 2022 from https://oceancolor.gsfc.nasa.gov/l3/. We use the first 240 months of the standard monthly 9-km MODIS-Aqua R rs product at 7 ocean wavebands, centred at 412 nm, 443 nm, 488 nm, 531 nm, 547 nm, 667 nm and 678 nm (https://modis.gsfc.nasa.gov/about/specifications.php).
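One way to build such an annual time series from the monthly fields, aggregating each July-June year and coarsening to the 2° grid described in the next paragraph, is sketched below. The input array and its grid are hypothetical; the actual level-3 products are netCDF/HDF files and the published analysis was performed in MATLAB 2021b.

```python
# Sketch of aggregating monthly Rrs fields into July-June annual means and coarsening to 2°
# (hypothetical NumPy input). Axis order here is (year, lat, lon, waveband).
import numpy as np

def to_annual_2deg(monthly, cell_deg=2):
    """monthly: array of shape (n_months, n_lat, n_lon, n_bands) on a regular global grid whose
    rows/columns divide evenly into cell_deg boxes, with the record starting in July."""
    n_months, n_lat, n_lon, n_bands = monthly.shape
    n_years = n_months // 12
    annual = np.nanmean(monthly.reshape(n_years, 12, n_lat, n_lon, n_bands), axis=1)
    lat_cells, lon_cells = 180 // cell_deg, 360 // cell_deg
    coarse = annual.reshape(n_years, lat_cells, n_lat // lat_cells,
                            lon_cells, n_lon // lon_cells, n_bands)
    return np.nanmean(coarse, axis=(2, 4))

# e.g. two hypothetical July-June years on a 1° grid
monthly = np.ones((24, 180, 360, 7), dtype=np.float32)
print(to_annual_2deg(monthly).shape)   # (2, 90, 180, 7)
```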
The 2022 reprocessing of R rs and Chl was used, which reduces atmospheric correction errors and, crucially, minimizes any instrumental drift through updated sensor calibrations. Monthly data were aggregated into years each beginning in July, and data were averaged spatially to 2° resolution, resulting in a 90-by-180-by-20-by-7 array (respectively latitude, longitude, year and waveband), and a 90-by-180-by-20 array for Chl. Years beginning in July were used because the earliest MODIS-Aqua output available is from July 2002, so our dataset represents the first 20 years of MODIS-Aqua data. Regression is performed on annual data because performing a regression on monthly data would provide negligible benefit in terms of distinguishing a multidecadal trend, while coming at the cost of having to estimate additional parameters to represent the seasonal cycle and while imposing additional assumptions about the annual cycle. MODIS-Aqua was selected because it is now a 20-year record, the longest single-satellite R rs product available at present. Merged products were not considered because although they incorporate additional data and reduce the risk of possible sensor degradation issues, there are known issues with satellite intercalibration that are challenging to deal with quantitatively in detecting significant trends over time 12,[14][15][16] . MODIS-Aqua also provides a daytime SST (°C) product, for which we generated a comparable time series (that is, 20 July-June years at 2° spatial resolution). For each 2°-by-2° grid cell, we then performed a multivariate regression of R rs versus time. All analyses were performed in MATLAB 2021b. In essence, we calculate the trend, represented by a vector b, in the seven-dimensional R rs space, while accounting for correlations between years and wavelengths. The uncertainties in the trends are the result of interannual variability, and are represented by a covariance matrix C. The off-diagonal elements of this matrix correspond to the covariance of uncertainties in the trends of different wavelengths, because if two wavelengths are correlated, the uncertainties in their trends will also be correlated. Before performing the regression, the serial autocorrelation in the signal was removed using the Cochrane-Orcutt procedure 28 . This works by iteratively estimating then subtracting the autocorrelated component of a signal until the autocorrelation is not statistically significant. For locations with significant autocorrelation (42% of grid cells), one iteration was applied, and then a second iteration was applied for grid cells whose autocorrelation continued to be significant (8% of grid cells). No more than two iterations were applied to any grid cell because 1% of grid cells had significant autocorrelation at the 5% level after the application of zero-to-two iterations. Our conclusions are not affected by this choice; for instance, applying one iteration to all grid cells equally yielded a negligible difference. The same approach is applied to the Chl time series. We then calculate the SNR in each case according to SNR = |b| / √(b̂ᵀCb̂), where b̂ = b/|b| is the unit vector along the trend, b is the vector of trend estimates for each waveband and C is the variance-covariance matrix of b. In other words, SNR is the magnitude of the multivariate trend vector (see Extended Data Fig. 1), divided by the projection along this vector of the multivariate uncertainty of this multivariate trend.
This is analogous to a z-score, or the number of standard uncertainties away from zero that a slope of a linear regression is in one-dimensional ordinary least squares regression. The only differences here are (i) we remove the autocorrelated component of each signal before performing the regression; and (ii) we have multiple dependent and correlated variables, so our trend is a vector rather than a scalar, and our uncertainty in that vector is a matrix owing to the correlations between the dependent variables, so we need to project that uncertainty matrix along that trend vector to get the ratio of the trend's magnitude to its uncertainty. For Chl, that is, the univariate case, this reduces to SNR = b/√C, where b is the magnitude of the trend and C is the uncertainty of this trend. Note that uncertainty in these trends is effectively entirely due to interannual variability; a 2° × 2° annual measurement represents the aggregation of a vast amount of data, so by the law of large numbers there is negligible uncertainty in the sample average, and therefore trend uncertainty is dominated by interannual variability and the statistical method described above is justified. (For future work on small spatial scales, considering the uncertainty in the average of small numbers of data points might be important for robust uncertainty quantification). When computing fractions of the ocean with a significant trend, we account for the difference in surface area of different grid cells. We use the standard SNR = 2 as a threshold because this corresponds to a significance level of around 95% (strictly, 95.45%). Our conclusions are not sensitive to this choice: for a SNR ≥ 1.645, corresponding to a 90% confidence level, we find significant R rs trends over 63% of the ocean (of which 19% has a Chl trend), whereas for a SNR = 2.576, corresponding to a 99% confidence level, we find significant R rs trends over 47% of the ocean (of which 5% has a Chl trend). Note that our results are also not sensitive to the choice of spatial resolution; if we use a 1° or 4° resolution, we still find a significant R rs (Chl) trend in 56% (12%) of the ocean using a SNR = 2 threshold. (We report all values to two significant digits because the third significant digit is resolution-dependent.) Similarly, our results with respect to SST are not sensitive to choice of SST product; when using the COBE-SST product 35 , we find the same lack of relatedness between SST and R rs trends, with 59% of locations with significant SST trends having significant R rs trends (56% expected if they are perfectly unrelated; cf. 58% with MODIS-Aqua SST).
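A compact sketch of the multivariate SNR calculation is given below, assuming the trend vector b and its variance-covariance matrix C have already been estimated for one grid cell. The values shown are invented, and the published computation was done in MATLAB 2021b.

```python
# Sketch of the multivariate signal-to-noise ratio: the magnitude of the 7-band trend vector b
# divided by the trend uncertainty projected along b. The inputs below are hypothetical outputs
# of an autocorrelation-corrected multivariate regression for one 2° grid cell.
import numpy as np

def multivariate_snr(b, C):
    """b: trend vector (one entry per waveband); C: variance-covariance matrix of the trend estimates."""
    b = np.asarray(b, dtype=float)
    b_hat = b / np.linalg.norm(b)              # unit vector along the trend
    projected_var = b_hat @ C @ b_hat          # variance of the trend along its own direction
    return np.linalg.norm(b) / np.sqrt(projected_var)

b = np.array([1.0e-4, 8.0e-5, 5.0e-5, -2.0e-5, -4.0e-5, 1.0e-5, 5.0e-6])                 # hypothetical trends
C = np.diag(np.array([4.0e-5, 3.0e-5, 2.0e-5, 2.0e-5, 2.5e-5, 1.0e-5, 1.0e-5]) ** 2)     # hypothetical covariance
print(multivariate_snr(b, C) > 2)              # True -> significant at roughly the 95% level

# The univariate (Chl) case reduces to SNR = |b| / sqrt(C):
print(abs(3.0e-3) / np.sqrt(1.0e-6))
```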
For Extended Data Fig. 3 we performed the same procedure as above for each individual MODIS-Aqua waveband of R rs . Extended Data Fig. 4 is identical to Extended Data Fig. 3 but with locations where SNR < 2 for all wavebands removed, to show that individual wavebands have significant trends in small and overlapping regions, underscoring that the detected trends are due to the multivariate nature of R rs and not associated with any individual waveband. We also performed this analysis for SST to compute the overlap between significant trends in R rs and SST as described in the main text.
The biogeochemical model is the same as that used in a previous study 2 . Model output was taken from https://doi.org/10.7910/ DVN/08OJUV. This is a complex ocean ecosystem and biogeochemistry model, resolving the major elemental cycles and eight types of phytoplankton. The ecosystem and biogeochemical cycles are forced with output from an earth system model of intermediate complexity 36 . From an 1860 spin-up, two simulations are performed: one is a control simulation run with constant 1860 concentrations of greenhouse gases, and a second is run with a high-emissions scenario with increasing concentrations of greenhouse gases (similar to Representative Concentration Pathway 8.5). Thus, the differences between the simulations indicate anthropogenically driven climate change. Each simulation is run for 250 years, nominally 1860 to 2110, and the analysis described here was performed on the last 106 years (that is, nominally from 2000 to 2105). The model resolves radiative transfer as described previously 30 to generate R rs at 25-nm resolution from 400-700 nm. We refer to previous work 2, 30 and references therein for further details and model validation. We linearly interpolate model R rs to the MODIS-Aqua spectral waveband peaks (412, 443, 469, 488, 531, 547, 555, 645, 667 and 678 nm).
Linearly interpolating the spectra to 1-nm resolution and convolving with the MODIS-Aqua spectral response functions did not affect the result. The model's spatial resolution is 2° by 2.5° with 22 vertical layers. The ocean physics shows a realistic year-to-year variability in surface temperature and produces interannual variability (for example,the El Niño-Southern Oscillation) with frequency, seasonality, magnitude and patterns in general agreement with observations. Because of the high computational demand of this model, we use a single climate simulation from an ensemble of perturbed physics, perturbed initial conditions and varied emissions scenarios, with a medium effective climate sensitivity of approximately 3.0 °C (ref. 36). The control simulation showed that there were no significant drifts in the ecological or optical properties discussed here.
Using this model, we perform the same multivariate regression as above. Note that we perform this regression on the full model time series, rather than the first 20 years, because the utility of the model for our study is to test whether it is possible for climate-change-driven R rs trends to emerge from interannual variability faster than Chl trends, and over a similar timescale to the period for which we have observations. We then calculate, following previous work 2 , the ToE for each grid cell according to ToE = 2 × (standard deviation)/(trend), where the standard deviation is that of the annual means at any grid location in the control run and the trend is that of the full forced simulation. Calculating and removing any drift in the control simulation negligibly affected this calculation.
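The per-grid-cell time-of-emergence calculation reduces to a one-line array operation once the control-run variability and the forced-run trend are in hand; a sketch with invented arrays on a roughly 2° by 2.5° model grid follows.

```python
# Sketch of ToE = 2 * (control-run interannual standard deviation) / (forced-run trend),
# evaluated per grid cell. Both arrays below are hypothetical stand-ins for model output.
import numpy as np

rng = np.random.default_rng(1)
control_annual = rng.normal(size=(106, 90, 144))     # control-run annual means (year, lat, lon)
forced_trend = 0.08 + 0.04 * rng.random((90, 144))   # forced-run trends (units per year)

toe_years = 2.0 * control_annual.std(axis=0) / np.abs(forced_trend)
print(np.median(toe_years))                          # note: area weighting is omitted in this sketch
```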
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Code availability
Code (in MATLAB 2021b) is available at https://doi.org/10.5281/zenodo.4441150.
(Stray caption text, apparently from Extended Data Fig. 1: the dotted arrow indicates the correlation (ρ) between the uncertainties of the estimates in each variable, which determines the angle of the ellipse; in that illustration the estimated trends in β1 and β2 are not individually significant, but the estimated trend in β is, because the orange ellipse does not contain the origin while the purple and teal error bars cross the x-axis and the y-axis, respectively.)
Study description
We analyzed the MODIS-Aqua satellite's remote sensing reflectance data for 20-year trends. We found significant trends over much of the ocean. The location and extent of these trends correspond closely with the forced trends in the first 20 years of a simulation with a complex ecosystem model, indicating that these trends may be due to climate change.
Research sample
The data used here are remote sensing reflectance from NASA's MODIS-Aqua satellite, over the first 20 full years of its mission. These are chosen because this is the longest ocean color satellite mission and therefore most suitable for our research question investigating climatic trends. | 6,293 | 2023-07-12T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Impairment of NKG2D-Mediated Tumor Immunity by TGF-β
Transforming growth factor-β (TGF-β) suppresses innate and adaptive immune responses via multiple mechanisms. TGF-β also importantly contributes to the formation of an immunosuppressive tumor microenvironment thereby promoting tumor growth. Amongst others, TGF-β impairs tumor recognition by cytotoxic lymphocytes via NKG2D. NKG2D is a homodimeric C-type lectin-like receptor expressed on virtually all human NK cells and cytotoxic T cells, and stimulates their effector functions upon engagement by NKG2D ligands (NKG2DL). While NKG2DL are mostly absent from healthy cells, their expression is induced by cellular stress and malignant transformation, and, accordingly, frequently detected on various tumor cells. Hence, the NKG2D axis is thought to play a decisive role in cancer immunosurveillance and, obviously, often is compromised in clinically apparent tumors. There is mounting evidence that TGF-β, produced by tumor cells and immune cells in the tumor microenvironment, plays a key role in blunting the NKG2D-mediated tumor surveillance. Here, we review the current knowledge on the impairment of NKG2D-mediated cancer immunity through TGF-β and discuss therapeutic approaches aiming at counteracting this major immune escape pathway. By reducing tumor-associated expression of NKG2DL and blinding cytotoxic lymphocytes through down-regulation of NKG2D, TGF-β is acting upon both sides of the NKG2D axis severely compromising NKG2D-mediated tumor rejection. Consequently, novel therapies targeting the TGF-β pathway are expected to reinvigorate NKG2D-mediated tumor elimination and thereby to improve the survival of cancer patients.
INTRODUCTION
Transforming growth factor-β (TGF-β) is a potent suppressor of immune responses affecting many subsets of immune cells in various ways (1). For example, TGF-β impairs MHC class II expression (2,3), thus potentially impairing priming of CD4 T cells, and suppresses the activity of cytotoxic lymphocytes by inhibiting the differentiation, proliferation, and effector functions of CD8 T cells and NK cells (1,4). TGF-β also promotes the differentiation of suppressive immune cells subsets (5)(6)(7). In physiological settings, the TGF-β-mediated immune suppression is crucial for the establishment of immune tolerance and prevention of chronic inflammation, e.g., in the gastrointestinal tract (4,8,9), but in malignant disease TGF-β promotes immune escape, tumor progression and metastasis (4,(10)(11)(12)(13). Importantly, there is emerging evidence that TGF-β also impairs immunorecognition of tumor cells by NK cells and cytotoxic T cells through down-regulation of activating immunoreceptors such as NKG2D. NKG2D ligation by stress-induced MHC class I-like glycoproteins on tumor cells transmits a potent stimulatory signal into cytotoxic lymphocytes and therefore promotes immunosurveillance of malignant cells (14,15). Hence, evasion from NKG2D-mediated recognition is thought to represent a major mechanism allowing tumors to escape from tumor immunity. In this review, we specifically focus on the TGF-β-mediated impairment of immunorecognition through the NKG2D axis and its implications for tumor immunity and cancer therapies. The function and biology of TGF-β as well as of NKG2D will be summarized only briefly as both have extensively been reviewed elsewhere (4, 10, 16-18).
TGF-β: EXPRESSION, RECEPTORS, AND SIGNALING
The three members of the human TGF-β family, TGF-β1, -2, and -3, are synthesized as precursor proteins containing an N-terminal latency-associated peptide (LAP) (∼280-300 amino acids) followed by a shorter C-terminal polypeptide (112-114 amino acids) which represents the biologically active mature cytokine (19,20). During the intracellular processing of this precursor protein, LAP is cleaved but remains associated with the TGF-β dimer forming an inactive latency complex that is sequestered into the extracellular matrix. Activation of TGF-β requires release from this latency complex (21). In addition, TGF-β can be present on the surface of regulatory T cells (Tregs), endothelial cells, platelets, macrophages and microglia in a membrane-associated form (22, 23). Mature TGF-β homodimers bind, with or without the assistance of the accessory receptor betaglycan (BG, also called TGF-β receptor III), first to homodimers of the TGF-β receptor II (TGF-βRII) which then phosphorylate TGF-β receptor I homodimers (TGF-βRI, ALK5) under formation of a hexameric complex of TGF-β, TGF-βRII, and TGF-βRI homodimers. Subsequently, TGF-βRI phosphorylates the cytoplasmic SMAD2 and SMAD3 proteins, which then, under association with SMAD4, transmigrate into the nucleus and exert transcriptional activity (4,16). TGF-β receptors are expressed on virtually all immune cells. Of note, TGF-βRII expression was shown to decline in the course of mouse NK cell maturation (24).
TGF-β-MEDIATED IMMUNOSUPPRESSION
TGF-β1 is the predominant TGF-β family member expressed by immune cells and suppresses innate and adaptive immune responses at multiple levels (4,25). Amongst others, TGF-β has a prominent role in dampening T and NK cell responses: TGFβ impairs T cell proliferation and effector functions through down-regulation of IL-2 during T cell priming (26) and has been shown to induce cell cycle arrest and apoptosis of T cells (27-29). TGF-β directly inhibits the cytotoxic functions of CD8 T cells (30) and the differentiation of both Th1 and Th2 subsets by downregulation of their key transcription factors (31-35). Further, TGF-β downregulates the expression of MHC class II molecules via affecting CIITA expression (2, 3) thus impairing the capacity of antigen presenting cells (APC) for antigen presentation and CD4 T cell priming. TGF-β also inhibits the expansion, cytotoxicity, and cytokine production by NK cells (36-39). More recently, TGF-β was shown to block the IL-15-induced metabolic activity and proliferation of NK cells by inhibiting mTOR activity (24). In addition, TGF-β promotes conversion of NK cells into non-cytotoxic ILC1 in the tumor microenvironment (TME) thereby blunting tumor killing (40). TGF-β further promotes differentiation of Tregs (5, 6) and of myeloid derived suppressor cells (MDSC) (7). An eminent importance of TGF-β in affecting cancer immunosurveillance and efficacy of checkpoint blockade cancer therapy was recently highlighted by a series of studies on human cancer patients and of mouse tumor models: TGF-β produced by the TME was shown to restrict tumor infiltration by T cells and other cytotoxic lymphocytes and to block the acquisition of a Th1 effector phenotype (41-43). Inhibition of TGF-β activity not only facilitated T cell infiltration into central sites of the tumor, but also unleashed vigorous and efficient anti-tumor immunity, particularly in the course of checkpoint blockade (41-43). On the other hand, immunosuppression by TGF-β plays a central physiologic role in the establishment of immune tolerance and control of inflammation. Germline deletion of TGF-β1 in mice is lethal due to multi-organ inflammation (8,9). Loss of TGF-β signaling, particularly in T cells, is associated with uncontrolled adaptive T cell responses and severe inflammatory disease (4,(44)(45)(46). In the persistent presence of antigen stimuli, e.g., in the gastrointestinal tract, TGF-β aids in suppression of immune responses in order to prevent chronic inflammation (4).
PLEOTROPIC ROLE OF TGF-β IN THE DEVELOPMENT AND PROGRESSION OF TUMORS
Loss of function mutations in the TGF-β receptors or in SMAD proteins are found in many tumors indicating a function as a tumor suppressor (4). TGF-β inhibits cell growth (4,(47)(48)(49)(50), blocks the transition of pre-malignant cells to a more evasive phenotype and induces their apoptosis (51,52). In contrast, there is also broad evidence suggesting that TGF-β supports tumorigenesis and invasiveness, and enables tumor growth by establishing an immunosuppressive and T cell excluding TME. For example, elevated TGF-β levels in the TME impair anti-tumor T cell responses (11-13, 53) with restricting T cell infiltration into the tumors as shown for mouse models of metastatic colorectal, urothelial and epithelial ovarian cancers (41-43). TGF-β is thought to function as a tumor suppressor at the early stages of tumor development, but with the progression of disease, cancer cells may decouple growth-inhibitory paracrine TGF-β signals by obstructing their TGF-β receptor signaling pathway and rather exploit the immunosilencing capacity of TGF-β to facilitate immune evasion and metastatic dissemination (4, 16).
NKG2D-NKG2DL AXIS
NKG2D is a type II transmembrane glycoprotein comprising an extracellular C-type lectin-like domain, a transmembrane domain, and a short cytoplasmic portion without signaling motifs (54,55). NKG2D glycoproteins form disulfide-linked homodimers with both monomers building a single ligand binding site (56). In humans, NKG2D homodimers associate with two pairs of DAP10 adaptor proteins through interaction of charged residues in the respective transmembrane domains. Formation of this hexameric complex is required for cell surface expression of NKG2D and signal transduction (55,57). NKG2D is found on virtually all human NK cells and CD8 T cells, on most γδ T cells and iNKT cells, as well as on a few CD4 T cells (54,58). Ligation of NKG2D activates cytotoxicity and cytokine production of NK cells and provides stimulatory signals for effector CD8 T cells (54,(59)(60)(61)(62)(63). NKG2D expression is enhanced through cytokines promoting NK and T cell survival and expansion such as IL-15 and IL-2 (62)(63)(64).
NKG2D ligands (NKG2DL) are stress-inducible membranebound proteins distantly related to MHC class I molecules. In human, there are two families of NKG2DL, the MIC family consisting of MICA and MICB, and the ULBP family consisting of ULBP1 through ULBP6 (14, 65,66). All NKG2DL contain an ectodomain with an MHC class I-like α1α2-fold (14, 56, 67), but unlike MHC molecules NKG2DLs neither associate with β2microglobulin, nor present antigenic peptides (54,61). MICs contain an additional Ig-like α3 domain in their extracellular part that is absent from ULBPs (14, 56). Most MICs are singlepass transmembrane proteins, although there are also reports for GPI-anchored MICA variants (68,69). ULBP1 through ULBP3 and ULBP6 are GPI-anchored, whereas ULBP4 and ULBP5 are inserted into the membrane with a single transmembrane domain (67,70,71).
While NKG2DLs are typically absent from the cell surface of healthy cells, NKG2DL transcripts are found in almost all human tissues (72), indicating a dominant control of NKG2DL expression at the post-transcriptional level. NKG2DL surface on activated hematopoietic cells, which may contribute to an NKG2D-mediated regulation of immune responses and may dampen T cell responses (73,74), e.g., during the resolution of an infection (75,76). NKG2DL are also found on many human tumor cell lines and primary human tumors (77), and are up-regulated during viral infections, particularly during infections with viruses of the herpesvirus family (78,79). Such NKG2DL expression marks infected or malignant cells as "dangerous" for the immune system and facilitates their clearance through cytotoxic lymphocytes. NKG2DL expression on tumor cells enhances their susceptibility to NK cell killing (54,80), protects against tumor initiation (81), and promotes tumor rejection and/or reduces tumor progression (82-85). Tumors utilize a variety of mechanisms to escape from NKG2D-mediated immunosurveillance: these mechanisms include the release of soluble NKG2DLs (sNKG2DL) either by proteolytic cleavage (71,86-88) or by exosomal release of membrane-bound NKG2DLs (89,90). Release of sNKG2DL reduces the density of NKG2DL on malignant cells and thereby impairs NKG2D-mediated recognition and elimination of tumor cells by cytotoxic lymphocytes (82-85). While some studies also report down-modulation of surface NKG2D on cytotoxic lymphocytes through sNKG2DL-mediated internalization (91,92), other studies were unable to confirm these findings or attributed NKG2D down-modulation instead to TGF-β (82,93,94). Possibly, potent NKG2D down-modulation by TGF-β (see below) in serum samples of cancer patients containing both TGF-β and sNKG2DL may have led to some erroneous conclusions regarding sNKG2DL-mediated NKG2D down-modulation in previous studies (15,92-94). Also, sera of tumor-free MICA-transgenic mice containing very high levels of sMICA did not affect NKG2D surface levels on splenic mouse NK cells (82). However, persistent exposure of NKG2D to membrane-bound MICA down-regulated surface NKG2D and reduced NK cell cytotoxicity in these MICA-transgenic mice as well as in other transgenic mouse models overexpressing NKG2DL (82,95,96). Hence, strong overexpression of NKG2DL may represent a strategy of tumor cells to blunt NKG2D-mediated immunosurveillance. In contrast to proteolytically shed monomeric sNKG2DL (i.e., most MICA variants, MICB, and ULBP2), exosomally released NKG2DL such as the prevalent MICA*08, ULBP1 or ULBP3 may downmodulate surface NKG2D through multivalency-based crosslinking (89,90). Further escape mechanisms from NKG2D-mediated cancer immunosurveillance include down-regulation of NKG2DL through miRNAs (97,98), epigenetic changes or transcriptional repression (99,100), and TGF-β-mediated signaling as outlined below. Intraindividual heterogeneity of malignant cells can also impair NKG2D-mediated tumor clearance: a recent study by Paczulla et al. showed that malignant cells of human acute myeloid leukemia (AML) patients are heterogeneous for NKG2DL expression, with leukemic stem cells (LSC) being devoid of NKG2DL and therefore resistant to NK cell-mediated elimination (100). Poly-ADP-ribose polymerase 1 (PARP1) was shown to repress transcription of NKG2DL in LSC thereby enabling their escape from NKG2D-mediated immunosurveillance (100).
TGF-β IMPAIRS NK AND T CELL FUNCTION THROUGH INTERFERENCE WITH THE NKG2D AXIS
Soon after cloning of the TGF-β1 cDNA (101), TGF-β1 was shown to inhibit both the proliferation of T cells (102) and the anti-tumor cytotoxicity of NK cells (36). While it was demonstrated that TGF-β impairs effector functions of NK cells against target cells, the underlying mechanisms remained elusive until it was reported by Moretta and colleagues that TGF-β down-regulates the surface expression of the activating NK receptors NKG2D and NKp30, thereby impairing NK cytolysis of tumor cell lines in vitro (103) (Figure 1). Obviously, this effect depends on the extent of expression of NKG2DL and ligands of NKp30 by the respective tumor cells. Subsequent studies confirmed and extended these observations (104, 105): TGF-β inhibits NKG2D-mediated lysis of target cells without altering the expression of perforin or Fas ligand, and without affecting NK cell viability, indicating that down-regulation of NKG2D is a major effect of TGF-β on NK cytolysis of tumor cells (105). A study on glioblastoma reported TGF-β-induced reduction of NKG2D expression not only on NK cells but also on cytotoxic T lymphocytes (CTL). Decreased NKG2D expression resulted in decreased cytolysis of NKG2DL-positive targets by NK cells and a reduced NKG2D-mediated co-stimulation of CD8 T cells (104). The elevated TGF-β levels in sera of patients with lung and colorectal cancers were shown to down-regulate NKG2D on NK cells. Other studies linked increased tumor-associated TGF-β levels with the impairment of the function of NK cells and CTLs, and NKG2D down-regulation, in various malignancies including Hodgkin lymphoma (106), gastric cancer (107) and head and neck squamous cell carcinoma (108, 109). Hence, impaired NKG2D expression may serve as a biomarker for TGF-β-compromised cytotoxic lymphocytes. TGF-β-mediated down-regulation of NKG2D and associated impaired NK cell functions were also reported in the context of infections with hepatitis B and C viruses (110, 111).
FIGURE 1 | TGF-β-mediated escape from NKG2D-mediated tumor immunorecognition by cytotoxic lymphocytes. NKG2D down-regulation on cytotoxic lymphocytes impairs their immunosurveillance of NKG2DL-expressing malignant cells and subsequent tumor elimination. Tumor cells release both soluble TGF-β and TGF-β-containing exosomes locally and systemically, acting on NK cells and cytotoxic T lymphocytes (CTL) and thereby inducing down-regulation of NKG2D. In addition, tumor-derived exosomes may contain NKG2DLs and miRNA with the capacity to down-regulate NKG2D surface expression. TGF-β also acts on tumor cells in an autocrine or paracrine manner, thereby reducing NKG2DL expression and further subverting cancer immunosurveillance by the NKG2D-NKG2DL axis. Other major sources of TGF-β are platelets as well as regulatory T cells (Tregs) and myeloid-derived suppressor cells (MDSCs), which also present membrane-bound TGF-β.
Elevated TGF-β levels as detected in glioblastoma patients were also shown to affect the expression of NKG2DLs (104, 112): experimentally reduced TGF-β expression by glioma cells led to an increase of MICA, ULBP2, and ULBP4 transcripts, increased cell surface expression of MICA and ULBP2, as well as a reduction of tumorigenicity in vivo (104, 112). Thus, tumor-derived TGF-β can act in a paracrine fashion to decrease NKG2D expression on cytotoxic lymphocytes in the TME and in an autocrine manner to diminish tumor-associated NKG2DL expression, thereby impairing the innate recognition and clearance of tumors (104). Hence, TGF-β-mediated repression of NKG2DL expression together with proteolytic shedding of NKG2DL has been suggested to facilitate the immune escape of glioma in the immune-privileged brain (112). However, there are also some reports that TGF-β treatment increases surface levels of NKG2DLs (113). The induction of cell surface expression of MICA and MICB upon culture with TGF-β was described for several human cell lines and appears to be at least partially dependent on mTOR signaling. In the case of HaCaT cells, the increase in NKG2DL was associated with the TGF-β-induced epithelial-to-mesenchymal transition (113). These reports indicate that the regulation of NKG2DL expression by TGF-β may depend on the cell type and the context of the microenvironment.
ROLE OF MEMBRANE-BOUND AND EXOSOMALLY SECRETED TGF-β
TGF-β can be presented as a membrane-bound form on the surface of several cell types (22, 23), and there is evidence that membrane-bound TGF-β can also regulate NKG2D expression.
Surface-bound TGF-β presented by Tregs was found to decrease NKG2D expression on NK cells, and this correlated with the inhibition of NK cell cytotoxicity (114). Adoptive transfer of Tregs into Treg-deficient mice resulted in decreased NKG2D expression and NK cell cytotoxicity in vivo and reduced the anti-tumor effector functions of NK cells in an NKG2D-sensitive tumor model in a TGF-β-dependent manner (114). Other reports confirmed that TGF-β produced by Tregs impairs NKG2D-mediated NK cell killing of target cells in vitro (115). Decreased NKG2D expression was also found on NK cells in murine models of liver and lung cancer and correlated with the frequency of MDSC. MDSC isolated from tumor-bearing mice were able to impair NK cell functions and NKG2D expression on NK cells in vitro and after adoptive transfer into healthy mice, and depletion of MDSC from tumor-bearing mice restored the functionality and NKG2D expression of NK cells and delayed tumor progression in vivo (116). The observed effects were also mediated through membrane-bound TGF-β presented by MDSC, while NK cells deficient in TGF-β signaling were resistant to the MDSC-mediated effects (116). Exosomal secretion of NKG2DL can impair NKG2D expression on cytotoxic lymphocytes, thus desensitizing them for NKG2DL-mediated tumor recognition (89). Exosomes derived from a panel of tumor cell lines and from patients with malignant pleural mesothelioma were also shown to carry TGF-β and to down-regulate NKG2D on the surface of CTLs and NK cells. Neutralizing TGF-β or MICA on exosomes indicated that TGF-β, and not MICA, is the main factor driving the observed NKG2D down-regulation (94). Microvesicles derived from sera of AML patients were also shown to contain high levels of TGF-β and to decrease NKG2D expression as well as NK cell cytotoxicity in a TGF-β-dependent manner (117).
TGF-β IN THE PLATELET-NK CELL CROSS-TALK
Mouse models suggest that metastasis formation is dependent on the tumor-protective function of platelets, but the cross-talk between tumor-coating platelets and NK cells in the blood is not yet fully understood (118, 119). Platelet-derived TGF-β may promote the immune escape of circulating disseminated tumor cells, as activated platelets release factors reducing the activation and IFN-γ production of NK cells and the expression of a set of activating NK cell receptors including NKG2D. This effect is at least partially mediated by platelet-derived TGF-β (120). Platelet-derived TGF-β was shown to induce an invasive phenotype of tumor cells promoting metastasis in mouse models of colon and breast carcinoma. Abrogation of either TGF-β signaling in tumor cells or TGF-β expression by platelets suppressed metastasis formation and epithelial-mesenchymal transition (121). Accordingly, it was proposed that platelet-derived TGF-β in the circulation provides a "pulse" to tumor cells enabling them to acquire a more invasive mesenchymal-like phenotype (121). Platelets were also shown to secrete TGF-β-rich exosomes upon storage, e.g., before transfusions, that induce down-regulation of NKG2D, NKp30, and DNAM-1 and modulate NK cell functions (122).
MECHANISMS OF TGF-β-MEDIATED DOWN-REGULATION OF NKG2D AND NKG2DL
The molecular mechanisms underlying the TGF-β-mediated down-modulation of NKG2D surface expression are not yet fully elucidated. Several studies reported that TGF-β treatment results only in a moderate reduction of NKG2D transcripts (64, 103), demonstrating that TGF-β mainly acts through post-transcriptional mechanisms on NKG2D expression. A more recent study provided conclusive evidence that induction of mature miR-1245 by TGF-β controls NKG2D expression in NK cells (123) (Figure 2). TGF-β augments processing of the pri-miR-1245 in NK cells and strongly increases the levels of mature miR-1245, which acts on a target site in the 3'-UTR of NKG2D transcripts. Overexpression or silencing of miRNA-1245 markedly reduced or enhanced surface NKG2D on NK cells, respectively (123). Of note, IL-15 suppressed the maturation of miRNA-1245, which is detectable in tumor-derived exosomes in hematopoietic malignancies (123). Expression of miRNA-1245 is up-regulated by c-myc, which directly binds to the miRNA-1245 promoter (124), indicating that exosomes of c-myc-driven tumors may harbor miRNA-1245 and thereby target NKG2D expression. However, TGF-β-mediated reduction of surface NKG2D levels is not completely abolished in miR-1245 knock-out cells, arguing for further mechanisms (123). Accordingly, other studies reported that TGF-β treatment substantially decreases DAP10 expression both at mRNA and protein levels (64, 125). Since NKG2D cell surface expression strictly depends on complex formation with DAP10 (55, 57), the TGF-β-mediated down-regulation of DAP10 indirectly complements the direct suppression of NKG2D expression by miR-1245 (64, 123).
Multiple miRNAs have also been shown to down-regulate expression of human NKG2DL by human tumor cells, thereby impairing NKG2D-mediated tumor recognition (97, 98, 126, 127). However, for most of these miRNAs the tumor-associated regulation is not clear. In contrast, expression of the oncomiR-183, up-regulated by TGF-β in lung cancer, was shown to down-regulate MICA and MICB glycoprotein expression in lung tumor cell lines through a binding site in the 3'-UTR of MICA/B transcripts. Accordingly, shRNA-mediated knock-down of either TGF-β or miR-183 resulted in enhanced MICA/B expression and cytolysis by CD8 T cells (128). TGF-β-induced miR-183 was also reported to impair expression and function of several activating NK receptors such as NKp44 through down-regulation of the adaptor protein DAP12 (129), and hence targets tumor recognition by NK cells at various receptors.
RESCUE OF THE NKG2D-NKG2DL AXIS IN CANCER BY TGF-β TARGETING THERAPIES
The crucial role of TGF-β in tumor progression and tumor immune escape renders this cytokine an important target for therapeutic intervention in cancer. Accordingly, multiple cancer therapies targeting the TGF-β pathway are currently being evaluated in clinical trials.
FIGURE 2 | Therapeutic targeting of TGF-β-mediated NKG2D down-regulation by cytotoxic lymphocytes. TGF-β bound to a tetrameric complex of TGF-β-RI and TGF-β-RII homodimers causes phosphorylation of SMAD proteins, which, together with further contextual transcriptional regulators, alter the cellular transcriptional profile. This ultimately also leads to markedly reduced cell surface NKG2D expression by cytotoxic lymphocytes, which appears to result from several direct and indirect effects: (i) decrease of NKG2D transcripts, (ii) maturation of miR-1245 interacting with the 3'-UTR of NKG2D transcripts thereby repressing NKG2D expression, and (iii) decreased levels of DAP10 transcripts and proteins, with DAP10 being essentially required for NKG2D surface expression. Therapeutic strategies interfering with TGF-β signaling (marked in red) to rescue NKG2D expression include: (i) neutralization of TGF-β through TGF-β-specific antibodies or soluble TGF-β-RII, (ii) inhibition of TGF-β-RI/II activation through small molecules such as galunisertib, and (iii) engineering therapeutic lymphocytes prior to adoptive transfer with dominant negative TGF-β-RII chains.
Therapies targeting the TGF-β pathway have, amongst others, the potential to boost tumor elimination by cytotoxic lymphocytes through harnessing NKG2D-mediated tumor recognition and boosting cytolysis by NK cells and cytotoxic T lymphocytes. For example, galunisertib (LY2157299), a small molecule inhibiting TGF-β-RI kinase activity (Figure 2), prevented in vitro the TGF-β-mediated down-regulation of surface NKG2D (as well as of NKp30, DNAM-1, and TRAIL) on activated NK cells and preserved their cytotoxic activity toward various tumor cell lines (130, 131). Accordingly, administration of galunisertib markedly enhanced the anti-tumor effect of adoptively transferred activated human NK cells in NSG mice bearing human tumors (130, 131). Significant therapeutic effects in phase II clinical trials were reported with galunisertib given either in combination with gemcitabine in pancreatic cancer (132) or as a monotherapy in hepatocellular carcinoma (133). Importantly, no adverse side effects and no cardiac toxicity were reported in several clinical trials (134). Encouraging pre-clinical studies show that a combined cancer treatment using galunisertib together with checkpoint blockade antibodies strongly potentiated cancer immunity (43, 135).
Suppressive effects of TGF-β may also be overcome by targeted delivery of the cytokines IL-2, IL-15, and IL-18 into the tumor. While TGF-β was shown to have a dominant effect over IL-2 or IL-15 alone with regard to NKG2D modulation on the surface of NK cells (64, 105), a combination of IL-2 and IL-18 protected NK-92MI cells from TGF-β-mediated NKG2D down-regulation and the associated impairment of NK cell function (136). An IL-15 superagonist/IL-15Rα fusion complex (ALT-803) rescued NK cytolysis of tumor cell lines from TGF-β1-mediated immunosuppression in vitro and diminished TGF-β1-mediated down-regulation of surface NKG2D (137). IRX-2, a poorly defined mixture of cytokines derived from the culture supernatants of activated lymphocytes, was tested in clinical trials for treatment of head and neck squamous cell cancer, and increased NKG2D surface expression and NKG2D-dependent NK cytotoxicity, even in the presence of TGF-β (109). TGF-β-neutralizing macromolecules such as TGF-β-specific mAb or soluble forms of TGF-βRII are currently evaluated in several phase I and II clinical trials for treatment of patients with various solid tumors (4). A recent report on a phase I/II clinical trial for treatment of chemo-refractory metastatic breast cancer with the TGF-β-neutralizing mAb fresolimumab during radiotherapy did not observe an objective or abscopal response in tumor patients treated with fresolimumab (138, 139). Exploratory analyses of circulating T cells from these patients indicated that this treatment regimen with fresolimumab was not sufficient to reverse the impaired T cell function observed in these cancer patients (139). In addition, various chimeric molecules consisting of soluble TGF-βRII receptors, acting as TGF-β traps, linked to checkpoint blockade antibodies are currently being tested in pre-clinical studies and clinical trials. Several pre-clinical studies have already shown substantially enhanced anti-tumor responses as compared to a monotherapy with anti-CTLA4 or anti-PD-L1 mAb in various mouse solid tumor models (140, 141). For example, administration of a bifunctional fusion protein, termed M7824, with an anti-PD-L1 mAb coupled to the extracellular domain of TGF-β-RII, provided efficient tumor control in preclinical models of colorectal and breast tumors. M7824 administration resulted in a shift of tumor-infiltrating immune cell populations toward an increase of cytotoxic CD8 T cells and NKG2D+ NKp46+ NK cells, which mediated tumor immunity (141). M7824 has already been given to a small cohort of heavily pretreated patients with advanced solid tumors, showing early signs of efficacy and a manageable safety profile (142), and is currently undergoing further clinical trials in patients with advanced solid tumors (e.g., NCT02517398, NCT02699515).
An elegant approach to shield adoptively transferred cytotoxic lymphocytes from the suppressive effects of TGF-β in cancer immunotherapy, such as NKG2D silencing, is the transduction of T cells or NK cells with a dominant negative form of TGF-βRII prior to adoptive transfer (143, 144). Transduction of cord blood NK cells with a dominant negative TGF-βRII efficiently blocked TGF-β signal transduction and supported the maintenance of the cell surface expression of activating receptors and NK cell cytotoxicity in the presence of TGF-β (144). Treatment of a small cohort of chemo-refractory Hodgkin lymphoma patients with TGF-βRII-transduced autologous EBV-derived tumor antigen-specific CD8 T cells showed complete remission in four out of seven patients (145), suggesting that this type of engineered cytotoxic lymphocytes is safe and efficacious.
Another elegant strategy attempts to convert immunosuppressive signals of soluble TGF-β into stimulatory signals using the chimeric antigen receptor (CAR) concept. A recent report created a chimeric receptor consisting of a TGF-β-binding scFv fused to the transmembrane segment of CD28 and the cytoplasmic signaling domains of both CD28 and CD3ζ (146). T cells ectopically expressing such a CAR were activated by TGF-β-induced CAR dimerization, which led to activation of both NFAT and NFκB with a subsequent stimulation of Th1 cytokine responses and enhanced T cell expansion (146). It will be of great interest to address the in vivo performance of such anti-TGF-β CAR T cells, utilizing TGF-β as an activating growth factor, in mouse models of solid tumors.
CONCLUDING REMARKS
TGF-β broadly and potently suppresses the effector functions of NK cells and cytotoxic T lymphocytes with the TGF-β-mediated impairment of the NKG2D axis representing an important facet of this phenomenon in cancer immunity. Down-regulation of both NKG2D, on cytotoxic lymphocytes, and NKG2DL surface expression, on tumor cells, facilitates the immune escape of tumor cells from induced-self recognition and elimination by cytotoxic lymphocytes. Hence, targeting TGF-β appears to represent a key intervention for an efficient boosting of tumor immunity and should be considered in future cancer treatment modalities. However, the intracellular mechanisms mediating the suppression of the NKG2D axis through TGF-β are not yet fully elucidated and further research is needed to define the underlying molecular and cellular pathways to allow for the development of more tailored and efficacious therapeutic options.
AUTHOR CONTRIBUTIONS
ML and AS wrote the manuscript. | 6,517 | 2019-11-15T00:00:00.000 | [
"Medicine",
"Biology"
] |
Fourth-Order Compact Formulation for the Resolution of Heat Transfer in Natural Convection of Water-Cu Nanofluid in a Square Cavity with a Sinusoidal Boundary Thermal Condition
In the present work, we numerically study the laminar natural convection of a nanofluid confined in a square cavity. The vertical walls are assumed to be insulated, non-conducting, and impermeable to mass transfer. The horizontal walls are differentially heated: the bottom wall is maintained at a hot (sinusoidal) temperature while the top wall is cold. The objective of this work is to develop a new highly accurate method for solving the heat transfer equations. The new method is a Fourth-Order Compact (F.O.C) scheme. This work aims to show the interest of the method and to understand the effect of the presence of nanofluids in closed square systems on the natural convection mechanism. The numerical simulations are performed for a Prandtl number Pr = 6.2, Rayleigh numbers in the range 10³ ≤ Ra ≤ 5 × 10⁵, and volume fractions χ of the nanofluid (water + Cu) varying between 0% and 10%.
Introduction
Nanofluids are engineered colloidal suspensions of nanoparticles (1 - 100 nm) in a base fluid. Common base fluids include water, oil, and ethylene glycol, while nanoparticles are typically made of chemically stable metals, metal oxides or carbon in various forms. The use of particles of nanometer dimension was first continuously studied by a research group at the Argonne National Laboratory a decade ago. S. Choi [1] in 1995 was probably the first to call fluids with particles of nanometer dimensions "nanofluids". He showed substantial augmentation of heat transported in suspensions of copper or aluminum nanoparticles in water and other liquids. Compared with suspended particles of millimeter or micrometer dimensions, nanofluids show better stability and rheological properties, dramatically higher thermal conductivities, and no penalty in pressure drop. Several published studies have mainly focused on prediction and measurement techniques to evaluate the thermal conductivity of nanofluids. It is noticeable that only a few papers have discussed the convective heat transfer of nanofluids, including experimental and theoretical investigations.
A numerical study of natural convection of a copper-water nanofluid in a two-dimensional enclosure was conducted by Khanafer et al. [2]. The nanofluid in the enclosure was assumed to be a single phase. It was found that, for any given Grashof number, heat transfer in the enclosure increased with the volumetric fraction of the copper nanoparticles in water. Lee et al. [3] measured the thermal conductivity of Al2O3-water and Cu-water nanofluids and indicated that the thermal conductivity of nanofluids increases with solid volume fraction. They concluded that any new model of nanofluid thermal conductivity should contain the effect of surface area and structure-dependent behavior as well as the size effect. Xie et al. [4] added spherical and cylindrical nano-sized SiC particles to water and ethylene glycol, separately, and found that cylindrical nanoparticles increased thermal conductivity more than spherical ones. The dependence of the thermal conductivity of nanoparticle-fluid mixtures was estimated by Xie et al. [5]. Some theoretical and experimental studies have been reported on the convective heat transfer coefficient [6]- [9].
Sandeep Naramgari and C. Sulochana [10] analyzed the momentum and heat transfer behavior of an MHD nanofluid embedded with conducting dust particles past a stretching surface in the presence of a volume fraction of dust particles. They solved the equations numerically using a Runge-Kutta based shooting technique and showed that the increase in the interaction between the fluid and particle phases enhanced the heat transfer rate and reduced the friction factor. Nader Ben-Cheikh et al. [11] studied natural convection in a square enclosure filled with a water-based nanofluid (water with Ag, Cu, Al2O3 or TiO2 nanoparticles) with a non-uniform (sinusoidal) temperature distribution maintained at the bottom wall. An accurate finite volume scheme along with a multi-grid technique was devised for the solution of the governing equations. Tiwari et al. [12] numerically investigated the behavior of a copper-water nanofluid in a two-sided lid-driven differentially heated cavity. They considered different cases characterized by the direction of movement of the walls and found that both the Richardson number and the direction of the moving walls influenced the fluid flow and thermal behavior. Yadil et al. [13] studied a baffled square cavity filled with a Cu-water nanofluid.
The effects of Rayleigh number, volume fraction and partition location on the average Nusselt number were studied. I. El Bouihi and R. Sehaqui [14] simulated the flow features of nanofluids for a range of solid volume fractions χ and a sinusoidal thermal boundary condition, and obtained correlations of heat transfer in enclosures for two different thermal boundary conditions on the left wall. Aminossadati and Ghasemi [15] studied natural convection cooling of a localized heat source at the bottom of a nanofluid-filled enclosure. Ogut [16] investigated natural convection of water-based nanofluids in an inclined enclosure with a heat source, using the expression for calculating the effective thermal conductivity of solid-liquid mixtures proposed by Yu and Choi [17]. Ghasemi and Aminossadati [18] considered periodic natural convection in a nanofluid-filled enclosure with an oscillating heat flux. Non-uniform heating of surfaces in buoyancy-driven flow in a cavity has a significant effect on the flow and heat transfer characteristics and finds applications in various areas such as crystal growth in liquids, energy storage, geophysics, solar distillers and others. In a relatively recent study, Sarris et al. [19] reported that a sinusoidal wall temperature variation produced uniform melting of materials such as glass in their detailed study of the effect of sinusoidal top wall temperature variations on natural convection within a square enclosure where the other walls are insulated. Corcione [20] studied natural convection in an air-filled rectangular enclosure heated from below and cooled from above for a variety of thermal boundary conditions at the side walls. Roy and Basak [21] studied numerically natural convection flows in a square cavity with non-uniformly (sinusoidally) heated wall(s) using the finite element method. The bottom wall and one vertical wall were heated (uniformly and non-uniformly) and the top wall was insulated, while the other vertical wall was cooled by means of a constant temperature bath. Sathiyamoorthy et al. [22] investigated steady natural convection flows in a square cavity with linearly heated side wall(s).
The present work is motivated by the need to optimize and improve heat transfer by natural convection in closed square cavities. Although extensive research has addressed rectangular cavities filled with nanofluids, few studies have focused on theoretical or numerical discretizations of high order (≥2). Systems of equations in the heat transfer area are usually treated by classical numerical methods such as Finite Elements (FE), Finite Volumes (FV) and Finite Differences (FD), or by using adapted software such as "FLUENT", to solve fluid mechanics problems such as conductive, convective or mixed heat transfer in regular geometries.
More specifically, fourth-order schemes have been used to solve the Navier-Stokes equations in enclosures, without considering the energy equation, by Ercan Erturk and C. Gokcol [33]. The objective of this work is to develop a new method for solving heat transfer equations in convection. The new method is a Fourth-Order Compact (F.O.C) scheme. This work aims to show the interest of the method and to understand the effect of the presence of nanofluids in closed square systems on the natural convection mechanism.
Mathematical Formulation
Consider a square cavity filled with a nanofluid. The vertical walls are assumed to be insulated, non-conducting, and impermeable to mass transfer. The horizontal walls are differentially heated: the bottom wall is maintained at a hot (sinusoidal) temperature while the top wall is cold (Figure 1). The nanofluid in the enclosure is Newtonian, incompressible, and laminar. The nanoparticles are assumed to have a uniform shape and size. Moreover, it is assumed that both the fluid phase and the nanoparticles are in a thermal equilibrium state and flow at the same velocity. The thermophysical properties of the nanofluid are assumed to be constant except for the density variation in the buoyancy force, which is based on the Boussinesq approximation.
We have considered the continuity, momentum and energy equations for a Newtonian, Fourier, constant-property fluid governing an unsteady, two-dimensional flow. It is further assumed that radiation heat transfer among the sides is negligible with respect to the other modes of heat transfer. Under the assumption of constant thermal properties, the governing equations for an unsteady, incompressible, two-dimensional flow are the continuity equation (1), the x- and y-momentum equations (2) and (3), and the energy equation (4).
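For concreteness, a standard form of this single-phase nanofluid model under the Boussinesq approximation — written here following the formulation of Khanafer et al. [2], and not necessarily in the authors' exact notation — is:
\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0,
\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}=-\frac{1}{\rho_{nf}}\frac{\partial p}{\partial x}+\nu_{nf}\left(\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\right),
\frac{\partial v}{\partial t}+u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y}=-\frac{1}{\rho_{nf}}\frac{\partial p}{\partial y}+\nu_{nf}\left(\frac{\partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y^{2}}\right)+\frac{(1-\chi)\rho_{f}\beta_{f}+\chi\rho_{s}\beta_{s}}{\rho_{nf}}\,g\,(T-T_{c}),
\frac{\partial T}{\partial t}+u\frac{\partial T}{\partial x}+v\frac{\partial T}{\partial y}=\alpha_{nf}\left(\frac{\partial^{2}T}{\partial x^{2}}+\frac{\partial^{2}T}{\partial y^{2}}\right),\qquad \alpha_{nf}=\frac{k_{nf}}{(\rho c_{p})_{nf}}.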
The viscosity of the nanofluid can be estimated with the existing relations for the two-phase mixture. The equation given by Brinkman [23] has been used as the relation for the effective viscosity in this problem, μ_nf = μ_f/(1 − χ)^2.5. Xuan and Li [24] have experimentally measured the apparent viscosity of the transformer oil-water nanofluid and of the water-copper nanofluid in the temperature range of 20˚C - 50˚C. The experimental results reveal relatively good agreement with Brinkman's theory. The thermophysical properties of the fluid and solid phases are shown in Table 1.
The effective density of the nanofluid at the reference temperature is ρ_nf = (1 − χ)ρ_f + χρ_s. The heat capacitance of the nanofluid is expressed, as in Abu-Nada [25] and Khanafer et al. [2], as (ρc_p)_nf = (1 − χ)(ρc_p)_f + χ(ρc_p)_s. The effective thermal conductivity of the nanofluid is approximated by the Maxwell-Garnetts model [26], k_nf/k_f = [k_s + 2k_f − 2χ(k_f − k_s)]/[k_s + 2k_f + χ(k_f − k_s)]. Equations (1)-(4) can be converted to dimensionless form by introducing the usual dimensionless variables and parameters; the governing equations of continuity, linear momentum and energy for unsteady laminar flow in Cartesian coordinates then take a dimensionless form.
Table 1. Thermophysical properties of water and nanoparticles (physical properties of pure water and Cu).
The enclosure boundary conditions consist of no-slip and no-penetration walls, U = V = 0 on all four walls. The thermal boundary condition on the bottom wall is a sinusoidal hot-temperature distribution. The left and right vertical walls are assumed to be insulated, non-conducting, and impermeable to mass transfer, and the top wall is at the cold temperature. The governing equations for the present study in the (ψ, ω) formulation, taking into account the above-mentioned assumptions, are written in dimensionless form as the kinematics (stream function) equation and the vorticity equation. Before turning to the application of the fourth-order method to the equations governing our problem, we combine Equations (12) and (13) in a condensed form by introducing a dummy variable Γ, which replaces either the temperature θ or the vorticity ω.
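As an illustration of these property relations, the following Python sketch evaluates the effective properties of the water-Cu nanofluid for a given volume fraction. The numerical property values for water and copper used below are typical literature values assumed here for illustration, not necessarily those of Table 1.

# Minimal sketch of the effective-property relations quoted above:
# mixture rules for density and heat capacitance, Brinkman viscosity [23],
# Maxwell-Garnett thermal conductivity [26].
def nanofluid_properties(chi,
                         rho_f=997.1, cp_f=4179.0, k_f=0.613, mu_f=1.0e-3,   # water (assumed values)
                         rho_s=8933.0, cp_s=385.0, k_s=400.0):               # copper (assumed values)
    """Return effective density, heat capacitance, viscosity and conductivity
    of a water-Cu nanofluid for a solid volume fraction chi."""
    rho_nf = (1.0 - chi) * rho_f + chi * rho_s                   # mixture rule
    rhocp_nf = (1.0 - chi) * rho_f * cp_f + chi * rho_s * cp_s   # heat capacitance
    mu_nf = mu_f / (1.0 - chi) ** 2.5                            # Brinkman model
    k_nf = k_f * (k_s + 2.0 * k_f - 2.0 * chi * (k_f - k_s)) \
               / (k_s + 2.0 * k_f + chi * (k_f - k_s))           # Maxwell-Garnett
    return rho_nf, rhocp_nf, mu_nf, k_nf

if __name__ == "__main__":
    for chi in (0.0, 0.04, 0.10):
        rho, rhocp, mu, k = nanofluid_properties(chi)
        print(f"chi={chi:.2f}: rho={rho:7.1f}  (rho*cp)={rhocp:10.1f}  mu={mu:.3e}  k={k:.3f}")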
All these terms are listed in Table 2.
Dimensionless boundary conditions for ψ and ω are specified as follows. For the vorticity, Störtkuh et al. [27] have presented an analytical asymptotic solution near the corners of the cavity and, using finite element bilinear shape functions, they have also presented a singularity-removed boundary condition for the vorticity at the corner points as well as at the wall points. For the boundary conditions, in both of the numerical methods described above we follow Störtkuh et al. [27] and use their expression for calculating the vorticity values at the wall.
Table 2. Quantities transported by the general conservation equation.
For the corner points, we again follow Störtkuh et al. [27] and use their expression for calculating the vorticity values, where V is the speed of the wall, which in our case is equal to 0 for the four stationary walls.
In explicit notation, the vorticity at the wall points shown in Figure 2(a) and at the corner points shown in Figure 2(b) is calculated from these expressions. The reader is referred to Störtkuh et al. [27] for details on the boundary conditions. The local and averaged heat transfer rates at the bottom hot wall of the cavity are presented by means of the local and averaged Nusselt numbers, Nu and the averaged Nu.
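A standard definition of these quantities for this bottom-heated configuration — assumed here following the related literature (e.g., Khanafer et al. [2] and Ben-Cheikh et al. [11]), and not necessarily the authors' exact expression — is

Nu(X) = -\frac{k_{nf}}{k_{f}}\left.\frac{\partial \theta}{\partial Y}\right|_{Y=0}, \qquad \overline{Nu} = \int_{0}^{1} Nu(X)\, dX.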
Introduction
High-Order Compact (HOC) formulations are becoming more popular in the computational fluid dynamics (CFD) field of study. Compact formulations provide more accurate solutions on a compact stencil. In finite differences, a standard three-point discretization provides second-order spatial accuracy, and this type of discretization is very widely used. When a high-order spatial discretization is desired, i.e. fourth-order accuracy, then a five-point discretization has to be used. However, in a five-point discretization there is a complexity in handling the points near the boundaries. High-order compact schemes provide fourth-order spatial accuracy in a 3 × 3 stencil, and this type of compact formulation does not have the complexity near the boundaries that a standard wide (five-point) fourth-order formulation would have. Dennis and Hudson [28], MacKinnon and Johnson [29], Gupta et al. [30], Spotz and Carey [31], and Li et al. [32] have demonstrated the efficiency of HOC schemes on the stream function and vorticity formulation in two dimensions. In the literature, it is possible to find numerous different types of iterative numerical methods for the momentum equations. These numerical methods, however, could not be easily used in HOC schemes because of the final form of the HOC formulations used in References [28]- [32]. This fact might be counted as a disadvantage of HOC formulations, in that the coding stage is rather complex due to the resulting stencil used in these studies. It would be very useful if any numerical method for the solution of the momentum equations described in books and papers could be easily applied to HOC formulations. E. Erturk and C. Gokcol [33] present a new Fourth-Order Compact Formulation. The difference of this formulation with References [28]- [32] is not in the way that the Fourth-Order Compact scheme is obtained. The main difference, however, is in the way that the final forms of the equations are written. The main advantage of this formulation is that any iterative numerical method used for the Navier-Stokes equations can be easily applied to this new FOC formulation, since the final form of the presented FOC formulation is of the same form as the Navier-Stokes equations. Moreover, if someone already has a second-order accurate O(Δx²) code for the solution of the conservation equations of mass and momentum, they can easily convert their existing code to fourth-order accuracy O(Δx⁴) by just adding some coefficients into their existing code. In this study, using this new compact formulation, we have solved the conservation equations of mass, momentum and energy in a square cavity for Rayleigh numbers in the range considered, taking water as the base fluid with a Prandtl number equal to Pr = 6.2, using a very fine grid mesh to demonstrate the efficiency of this new formulation.
Principle of the Fourth-Order Compact Method
We will use the equations of the stream function ψ, vorticity ω and energy θ, whose dimensionless forms are given by the stream function equation (22) and the general equation of conservation (23). For the first and second-order derivatives, the discretizations (24) and (25) are fourth-order accurate, where Φ_x and Φ_xx are the standard second-order central discretizations. If we apply the discretizations in Equations (24) and (25) to Equations (22) and (23), we obtain equations containing third and fourth derivatives of the stream function and of the general equation of conservation, which brings together the equations of vorticity and energy. In order to find an expression for these derivatives we use Equations (22) and (23).
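The standard Taylor-series identities underlying Equations (24) and (25) — quoted here in generic form, not necessarily in the authors' exact notation (cf. Erturk and Gokcol [33]) — are

\phi_x = \Phi_x - \frac{\Delta x^{2}}{6}\,\frac{\partial^{3}\phi}{\partial x^{3}} + O(\Delta x^{4}), \qquad \phi_{xx} = \Phi_{xx} - \frac{\Delta x^{2}}{12}\,\frac{\partial^{4}\phi}{\partial x^{4}} + O(\Delta x^{4}),

with \Phi_x = \frac{\phi_{i+1,j}-\phi_{i-1,j}}{2\Delta x} and \Phi_{xx} = \frac{\phi_{i+1,j}-2\phi_{i,j}+\phi_{i-1,j}}{\Delta x^{2}} (and similarly in y). The third and fourth derivatives are then eliminated by differentiating the governing equations themselves, which is what keeps the stencil compact.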
For example, when we take the first and second x-derivatives of the stream function Equation (22), and likewise the first and second y-derivatives, we obtain expressions for the third derivatives of the stream function. Using the standard second-order central discretizations given in Table 3, these equations can be written in discrete form. When we substitute Equations (35) and (37) into Equation (28), we obtain the following finite difference equation.
Table 3. Derivations and their corresponding second-order central discretizations.
We note that the solution of Equation (38) is also a solution to the stream function Equation (22) with fourth-order spatial accuracy. Therefore, if we numerically solve Equation (38), the solution we obtain will satisfy the stream function equation up to fourth-order accuracy.
In order to obtain a fourth-order approximation for the vorticity and energy equation (23), we follow the same procedure. When we take the first and second derivatives of the general equation of conservation (23) with respect to the x- and y-coordinates, we obtain expressions for its third and fourth derivatives. If we substitute Equations (39) and (41) for the third derivatives of the general equation of conservation into Equations (29), (40) and (42), and also substitute Equations (34) and (36) for the third derivatives of the stream function into Equations (29), (40) and (42), and finally substitute Equations (40) and (42) for the fourth derivatives of the general equation of conservation into Equation (29), then we obtain the finite difference equation (43).
Again we note that the solution of Equation (43) satisfies the vorticity and energy Equation (23) with fourth-order accuracy.
As the final form of our FOC scheme, we prefer to write Equations (38) and (43) as Equations (44) and (45). We note that the finite difference Equations (44) and (45) are fourth-order accurate, O(Δx⁴, Δy⁴), approximations of the stream function, vorticity and energy Equations (22) and (23). In Equations (44) and (45), however, if A, B, C, D, E and F are chosen equal to 0, then the finite difference Equations (44) and (45) simply become Equations (47) and (48), which are the standard second-order accurate, O(Δx², Δy²), approximations of the stream function and the general equation of conservation (22) and (23). When we use Equations (44) and (45) for the numerical solution of the stream function and the general equation of conservation, we can easily switch between second and fourth-order accuracy just by using zero values for the coefficients A, B, C, D, E and F or by using the expressions defined in Equation (46) in the code. We note that the numerical solutions of Equations (44) and (45), strictly provided that the second-order discretizations in Table 3 are used and also strictly provided that a uniform grid mesh is used, are fourth-order accurate approximations of the stream function and the general equation of conservation (22) and (23). The only difference between Equations (44) and (45) and Equations (22) and (23) are the coefficients A, B, C, D, E and F. These equations are therefore of the same form, and all the iterative numerical methods (such as SOR, ADI, factorization schemes, pseudo-time iterations, etc.) used to solve the stream function, vorticity and energy Equations (22) and (23) can also be easily applied to the fourth-order Equations (44) and (45). In our work we apply the ADI method to the fourth-order equations.
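The key point — that using the governing equation to eliminate higher derivatives yields fourth-order accuracy on the same compact stencil, with only extra coefficient terms added to a second-order code — can be illustrated with a minimal one-dimensional analogue. The sketch below is an illustration only, not the authors' two-dimensional scheme of Equations (44) and (45): it solves u'' = f with the classical three-point scheme and with the compact (Numerov) weighting of the right-hand side, and prints the observed order of accuracy.

# 1-D compact-scheme illustration: same 3-point stencil, different order of accuracy.
import numpy as np

def solve_poisson_1d(N, compact):
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    f = -np.pi ** 2 * np.sin(np.pi * x)             # exact solution: u = sin(pi x), u(0)=u(1)=0
    A = np.zeros((N - 1, N - 1))                    # tridiagonal matrix of the 3-point stencil
    for i in range(N - 1):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < N - 2:
            A[i, i + 1] = 1.0
    if compact:                                      # Numerov (compact) weighting of f
        rhs = h ** 2 * (f[:-2] + 10.0 * f[1:-1] + f[2:]) / 12.0
    else:                                            # standard second-order scheme
        rhs = h ** 2 * f[1:-1]
    u = np.zeros(N + 1)
    u[1:-1] = np.linalg.solve(A, rhs)
    return np.max(np.abs(u - np.sin(np.pi * x)))

for compact in (False, True):
    e1, e2 = solve_poisson_1d(20, compact), solve_poisson_1d(40, compact)
    print("compact" if compact else "classic",
          f"max error: {e1:.2e} -> {e2:.2e}, observed order = {np.log2(e1 / e2):.2f}")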
As a measure of convergence to the steady state, during the iterations we monitored three residual parameters. The first residual parameter, RES1, is defined as the maximum absolute residual of the finite difference equations of the steady stream function and general Equations (44) and (45). The magnitude of RES1 is an indication of the degree to which the solution has converged to steady state; in the limit, RES1 would be zero. The second residual parameter, RES2, is defined as the maximum absolute difference between two iteration steps in the stream function, vorticity and energy variables. RES2 gives an indication of the significant digit on which the code is iterating. The third residual parameter, RES3, is similar to RES2, except that it is normalized by the representative value at the previous time step. This then provides an indication of the maximum percent change in ψ and Γ in each iteration step. In our calculations, for all Rayleigh numbers we considered that convergence was achieved when both residuals dropped below the chosen threshold. Such a low value was chosen to ensure the accuracy of the solution. At these convergence levels, the second residual parameters indicated that the stream function, vorticity and energy variables are accurate to the 10th and 9th digit, respectively, at a grid point and even more accurate at the rest of the grid points. In addition, at these convergence levels the third residual parameters indicated that the stream function, vorticity and energy variables are changing by only 10⁻⁹% and 10⁻⁸% of their values, respectively, in an iteration step at a grid point, and by an even smaller percentage at the rest of the grid points. These very low residuals ensure that our solutions are indeed very accurate.
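A minimal sketch of how the convergence monitors RES2 and RES3 can be evaluated from two successive iterates of a grid variable (stream function, vorticity or temperature) is given below. This is an illustration only, not the authors' code, and the convergence threshold is left as a user choice rather than the (unstated) value used in the paper.

# Convergence monitors for an iterative steady-state solver.
import numpy as np

def res2(new, old):
    """RES2: maximum absolute change between two iteration steps."""
    return np.max(np.abs(new - old))

def res3(new, old, eps=1e-30):
    """RES3: maximum change normalized by the value at the previous step."""
    return np.max(np.abs((new - old) / (np.abs(old) + eps)))

# Example with two (synthetic) successive iterates of a field on an 82 x 82 grid.
old = np.random.rand(82, 82)
new = old + 1e-9 * np.random.rand(82, 82)
print(f"RES2 = {res2(new, old):.3e}, RES3 = {res3(new, old):.3e}")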
Results and Discussion
In the present grid independence test, the Prandtl number is set to Pr = 6.2 (pure water). The nanoparticles are chosen to be copper (Cu) with a solid volume fraction χ = 0.1 and a Rayleigh number Ra = 10⁵. Numerical computations have been carried out on five different grid sizes: 32 × 32, 42 × 42, 62 × 62, 82 × 82 and 102 × 102. Table 4 groups the values of the averaged Nusselt number through the hot wall and the maximum value of the stream function. A uniform grid has been used for all the computations. The distributions of the u-velocity in the vertical mid-plane and the v-velocity in the horizontal mid-plane are shown in Figure 3. It is observed that the curves overlap with each other for 82 × 82 and 102 × 102, so a grid of 82 × 82 is chosen for further computation.
Our code has been tested for natural convection fluid flows in differentially heated cavities and in the Rayleigh-Bénard configuration for Rayleigh numbers between 10³ and 10⁶ (Table 5) and gave excellent results (see refs. [12] [14] [34]- [36]). In this section, the nanofluid-filled enclosure is studied for a range of solid volume fractions 0% ≤ χ ≤ 10%, and the Rayleigh number varies from 10³ to 10⁵. For all simulations the considered base fluid is water (Pr = 6.2).
In Figure 4, we present the streamlines (top) and isotherms (bottom) for 10³ ≤ Ra ≤ 10⁵, for the case of a water-Cu nanofluid and pure water. The value of the solid volume fraction is set to χ = 0.04. Figure 5 represents the same physical quantities but for a volume fraction value of χ = 0.1. Due to the temperature distribution imposed at the bottom wall and to the boundary conditions on the vertical walls, we observe a symmetric behavior in both the streamlines and the contour maps of the isotherms. We can see that, whatever the Rayleigh number and the value of the solid volume fraction, the flow is mainly composed of two counter-rotating circulating cells.
Figure 6 presents the velocity profiles V(X) and U(Y) along the mid-sections of the enclosure, X = 0.5 and Y = 0.5, for different values of χ, and is in good concordance with the fact that the nanofluid moves more slowly than pure water. Indeed, for Ra = 10³, the deviation (relative to χ = 0) of the maximum vertical velocity is ΔV_max = 8.73% for χ = 0.1. As far as the temperature distribution is concerned, clear differences are observed in the isotherm contour plots compared to the case χ = 0. These differences are accentuated as the solid volume fraction increases, and they mean that the presence of nanoparticles especially affects the heat transfer rate through the enclosure.
The heat transfer distribution through the hot wall is displayed in Figure 7 through the plotted lines of the local Nusselt number for different values of Ra. One can see that for all combinations of Ra and χ, the local Nusselt number behavior is symmetric with respect to the plane X = 0.5. For low Rayleigh numbers (Ra = 5 × 10³) and χ = 0, the transfer of heat through the hot wall is relatively low, with a slight curvature at X = 0.5. This curvature is due to the relatively higher intensity of the counter-rotating cells, represented by the highest value of ψ_max when χ = 0. When χ increases to χ = 0.1, the curvature at the center disappears because the fluid velocity decreases. The heat transfer in this case is maximum at X = 0.5 and is higher due to the presence of nanoparticles, whose thermal conductivity is much greater than that of water. Almost the same phenomena are observed on the curves related to Ra = 5 × 10⁴ and Ra = 5 × 10⁵, with a maximum heat transfer in the vicinity of X = 0.25 and X = 0.75. For example, for Ra = 5 × 10⁴, the maximum Nusselt number value is Nu_max = 7.374 and is situated at both locations X = 0.317 and X = 0.682 for χ = 0. For χ = 0.1, Nu_max = 10.394 and is located at X = 0.317 and X = 0.695.
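A small sketch of how local Nusselt profiles like those of Figure 7 can be extracted from a computed dimensionless temperature field θ(X, Y) on a uniform grid is given below, using the standard bottom-wall definitions assumed earlier (one-sided derivative at Y = 0, trapezoidal average); the field and the conductivity ratio are inputs supplied by the user.

# Local and averaged Nusselt number at the hot bottom wall from a temperature field.
import numpy as np

def nusselt_bottom(theta, h, k_ratio=1.0):
    """theta: array of shape (NX+1, NY+1) with theta[i, j] at X = i*h, Y = j*h.
    k_ratio: k_nf / k_f. Returns (local Nu along X, averaged Nu)."""
    # second-order one-sided derivative of theta with respect to Y at the wall Y = 0
    dtheta_dY = (-3.0 * theta[:, 0] + 4.0 * theta[:, 1] - theta[:, 2]) / (2.0 * h)
    nu_local = -k_ratio * dtheta_dY
    # trapezoidal rule for the averaged Nusselt number over 0 <= X <= 1
    nu_avg = h * (0.5 * nu_local[0] + nu_local[1:-1].sum() + 0.5 * nu_local[-1])
    return nu_local, nu_avg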
The variations of the average Nusselt number Nu with Ra and χ are shown in Table 6. For Ra = 10³, there is a substantial increase in Nu as χ is increased above 2%. In general, Nu increases with χ. When χ is 2%, the increase is approximately 9.75%. When χ is 4%, the increase is approximately 19.51%. When χ is 8%, the increase is above 42.07%. For Ra = 10⁴, as χ is increased to 2%, an increase of 6.03% is observed, and a heat transfer augmentation of above 27.63% is obtained for χ = 8% compared to χ = 0%. Comparable or larger enhancements are observed for the other Rayleigh numbers Ra. Thus, one can conclude that the Nusselt number increases with the increase of the volume fraction χ and the Rayleigh number Ra.
Conclusions
In this study the heat transfer enhancement in a two-dimensional enclosure filled with nanofluids is studied numerically. This study presented a new fourth-order compact formulation and investigated the effect of a sinusoidal thermal boundary condition for different Rayleigh numbers Ra and volume fractions of nanoparticles. The flow and temperature fields are symmetric about the middle plane of the enclosure due to the symmetry of the boundary condition imposed on the bottom wall. From the results of this work, the following main conclusions may be drawn: • The fourth-order accurate compact formulation was developed and its results agree with previous studies.
• Our numerical code has been validated for different Rayleigh numbers.
• A comparative study illustrates that the suspended copper nanoparticles substantially increase the heat transfer rate, and that this enhancement grows with the nanoparticle volume fraction for the different Rayleigh numbers Ra considered.
In the near future, this study will be extended to different geometries and to other types of base fluids and nanoparticles.
Figure 1. Physical model and the coordinate system.
Figure 2. Grid points at the wall and at the corner: (a) wall points and (b) corner points.
Figure 6. Velocity profiles along the mid-plane for different Ra and different solid volume fractions (water-Cu).
Figure 7. Local Nusselt number through the heated wall for different Ra and solid volume fractions (water-Cu).
Table 6. Comparison of the average Nusselt number Nu for different Rayleigh numbers and various solid volume fractions.
Table 4. Grid independence results for water-Cu.
Table 5. Comparison between the present work and other studies for Nu. | 6,690.6 | 2016-04-15T00:00:00.000 | [
"Physics"
] |
Dynamic atomic reconstruction: how Fe3O4 thin films evade polar catastrophe for epitaxy
Polar catastrophe at the interface of oxide materials with strongly correlated electrons has triggered a flurry of new research activities. The expectations are that the design of such advanced interfaces will become a powerful route to engineer devices with novel functionalities. Here we investigate the initial stages of growth and the electronic structure of the spintronic Fe3O4/MgO (001) interface. Using soft x-ray absorption spectroscopy we have discovered that the so-called A-sites are completely missing in the first Fe3O4 monolayer. This allows us to develop an unexpected but elegant growth principle in which during deposition the Fe atoms are constantly on the move to solve the divergent electrostatic potential problem, thereby ensuring epitaxy and stoichiometry at the same time. This growth principle provides a new perspective for the design of interfaces.
one may want to design: interfaces which appear impossible to grow at first sight may now be tried out.
Here, we investigate the polar interface between Fe3O4 and the MgO (001) substrate, one of the most used interfaces in the research field of spintronics [16-24]. This interface is not at all understood in terms of atomic structure, electronic structure and growth mode. Growth on MgO (001) is also known to produce films with excellent physical properties [22]. In order to obtain direct insight into the atomic and electronic structure of the interface, we utilize soft x-ray absorption spectroscopy (XAS) at the Fe L2,3 edges. This spectroscopic technique is extremely sensitive to the local coordination and charge state of the Fe ions [25-28].
Fe3O4 thin films with thicknesses varying between 0.67 and 8 monolayers (ML) were grown on MgO (001). Each film has been grown on a new and freshly annealed substrate. The substrate temperature was kept at 250 °C during the growth in order to avoid Mg inter-diffusion at the Fe3O4/MgO interface [29, 30]. Details about the film growth are given in the Supplementary Materials [31]. One ML consists of one (001)-oriented layer of oxygen anions together with the appropriate number of Fe cations to maintain charge neutrality and stoichiometry, and has a thickness of 2.1 Å. In Figs. 1(c) and 1(d) we present representative reflection high energy electron diffraction (RHEED) and low energy electron diffraction (LEED) patterns, respectively, of a 200 nm thick Fe3O4 film to demonstrate that the surface is still smooth for very long deposition times.
The typical (√2 × √2)R45° surface reconstruction is also clearly visible. Fig. 1(e) shows the regular oscillations with time in the intensity of the specularly reflected RHEED beam during growth, indicating a two-dimensional layer-by-layer growth mode. Fig. 1(f) shows the resistivity of the films; the full Fe L2,3 XAS spectra are shown in the Supplementary Materials [Fig. S1]. We also include in Fig. 2 the spectra of bulk YBaCo3FeO7 [28], bulk FeO (reproduced from Ref. 32) and bulk Fe2O3 as references for Fe 3+ ions in tetrahedral coordination, Fe 2+ ions in octahedral coordination, and Fe 3+ ions in octahedral coordination, respectively. The line shapes of the spectra strongly depend on the multiplet structure given by the atomic-like Fe 3d-3d and 2p-3d Coulomb and exchange interactions, as well as by local crystal fields and the hybridization with the O 2p ligands [25-28]. Here we note the striking similarities of the spectral features of the 8 ML Fe3O4 thin film and bulk magnetite, which confirms that our Fe3O4 films have the correct stoichiometry.
We now focus on the thickness dependence of the spectra. Clear and systematic changes can be observed, in particular in the peak position of the spectral feature labeled (I) and in the intensity of the spectral feature labeled (II) relative to that of peak (I), see Fig. 2. The position of peak (I) of the thinnest Fe3O4 films, i.e. of the 0.67, 0.75 and 1 ML films, is the same as that of bulk Fe2O3, while for the thicker films, i.e. 2 ML and beyond, it is more similar to that of bulk YBaCo3FeO7. This gives a first indication that the thinnest films contain only tiny amounts of Fe 3+ ions in tetrahedral coordination and implies that such A-site Fe ions could essentially only be present for films of 2 ML thickness and beyond. This would then also explain why for the thinnest films one can see two separate peaks (I) and (II) as in bulk Fe2O3 (green curve), while for thicker films the appearance of an in-between peak associated with the Fe 3+ ions in tetrahedral coordination (red curve) fills up the valley between peaks (I) and (II), making peak (II) become a shoulder and the position of the larger peak (I) shift to lower energies. It is important to note that the foot at the onset of the Fe L3 edge, i.e. the feature between 706 and 707.7 eV, which is part of the spectral feature characteristic of Fe 2+ ions in octahedral B sites (see blue curve), is thickness independent. All of this strongly suggests that the spectral weight of the A-site and the B-site Fe 3+ ions varies strongly with thickness.
To interpret and better understand the XAS spectra and their thickness dependence we have performed calculations using the well-established configuration interaction cluster model that includes the full atomic multiplet theory and the local effects of the solid [25-28]. We have simulated each of the XAS spectra shown in Fig. 2 as a weighted sum of the different Fe constituents; the resulting relative concentrations are shown in Fig. 3, where the error bars reflect the deviations of the fits to the experimental data. From the relative concentrations of the constituents we have calculated the average valence or, equivalently, by taking the oxygen lattice to be complete, we have determined the Fe content y in our FeyO films.
These y values are plotted as black closed squares in the bottom panel of Fig. 3. We can observe that all points are very close to the Fe3/4O (gray) line, which confirms the correct stoichiometry of our films through the entire thickness range and is very consistent with the RHEED intensity oscillations, which have a constant time period, i.e. independent of the film thickness.
An important aspect that emerges directly from the simulations is the strong thickness dependence of the different Fe constituents, see Fig. 3. We recall that bulk Fe3O4 has 1/3 (33%) Fe 3+ ions in tetrahedral coordination (A-sites), 1/3 (33%) Fe 2+ and 1/3 (33%) Fe 3+ ions in octahedral coordination (B-sites). We found to our surprise that the amount of A-site Fe 3+ ions is practically negligible for the thinnest films, i.e. 2-3% instead of the 33% bulk value. At the same time, the amount of B-site Fe 3+ in the thinnest films is between 60-68%, much larger than the 33% bulk value. We also observe that with increasing film thickness the A-site Fe 3+ amount increases and the B-site Fe 3+ decreases, both approaching the 33% bulk value; see for example the 8 ML results in Fig. 3. Interestingly, the amount of B-site Fe 2+ is rather constant and independent of the film thickness; it fluctuates around the 33% bulk value.
These spectroscopic findings provide crucial data for the determination of the actual growth process and the interface structure. Especially the observation that the first monolayer of the Fe3O4 film has essentially no A-sites is a surprising piece of information. In fact, as far as the monolayer is concerned, the choice of 'nature' not to have A-sites is the simplest manner to solve the planar electrostatic potential problem. As can be seen from Figs. 1 (a) and (b), it is indeed the presence of the A-sites that causes the polar catastrophe to occur, as there are no negative ions in those A-site planes to neutralize the charges. So by not having A-sites for the first monolayer, there is also no electrostatic problem. What we then have is that the first monolayer constitutes basically a charge-neutral non-polar rocksalt FeO layer with 25% Fe vacancies. All Fe ions occupy the B-site, with 33% of them having the 2+ valence and 67% the 3+ state, and the vacancies are not ordered since we did not observe any superstructure. We have also carried out polarization dependent XAS measurements, and we are able to verify in detail that also the dichroic spectrum is consistent with this 33% Fe 2+ / 67% Fe 3+ B-site occupation. For the second monolayer, we observe in the experiment the appearance of some amount of A-sites, about 16.7%, see top panel of Fig. 3. We now can arrive at the following model, see Fig. 4, where the left panel shows the growth process and the right panels the corresponding net charges, electric field, and electric potential of each plane. Since in bulk magnetite a monolayer per unit cell includes 2 A-site Fe 3+, 2 B-site Fe 2+, 2 B-site Fe 3+, and 8 oxygen ions, we will use the formula notation Fe6O8 instead of Fe3/4O to describe each monolayer. When deposited, the second monolayer will first form a non-polar monolayer, like the first monolayer. Then, both the first and the second layers give away one Fe 3+ ion to the A-site plane in between, as shown in Fig. 4 (b), so that the potential divergence is nullified. For a 3-ML film, the layer added will again form a non-polar monolayer first. This monolayer and the subsurface monolayer then carry out the same process in which both give away one Fe 3+ ion to the space in between. See Fig. 4 (c). Again, the potential divergence remains nullified after this process, as shown on the rightmost panel of Fig. 4 (c). One complete bulk Fe6O8 layer is now formed. This growth process is repeated for the subsequent layers, and the model predicts that the concentration of the A-site ions will increase following 2(n-1)/6n, while the concentration of the Fe 2+ B-site ions will remain constant at 2n/6n = 33% and that of the Fe 3+ B-site ions will decrease following 2(n+1)/6n, where n denotes the number of monolayers. These predictions of the model are also presented in Fig. 3. We can see that the essential behavior observed in the experiment is well reproduced, with convergence to the bulk values for thicker films. We also would like to note that an ordering in the outer Fe5O8 layer can be made consistent with the often observed (√2 × √2)R45° surface reconstruction in thicker (001) Fe3O4 films; please see the Supplementary Materials for details [31].
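The layer-resolved fractions predicted by this model are easy to evaluate; the short Python sketch below simply tabulates the expressions quoted in the text, 2(n-1)/6n, 2n/6n and 2(n+1)/6n, for a few film thicknesses, showing the convergence toward the 1/3 bulk value for each site.

# Predicted site fractions vs. number of monolayers n (expressions from the text).
def site_fractions(n):
    return {"A-site Fe3+": 2 * (n - 1) / (6 * n),
            "B-site Fe2+": 2 * n / (6 * n),
            "B-site Fe3+": 2 * (n + 1) / (6 * n)}

for n in (1, 2, 3, 8):
    fractions = site_fractions(n)
    print(n, "ML:", {site: f"{value:.1%}" for site, value in fractions.items()})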
We thus have found that the A-sites are absent in the first monolayer or interface, and that Fe ions are on the move while the film is growing, accommodating the presence of A-sites inside the film while maintaining the proper crystal structure and stoichiometry. We clearly have a 'dynamic atomic reconstruction' taking place here.
It is interesting to note that the Fe3O4 thin films are insulating and that the interface does not induce metallicity as shown by the resistivity measurements displayed in Fig. 1 (f). This is obviously in contrast with the resistivity measurements on SrTiO3/LaAlO3 [5,[35][36][37] and SrTiO3/RETiO3 [13][14][15]. In principle, the Fe5O8 interface layer could have been conducting since this layer can be considered as a defective and doped rocksalt FeO layer. Yet, considering the fact that small polaron effects in bulk Fe3O4 are strong and hamper the conductivity [38][39][40], we may expect that this will also be the case for the interface layer. Its resistivity will then be dominated by strong scattering effects due to disorder.
Our findings have direct and important implications for the field of Fe3O4 spintronics. There are some reports concerning the possible existence of a magnetically dead layer at the interface [41][42][43], but others ascribe the decrease of the magnetization in the thin films to the presence of antiphase boundaries leading to superparamagnetic behavior of the domains [44][45][46]. Our findings may give credit to the proponents of the dead magnetic layer model. In view of the absence or low amount of A-sites in the interface region, some of the superexchange paths which determine the ferrimagnetism in Fe3O4 are certainly missing. This then would also explain why tunneling experiments have spin-polarizations different than expected from the properties of bulk Fe3O4 [47]. We now can propose that the insertion of a monolayer of magnetic metals like Fe, Co, Ni, or even noble metals like Cu, Ag, Au or Pt between the Fe3O4 and the insulating oxide substrate will drastically change the situation: the metal layer inserted will act as a charge reservoir that can accommodate the flow of planar charges required to stabilize a Fe3O4 interface layer which has A-sites like in the bulk. The occurrence of a magnetically dead layer can then be prevented and also the spin polarization at the interface may be increased. A hint that the latter is not unrealistic can be found in an early work by Dedkov et al. [48] on oxidized Fe films deposited on metal substrates.
To summarize, using soft x-ray absorption spectroscopy we find that nature provides us with an unexpected but elegant solution to the polar catastrophe problem at the Fe3O4/MgO (001) interface: the A-site Fe 3+ ions are missing in the first Fe3O4 layer, and the growth process involves movements of not only the surface but also the subsurface Fe ions, securing epitaxy and stoichiometry at the same time. Having identified this 'dynamic atomic reconstruction' growth principle, we conclude that we really have to think differently and openly about how polar interfaces can grow. Apparently, 'nature' offers us a much wider range of opportunities to prepare unstable polar interfaces. It would be interesting to put effort into growing a monolayer or a few monolayers of Fe3O4 film in which the defects are ordered, so that diffraction techniques can confirm the growth model.

Fig. S1 shows the full Fe L2,3 XAS spectra of the Fe3O4 films together with the spectra of bulk Fe3O4, bulk YBaCo3FeO7 (Fe 3+ in tetrahedral coordination) [1], bulk FeO (Fe 2+ in octahedral coordination) [2], and bulk Fe2O3 (Fe 3+ in octahedral coordination): the spectra are identical to those in Figure 2 in the main text, but with a wider photon energy window covering both the Fe L3 and L2 edges.
Configuration interaction cluster calculation. To interpret and better understand the x-ray absorption (XAS) spectra and their thickness dependence, we have performed simulations using the well-established configuration interaction cluster model that includes the full atomic multiplet theory and the local effects of the solid [3][4][5]. It accounts for the intra-atomic Fe 3d-3d and 2p-3d Coulomb and exchange interactions, the atomic 2p and 3d spin-orbit couplings, the O 2p-Fe 3d hybridization, and the local ionic crystal field. The calculations were done using the program XTLS 8.3 [5]. The XAS spectra of Fe3O4 can be decomposed into the three sub-spectra of the three Fe sites, i.e. A-site Fe 3+ , B-site Fe 2+ , and B-site Fe 3+ . We have considered an FeO4 and an FeO6 cluster for the Fe A-site and B-site, respectively. Parameters for the multipole part of the Coulomb interactions were given by 75% and 80% of the Hartree-Fock values for the d−d and p−d Slater integrals, respectively, while the monopole parts (Udd, Ucd) as well as the O 2p-Fe 3d charge transfer energy (∆) were adopted from typical values for Fe 2+ and Fe 3+ ions [6,7]. The hopping integrals between the Fe 3d and O 2p were calculated for the various Fe-O bond lengths according to Harrison's description [8]. The Fe-O bond lengths were taken from x-ray single-crystal structure diffraction data [9]. The crystal field parameter 10Dq was tuned to fit the experimental spectra. All parameters are listed in Ref. 10. The relative energy positions for the three sub-spectra were determined in such a way that the simulated total MCD spectrum fits the experimental MCD spectrum best, see Refs. 6, 11, and 12. The fits were done using the "NMinimize" function of the Mathematica software [13].
By making weighted sums of the three isotropic sub-spectra, using the "NMinimize" function of the Mathematica software [13] to obtain the best fit to the experimental spectrum of each Fe3O4 film of different thickness, we extract the relative amount of B-site Fe 2+ , A-site Fe 3+ , and B-site Fe 3+ ions as a function of film thickness. The XAS simulation results of the Fe3O4 thin films of 0.67, 0.75, 1, 1.5, 2, 3, 4, 5, 6, and 8 MLs are shown in Fig. S3.
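A hedged sketch of this weighted-sum step is given below. The original analysis used Mathematica's NMinimize, whereas this illustration uses a non-negative least-squares fit in Python; the energy grid and the Gaussian stand-ins for the calculated sub-spectra are assumptions made purely for demonstration.

```python
# Weighted-sum decomposition of a measured XAS spectrum into three
# site-resolved sub-spectra (A-site Fe3+, B-site Fe2+, B-site Fe3+).
import numpy as np
from scipy.optimize import nnls

def decompose(exp_spectrum, sub_spectra):
    """exp_spectrum: (n_energies,) measured intensity.
    sub_spectra: (n_energies, 3) columns = the three isotropic sub-spectra.
    Returns the relative amounts of the three Fe species (normalized to 1)."""
    weights, _residual = nnls(sub_spectra, exp_spectrum)
    return weights / weights.sum()

# Synthetic example with the right shapes (not real spectra):
energy = np.linspace(705.0, 715.0, 400)
sub = np.stack([np.exp(-(energy - e0) ** 2) for e0 in (708.0, 709.5, 710.5)], axis=1)
measured = sub @ np.array([0.03, 0.33, 0.64])   # thin-film-like mixture
print(decompose(measured, sub))                  # recovers ~[0.03, 0.33, 0.64]
```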
To double check the validity of the three isotropic sub-spectra, we compare them in Figure S4 with the experimental XAS spectra of the standard references for each Fe site, i.e., bulk YBaCo3FeO7 [1] for the A-site Fe 3+ , bulk FeO [2] for the B-site Fe 2+ , and bulk Fe2O3 for the B-site Fe 3+ (same as those shown in Figure 2 in the main text, and in Figure S1 in the supplementary materials). Each of them reproduces the experimental spectrum of its corresponding reference very well. We include also in Figure S4 the XAS spectrum from Fe0.04Mg0.96O [14], a system of Fe impurities embedded in MgO. The identical spectral features of the bulk FeO and of the Fe impurity system clearly demonstrate that XAS is most sensitive to the presence of the nearest neighbor ligands only. We have also done calculations for an Fe 2+ in FeO5 and an Fe 3+ in FeO5 by simply removing the apical oxygen of the FeO6, but otherwise using the same parameters as those for FeO6, as shown in Figure S4. Only minor differences can be observed between the isotropic XAS spectra of an Fe 2+ in FeO6 and in FeO5, and of an Fe 3+ in FeO6 and FeO5. The large difference between the isotropic XAS spectra of octahedral Fe 3+ and tetrahedral Fe 3+ originates from the fact that the effective 10Dq ligand/crystal field value is positive for the octahedral coordination while it is negative for the tetrahedral coordination.
1 ML Fe3O4 thin film: polarization dependence. Figure S5 shows the experimental linear polarization-dependent Fe L2,3 XAS spectra of the 1 ML Fe3O4. In the bottom panel, the experimental linear dichroic (LD) signal, defined as the difference between the two polarizations (E || C - E ⊥ C), is displayed, together with the calculated LD spectrum for the scenario of 33 % B-site Fe 2+ and 67 % B-site Fe 3+ . The LD signal can be well reproduced without including any contribution from the A-site Fe 3+ ion. All this can be very well understood: the isotropic spectrum is determined mostly by the octahedral part of the ligand/crystal field, while the dichroism is due to the small tetragonal part of the crystal field in the monolayer. This tetragonal part of the crystal field makes the orbital occupation of the high-spin d 6 ion anisotropic, resulting in the polarization dependence of the intensity of the Fe 2+ signal. The tetragonal part of the crystal field does not affect the orbital occupation of the spherical high-spin d 5 ion but sets up the energy splitting in the XAS final states, resulting in the polarization dependence of the Fe 3+ peak position.
Please note that these XAS spectra and the dichroism therein are very different from those of Fe atoms on MgO [15], confirming the notion that L2,3-XAS is indeed an extremely powerful method to determine the local electronic structure of transition metal systems.
1 ML Fe3O4 thin film capped with 10 ML MgO. Fig. S6 shows the Fe L2,3-XAS spectra of the 1 ML Fe3O4 film, the 1 ML Fe3O4 film capped with 10 ML MgO, and the Fe0.04Mg0.96O system [14]. One can clearly observe that the spectrum of the 1 ML Fe3O4 changes drastically upon capping with MgO and that the spectrum becomes identical to that of octahedral Fe 2+ in MgO.
"Physics"
] |
MicroRNA-21 Is a Downstream Effector of AKT That Mediates Its Antiapoptotic Effects via Suppression of Fas Ligand*
MicroRNA-21 (miR-21) is highly up-regulated during hypertrophic and cancerous cell growth. In contrast, we found that it declines in cardiac myocytes upon exposure to hypoxia. Thus, the objective was to explore its role during hypoxia. We show that miR-21 not only regulates phosphatase and tensin homologue deleted on chromosome 10 (PTEN), but also targets Fas ligand (FasL). During prolonged hypoxia, down-regulation of miR-21 proved necessary and sufficient for enhancing expression of both proteins. We demonstrate here for the first time that miR-21 is positively regulated via an AKT-dependent pathway, which is depressed during prolonged hypoxia. Accordingly, hypoxia-induced down-regulation of miR-21 and up-regulation of FasL and PTEN were reversed by activated AKT and reproduced by a dominant negative mutant, wortmannin, or PTEN. Moreover, the antiapoptotic function of AKT partly required miR-21, which was sufficient for inhibition of caspase-8 activity and mitochondrial damage. In consensus, overexpression of miR-21 in a transgenic mouse heart resulted in suppression of ischemia-induced up-regulation of PTEN and FasL expression, an increase in phospho-AKT, a smaller infarct size, and ameliorated heart failure. Thus, we have identified a unique aspect of the function of AKT by which it inhibits apoptosis through miR-21-dependent suppression of FasL.
MicroRNAs (miRNA) are molecules approximately twenty ribonucleotides long that specifically target mRNA through partial complementarity and, thereby, inhibit translation and/or induce their degradation. miR-21 is one of the most commonly and dramatically up-regulated miRNA in many cancers (1,2) and has been implicated in the inhibition of programmed cell death (2). Some of its validated targets include tropomyosin 1 (3), PTEN (2,4,5), programmed cell death 4 (Pdcd4) (6,7), the TAp63 isoform of the p53 family, and LRRFIP1, an inhibitor of NF-κB signaling (8). Similarly, miR-21 is one of the most highly and consistently up-regulated miRNA during cardiac hypertrophy (9-12). Thum et al. (13) show that miR-21 is predominantly up-regulated in myofibroblasts, where it targets sprouty1 and enhances their survival and, thereby, fibrosis in the heart. Similarly, Roy et al. (14) show that miR-21 is elevated in the myofibroblast-infiltrated area 7 days after ischemia/reperfusion and suppresses metalloprotease-2 via targeting PTEN. More recently, studies have shown that miR-21 exerts an antiapoptotic function in cardiac myocytes via inhibiting PDCD4 (15) and reduces infarct size via local viral delivery to the heart (16). However, the signaling pathway that regulates miR-21 has not been identified.
Two of the molecules that play a major role in ischemic injury of the heart include PTEN and FasL. PTEN is a major negative regulator of AKT (17) whose activity is modulated by its abundance, oxidation, or phosphorylation (18). It is also targeted by miR-21, which provides a specific post-transcriptional mechanism for regulating its expression (2,4,5). PTEN has been regarded as the Achilles' heel of the myocardium (19), whose knockdown reduces infarction following myocardial ischemia (20). On the other hand, FasL is the main activator of the extrinsic apoptotic pathway in cells and in the heart during failure (21) and ischemia (22) and is responsible for ~64% of the apoptosis contributing to myocardial infarction (22). Although both PTEN and FasL play essential roles in myocyte apoptosis, the mechanism of their regulation remains largely unknown. In this report, we show for the first time that both PTEN and FasL are negatively regulated by an AKT-miR-21-dependent pathway that is deactivated during ischemia and is rescued by overexpression of miR-21.
DNA Constructs Cloned into Recombinant Adenovirus-A 320-bp sequence encompassing the stem-loop of miR-21 was amplified from mouse genomic DNA by PCR using the following primers: 5′-CCTGCCTGAGCACCTCGTGC-3′ and 5′-GACTGTGACGACTACCCCAA-3′, and cloned into recombinant adenovirus downstream of a cytomegalovirus (CMV) promoter (23). For a control, a scrambled sequence (5′-GAACCGAGCCCACCAGCGAGC-3′) replaced the mature miRNA sequence within its stem-loop structure (23); this served as the control in all experiments. The miR-21 eraser was synthesized in the form of a tandem repeat antisense sequence of mature miR-21 terminating in (T)6. This construct was cloned into recombinant adenovirus downstream of a U6 promoter. Human PTEN cDNA (accession number NM_000314), purchased from Origene, and short hairpin RNA targeting PTEN, synthesized in the form of a hairpin-forming oligonucleotide corresponding to bases 1471-1491 of Mus musculus PTEN (accession number NM_008960), were cloned into recombinant adenovirus downstream of a CMV and a U6 promoter, respectively. Akt1/PKBα cDNA (activated) and dominant-negative, kinase-defective dnAkt1 cDNA, in pUSEamp from Upstate (Millipore), were cloned into recombinant adenovirus downstream of a CMV promoter.
Construction of the miR-21 Transgenic Mouse-The miR-21 transgene was constructed using the miR-21 DNA described above, cloned downstream of the α-myosin heavy chain promoter and upstream of an SV40 polyadenylation signal. The transgenic mouse was generated at the Transgenic Core Service, University of Medicine and Dentistry of New Jersey by Dr. Gassan Yehia. All required animal protocols were approved by the Institutional Animal Care and Use Committee at the New Jersey Medical School.
Culturing Cardiac Myocytes and Treatments-Cardiac myocytes were prepared as previously described (24). Briefly, hearts were isolated from 1- to 2-day-old Sprague-Dawley rats. After dissociation, the cardiac myocytes were differentially separated by Percoll gradient centrifugation followed by a differential preplating step for further separation of non-cardiac cells. Myocytes were then plated in Dulbecco's modified essential medium/Ham's F-12 (1:1) supplemented with 10% fetal bovine serum. Twenty-four hours after plating, the medium was changed to serum-free, and the cells were infected with recombinant adenoviruses at a multiplicity of infection of 10-20 particles/cell. Myofibroblasts that were separated from the cardiac myocytes were isolated from the same Percoll gradient and cultured in Dulbecco's modified essential medium/Ham's F-12 (1:1) supplemented with 10% fetal bovine serum.
The SW-480 colon cancer cell line was purchased from ATCC (catalogue number CCL-228) and propagated in Leibovitz's L-15 medium supplemented with 2.0 mM L-glutamine and 10% fetal bovine serum, in a CO2-free incubator. The medium was changed to Dulbecco's modified essential medium/Ham's F-12 (1:1) just before transfer to a hypoxic chamber. An MCF-7 breast adenocarcinoma cell line was purchased from ATCC (catalogue number HTB-22) and propagated in Eagle's minimum essential medium supplemented with 10% fetal bovine serum and 0.1 mg/ml bovine insulin, as recommended by the manufacturer.
Construction of Adenoviruses-Recombinant adenoviruses were constructed, propagated, and titered, as previously described by Dr. Frank Graham (25). Briefly, pBHGloxΔE1,3Cre (Microbix), including the ΔE1 adenoviral genome, was co-transfected with the pDC shuttle vector containing the gene of interest into 293 cells using Lipofectamine (Invitrogen). Through homologous recombination, the test genes integrate into the E1-deleted adenoviral genome. The viruses were propagated on 293 cells and purified using CsCl banding followed by dialysis against 20 mM Tris-buffered saline with 2% glycerol. Titering was performed on 293 cells overlaid with Dulbecco's modified Eagle's medium plus 5% equine serum and 0.5% agarose.
Northern Blotting-This was carried out as described previously (23).
Immunocytochemistry-Myocytes were plated on fibronectin-coated glass slides for 24 h before infecting them with adenovirus. After the desired period of time, slides were fixed in 3% paraformaldehyde plus 0.3% Triton X-100 in CB buffer (10 mM PIPES, 150 mM NaCl, 5 mM EGTA, 5 mM MgCl2, 5 mM glucose, pH 6.1) for 5 min at 25°C, followed by 3% paraformaldehyde in CB buffer for 20 min at 25°C. The cells were then immunolabeled with anti-FasL (Santa Cruz Biotechnology, catalogue number sc-834) and anti-MHC (MF-20, Developmental Studies Hybridoma Bank, University of Iowa, Iowa City, IA) in Tris-buffered saline with 1% bovine serum albumin. Slides were mounted using Prolong Gold anti-fade plus 4′,6-diamidino-2-phenylindole (Invitrogen).
Luciferase Assay-A concatemer of the miR-21-predicted target sequence within the FasL 3′-UTR (GGCCCATTTGACTGACTGATAAGCTA) ×3 and a mutant lacking the complementarity with the miR-21 seed sequence (GGAGACCCACATTGCGACTATA) ×3 were cloned downstream of the luciferase gene driven by a CMV promoter, generating the Luc.FasL and Luc.Mut vectors, respectively. Cultured neonatal myocytes were transfected with these constructs using Lipofectamine (Invitrogen) in conjunction with plasmids expressing miR-21 (CMV.miR-21) or a nonsense stem-loop. The cells were harvested after 24 h, and luciferase activity was assayed using an Lmax multiwell luminometer (Molecular Devices).
Exposing Myocytes to Hypoxia-To mimic ischemic conditions, cells in culture medium lacking serum were subjected to hypoxia in a "sealed hypoxia chamber" (Billups-Rothenberg Inc.). The chamber was filled with a gas mixture of 95% nitrogen and 4.8% ± 0.2% CO2 (Inhalation Therapy) at 7 p.s.i./12,000 kPa filling pressure for 15 min and sealed. The chamber was then placed in a 37°C incubator for the required period.
Monitoring Mitochondrial Membrane Potential-Mitochondrial membrane potential was monitored using the JC-1 cationic dye (Molecular Probes, Invitrogen), as recommended by the manufacturer. Briefly, the cells were incubated with JC-1 (0.35 µg/ml) for 20 min at 37°C. They were then washed with 1× phosphate-buffered saline and imaged live.
Myocardial Ischemia and Ischemia/Reperfusion-Mice 14-16 weeks of age were anesthetized by an intraperitoneal injection of pentobarbital sodium (60 mg/kg). Under mechanical ventilation via tracheal intubation and through a left third intercostal thoracotomy, the pericardial sac was opened and an 8-0 nylon suture was passed under the left anterior descending coronary artery, 2-3 mm from the tip of the left auricle. Non-traumatic silicone tubing was then placed on top of the vessel, and a knot was tied on top of the tubing to occlude the left anterior descending coronary artery. At this point, if permanent occlusion was the objective, the chest cage was closed in layers and the pneumothorax reduced. For ischemia/reperfusion, on the other hand, the left anterior descending coronary artery was occluded for 45 min only, after which the knot was released for a 16-h reperfusion period. The loosened ligature was left in place for subsequent occlusion prior to infusion of Evans blue dye, which was used to assess the area at risk at the conclusion of the reperfusion period. The coronary occlusion and reperfusion were verified by visual inspection and by electrocardiograph monitoring. Throughout the procedure, a 36.8-37°C rectal temperature was maintained.
Triphenyltetrazolium Chloride Staining-At the conclusion of the ischemia/reperfusion protocol, the animals were anesthetized, the chest cage was opened, and the coronary artery was re-occluded at the same site, using the ligature that had been loosened in situ. Evans blue dye (0.5%) was perfused for a few minutes into the coronary artery above the site of ligation before the hearts were excised and sliced into 1-mm-thick cross-sections. The sections were fixed with formaldehyde and stained with a triphenyltetrazolium chloride (1%) solution for 15 min at 37°C. Both sides of each slice were photographed using a camera-mounted dissecting microscope. ImageJ software was then used to assess the percent infarct area relative to the area at risk.
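For reference, the two quantities reported from this staining reduce to simple area ratios; the sketch below is illustrative only (the study used ImageJ for the area measurements), and the area values are placeholder numbers, not data from the sections.

```python
# Compute the area at risk as a percent of the left ventricle, and the
# infarct area as a percent of the area at risk, from planimetered areas.

def infarct_metrics(lv_area, area_at_risk, infarct_area):
    """All areas in the same units (e.g., pixels from a section image)."""
    aar_percent = 100.0 * area_at_risk / lv_area
    infarct_percent_of_aar = 100.0 * infarct_area / area_at_risk
    return aar_percent, infarct_percent_of_aar

print(infarct_metrics(lv_area=12000, area_at_risk=5400, infarct_area=2100))
# -> (45.0, ~38.9)
```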
Echocardiography-Mice were anesthetized with 2.5% avertin (0.010-0.015 ml/g body weight) administered by intraperitoneal injection. Transthoracic echocardiography (Sequoia C256, Acuson, Mountain View, CA) was performed using a 13-MHz linear ultrasound transducer. The chest was shaved. Mice were placed on a warm saline bag in a shallow left lateral position, and warm coupling gel was applied to the chest. Electrocardiographic leads were attached to each limb using needle electrodes. Two-dimensional images and M-mode tracings (sweep speed = 100-200 mm/s) were recorded from the parasternal short-axis view at the mid papillary muscle level. Care was taken not to apply too much pressure to the chest wall. The images were recorded on videotape, and freeze frames were printed on a Sony color printer and scanned into a PC using Photoshop (Adobe). The images were then analyzed by using the NIH ImageJ program. M-mode measurements of LV internal diameter (LVID) and wall thickness were made from three consecutive beats and averaged using the leading edge-to-leading edge convention adopted by the American Society of Echocardiography. End-diastolic measurements were taken at the time of the apparent maximal LV diastolic dimension. End-systolic measurements were made at the time of the most anterior systolic excursion of the posterior wall. LVEF was calculated by the cubed method as follows: LVEF = (LVIDd³ − LVIDs³)/LVIDd³, where d indicates diastolic and s indicates systolic. LV percent fractional shortening (LVFS) was calculated as LVFS (%) = [(LVEDD − LVESD)/LVEDD] × 100. Heart rate was calculated from the period between two consecutive electrocardiogram tracings.
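The two indices defined above can be computed directly from the M-mode diameters; the short Python sketch below is our own illustration (the original analysis was done with ImageJ and manual calculation), and the diameters are placeholder values, not measurements.

```python
# Cubed-method ejection fraction and fractional shortening from
# M-mode left-ventricular internal diameters (same length unit for both).

def lv_ejection_fraction(lvid_d: float, lvid_s: float) -> float:
    """LVEF (%) by the cubed method."""
    return 100.0 * (lvid_d ** 3 - lvid_s ** 3) / lvid_d ** 3

def lv_fractional_shortening(lvedd: float, lvesd: float) -> float:
    """LVFS (%)."""
    return 100.0 * (lvedd - lvesd) / lvedd

print(lv_ejection_fraction(4.0, 2.5))       # ~75.6 %
print(lv_fractional_shortening(4.0, 2.5))   # 37.5 %
```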
Hemodynamic Measurements-Mice were anesthetized as described above, and a 1.4-French (Millar Instruments) micromanometer-tipped catheter was inserted through the right carotid artery into the aorta and then into the LV, where pressures, dp/dt, and −dp/dt were recorded.
Statistics-For comparing two experimental groups, an unpaired, two-tailed Student's t test (Excel software) was used for calculating the probability values. p < 0.05 was considered significant.
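As a hedged illustration of this comparison (the authors used Excel), the same unpaired, two-tailed test can be run in Python; the group values below are placeholders, not experimental data.

```python
# Unpaired, two-tailed Student's t test between two groups.
from scipy.stats import ttest_ind

control = [1.00, 0.95, 1.08, 0.97]
treated = [0.62, 0.55, 0.70, 0.58]
t_stat, p_value = ttest_ind(control, treated)   # two-sided by default
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```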
RESULTS
Hypoxia Induces Down-regulation of miR-21-Because miR-21 has been reported to target a plethora of antiapoptotic genes, we decided to examine its expression levels in cardiac myocytes exposed to hypoxia and in the ischemic myocardium. Fig. 1a shows that miR-21 declined by 40 ± 7% within 1 h and by 75 ± 8% within 12-24 h of exposure to hypoxia. Meanwhile, there was no significant change in miR-1 levels (Fig. 1a). Similarly, within 6 h of coronary artery occlusion in mice, miR-21 was reduced by 40% in the ischemic region (Fig. 1b). In contrast, though, it increased by 1.8-fold in the peri-infarct zone, as expected in myocytes undergoing hypertrophy. At this early stage of ischemia there was no significant inflammatory cell infiltration or myofibroblast proliferation detected in the ischemic region. We also examined the response of other cell types to hypoxia. Interestingly, colon cancer (SW480) and breast cancer (MCF-7) cell lines exhibited a slight increase in miR-21 after 48 h of hypoxia (Fig. 1c). However, fibroblasts isolated from the neonatal rat heart (myofibroblasts) exhibited a reduction in miR-21 within the same time frame. These differences between cell types may reflect a contrast in their tolerance to hypoxia, cancer cells being the most resilient. The results also suggest that miR-21-targeted genes may play a role in a normal cell's response to hypoxia.

FIGURE 1. Down-regulation of miR-21 in cardiac myocytes exposed to hypoxia. a, myocytes were exposed to hypoxia for increasing intervals, as indicated. The treatments were staggered to allow synchronized extraction of total RNA and Northern blot analysis. The autoradiogram signals for miR-21 were quantified and normalized to that of U6 (n = 4). The numbers were averaged and plotted as values relative to the control adjusted to 1. Error bars represent ±S.E. and *, p < 0.01 versus normoxia (0 time point). b, 16-week-old C57bl/6 mice were subjected to coronary artery occlusion (CAO) for the intervals indicated. For each time point, cardiac tissue from the ischemic and remote regions, and a sham-operated heart, were dissected, and total RNA was extracted and used for Northern blot analysis. The autoradiogram signals for miR-21 were quantified and normalized to that of U6 (n = 4). The numbers were averaged and plotted as values relative to the sham adjusted to 1. Error bars represent ±S.E. and *, p < 0.05 versus sham. c, myofibroblasts, MCF-7, and SW480 cells were exposed to hypoxia for the indicated periods. RNA was extracted and analyzed by Northern blotting. The numbers below the autoradiogram are the relative levels of miR-21/U6 signal (n = 2).

miR-21 Targets FasL and PTEN during Hypoxia-PTEN is a validated miR-21 target (2,4,5). Using TargetScan v5.1 and Microcosm target prediction software, we identified FasL as another miR-21-predicted target, which has not been previously experimentally validated. Fig. 2a shows the alignment between miR-21 and the FasL 3′-UTR of both the mouse and human genes. Inclusion of this sequence within the 3′-UTR of a luciferase gene rendered it responsive to hypoxia and to overexpression of miR-21, which induced an increase or a decrease in its expression, respectively, as demonstrated in Fig. 2b. However, mutations within the seed sequence's binding site abrogated these responses. This suggests that miR-21 directly targets and inhibits FasL.
We also tested the effect of miR-21 on the expression of the endogenous protein. Interestingly, overexpression of miR-21 did not inhibit basal levels of either PTEN or FasL in cardiac myocytes but did completely suppress their up-regulation after exposure to hypoxia (Fig. 2c). In contrast, knockdown of miR-21 by an antisense construct (miR-21-eraser) induced up-regulation of both genes. This suggests that miR-21 is saturating during normoxia but limiting/reduced during hypoxia, relative to its target mRNAs. This is in contrast to the results observed with the luciferase reporter assay in Fig. 2b, where the cells were supplemented with both an exogenous target (luciferase reporter) and miR-21. There have been contradictory reports regarding the localization of FasL to cardiac myocytes (26-28). Also, because primary myocyte cultures usually contain <10% of non-myocytes, the results of Western blot analysis could be ambiguous. Thus, to address this issue we co-immunostained myocytes for FasL (green) and α-myosin heavy chain (αMHC, red). The results show that FasL is expressed during normoxia, becomes more abundant during hypoxia, and is strictly localized to the junctions between myocytes (Fig. 2d). Overexpression of miR-21 produced a dramatic reduction in junctional FasL, in addition to its dispersion to internal vesicles. Notably, non-cardiac cells (Fig. 2d, marked by an asterisk) had little detectable FasL.
AKT Regulates miR-21, PTEN, and FasL Expression-AKT is an established anti-apoptotic gene that has been implicated in the negative regulation of FasL (29,30). Therefore, we questioned whether AKT regulates miR-21 or its targets in cardiac myocytes. To address this, we overexpressed constitutively active AKT (caAKT) in the myocytes before exposure to hypoxia. Similar to overexpression of miR-21, this resulted in an effective down-regulation of junctional FasL and its dispersion to internal vesicles (Fig. 3a). In contrast, overexpression of PTEN, a negative regulator of AKT, induced up-regulation of FasL during normoxia, equivalent to its induction by hypoxia (Fig. 3, a and b). In consensus, knockdown of PTEN (Ad.siPTEN) resulted in up-regulation of phospho-AKT (Fig. 3c) and inhibited hypoxia-induced FasL (Fig. 3a). To determine whether the effect of PTEN was mediated through inhibition of AKT, we delivered it to the cells in the presence of caAKT. The results show that the effect of caAKT was dominant and counteracted that of PTEN (Fig. 3, a and b). Surprisingly, the results also revealed that caAKT inhibited the expression of the exogenously delivered PTEN (Fig. 3b). This suggested that AKT inhibition of both FasL and PTEN might be mediated through up-regulation of miR-21. Indeed, knockdown of miR-21 resulted in reversal of caAKT inhibition of FasL expression (Fig. 3a). Moreover, Northern blot analysis confirmed that caAKT induced up-regulation of miR-21 and rescued its down-regulation during hypoxia, which was abrogated upon pretreatment of the cells with Ad.miR-21-eraser (antisense miR-21) (Fig. 3, d and f). This suggests that caAKT suppresses FasL and PTEN during hypoxia through an miR-21-dependent mechanism.

FIGURE 2. miR-21 targets FasL. a, an alignment between miR-21 and the 3′-UTR of the human (hsa) and mouse (mmu) FasL genes. The "|" represents a Watson-Crick base pair, and the ":" represents a wobble base pair. b, the miR-21-targeted site in the mouse FasL 3′-UTR (Luc-FasL) and a nonsense mutant (Luc-FasL-mt) were separately cloned into the 3′-UTR of the luciferase gene downstream of a cytomegalovirus promoter, as indicated. These constructs were virally delivered to myocytes in the presence or absence of a control, or Ad.miR-21, virus where indicated by the plus sign, for 24 h (n = 6). Cells were then exposed to hypoxia or normoxia for an additional 24 h, as indicated, before total protein was extracted and analyzed for luciferase activity. The results were averaged and plotted as luciferase activity/µg of protein. Error bars represent ±S.D. and *, p < 0.05 versus control normoxia; **, p < 0.01 versus control hypoxia. c, myocytes were treated with Ad.miR-21, Ad.miR-21 eraser, or a control virus for 24 h before exposing them to hypoxia for an additional 24 h, as indicated with the plus sign. Total cellular protein was extracted and analyzed by Western blotting for the molecules listed on the left of each panel (n = 6). d, myocytes were treated with Ad.miR-21, Ad.miR-21-eraser, or a control virus, in the presence or absence of 24 h hypoxia, as indicated. Cells were fixed and immunostained with anti-FasL (green) and anti-αMHC (red), and nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI, blue) (n = 3). The asterisk marks non-cardiac, αMHC-negative cells. Arrowheads mark FasL-positive vesicles.
The AKT pathway is commonly regarded in terms of its activation but rarely its inhibition. However, we speculated that if activated AKT inhibits hypoxia-induced FasL and PTEN and increases miR-21, it is likely that inhibition of endogenous AKT would be responsible for the reverse effects seen during hypoxia. Although determining the effect of knockdown of endogenous AKT by short silencing RNA on miR-21 and its targets was complicated by the rapid onset of cell death, its inhibition by overexpression of a dominant negative mutant (Ad.dnAKT), PTEN, or treatment of the cells with wortmannin induced up-regulation of junctional FasL (Fig. 3a) and down-regulation of miR-21 (Fig. 3, e and f). This indicated that inhibition of AKT is sufficient for inducing down-regulation of miR-21 and may, thus, be the mediating signal during hypoxia. Fig. 3g shows that exposure of myocytes to hypoxia results in a biphasic effect on phospho-AKT and miR-21 levels. During short intervals (15 min) of hypoxia, AKT phosphorylation is enhanced and is associated with an increase in PTEN phosphorylation that deactivates PTEN and an increase in miR-21. However, longer periods of hypoxia induced a reduction in phospho-AKT, associated with an increase in total PTEN and a decrease in its phosphorylation, and reduced miR-21 levels. The data suggest that long-term hypoxia induces inhibition of AKT, which mediates the down-regulation of miR-21 and up-regulation of FasL and PTEN. This may be initiated by dephosphorylation and activation of PTEN, which in turn deactivates the AKT pathway. The data also suggest that down-regulation of miR-21 during hypoxia is mediated through inhibition of AKT.

FIGURE 3. AKT regulates FasL through a miR-21-dependent mechanism. a, myocytes were treated with Ad.caAKT, Ad.miR-21-eraser, Ad.PTEN, Ad.siPTEN, Ad.dnAKT, wortmannin, hypoxia, or Ad.control separately or in various combinations, for 24 h, as indicated. Cells were then fixed and immunostained with anti-FasL (green) and anti-αMHC (red), and nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI, blue) (n = 3). The asterisk marks non-cardiac, αMHC-negative cells. Arrowheads mark FasL-positive vesicles. b, myocytes were infected with Ad.PTEN, Ad.caAKT, or a control virus, separately or in combination for 24 h, as indicated by the plus sign. Total protein was extracted and analyzed by Western blotting for the molecules listed on the left of each panel (n = 2). c, myocytes were treated with an adenovirus expressing short hairpin RNA targeting PTEN (Ad.siPTEN) or a control virus for 48 h before total protein was extracted and analyzed by Western blotting for the molecules listed on the left of each panel (n = 2). d, myocytes were treated with Ad.caAKT or a control virus for 24 h, followed by exposure to hypoxia or normoxia for an additional 24-h period, as indicated by plus signs. In parallel, similar treatments were applied to cells that were preincubated with Ad.miR-21-eraser for 24 h. Total RNA was then extracted and analyzed by Northern blotting for the molecules listed on the left of each panel (n = 3). e, myocytes were treated with 5 µM wortmannin, Ad.dnAKT, or a control virus for 24 h. Total RNA was then extracted and analyzed by Northern blotting for the molecules listed on the left of each panel (n = 2). f, the autoradiogram signals of miR-21 in d and e were quantified and normalized to that of U6. Results were averaged and graphed as -fold change in miR-21 relative to the normoxia control signal. Error bars represent ±S.D. and *, p < 0.01 versus control normoxia. g, myocytes were exposed to hypoxia for the indicated intervals. Protein and RNA were extracted from parallel sets and analyzed by Western blotting and Northern blotting (lower two panels) for the molecules listed on the left of each panel (n = 2).
miR-21 Enhances AKT Phosphorylation-Because miR-21 suppresses the expression of PTEN, it would, accordingly, be expected to enhance AKT phosphorylation. Thus, to confirm its inhibition of PTEN and also determine if this effect has any functional significance, we tested the consequence of overexpressing miR-21 on AKT phosphorylation. Under quiescent conditions, miR-21 had no influence on the phosphorylation levels of AKT, which reconciles with its lack of inhibition of PTEN under the same conditions (see Fig. 2c). In contrast, when it was added before stimulating the cells with an activator of the AKT pathway, such as insulin, we observed a 5-fold increase in phospho-threonine 308 and a 20-fold increase in phospho-serine 473 following 1 h of insulin treatment (Fig. 4, a and d). In contrast, the miR-21 eraser inhibited AKT phosphorylation by 40% (Fig. 4, b and d). Insulin was selected here for two reasons: first, it is a robust activator of AKT and, second, insulin resistance has been attributed to an increase in PTEN (31)(32)(33)(34)(35)(36). As seen in Fig. 4a, insulin stimulation was accompanied by a 1.9-fold increase in PTEN that was completely suppressed by overexpression of miR-21. To determine if the effect of miR-21 was indeed mediated through suppression of PTEN, we supplied the cells with miR-21 in the presence of exogenous PTEN. This treatment resulted in complete abrogation of miR-21-enhanced AKT phosphorylation (Fig. 4, c and d). Thus, by suppressing PTEN, miR-21 enhances AKT activity, which, in turn, induces up-regulation of miR-21 (Fig. 3c), creating a positive feedback loop.

miR-21 Is Necessary for AKT-mediated Inhibition of Apoptosis-Because miR-21 is required for AKT-mediated inhibition of FasL during hypoxia, we questioned whether it is necessary or sufficient for mediating the antiapoptotic effects of this kinase. To address this we treated myocytes with caAKT in the presence or absence of miR-21-eraser, or independently with miR-21, before exposing them to hypoxia. After 24 h, live cells were loaded with JC-1 dye and immediately imaged. Fig. 5a shows that the JC-1 dye predominantly aggregates in the healthy mitochondria and emits a red fluorescence, whereas the excess monomeric form remains in the cytosol, emitting green fluorescence. During hypoxia the mitochondrial outer membrane is damaged, and this is reflected by a loss in the red JC-1 aggregates. Both overexpression of caAKT and miR-21 prevented mitochondrial damage, by 80 ± 20% and 50 ± 13%, respectively, whereas knockdown of miR-21 partly reversed the effect of caAKT (35 ± 15%, Fig. 5, a and b). Interestingly, though, neither miR-21-eraser nor dnAKT was sufficient for inducing mitochondrial damage when applied during normoxia, despite the fact that they were both sufficient for inducing up-regulation of FasL.
Caspase-8 is a direct downstream mediator of the Fas-induced death signal; thus, to confirm the antiapoptotic effects of miR-21 and its contribution to AKT function, we measured the activity of this caspase. Interestingly, unlike its effect on mitochondrial damage, miR-21 completely inhibited hypoxia-induced caspase-8 activity, whereas the miR-21-eraser completely abolished its inhibition by AKT (Fig. 5c). This indicated that probably both the intrinsic and extrinsic apoptosis pathways are involved in mitochondrial injury during hypoxia, where miR-21 has a major role in inhibiting the latter only, through suppression of FasL. On the other hand, AKT, through suppression of FasL, as well as BAD and Bax, has the potential to inhibit both pathways. Hence, miR-21 partly mediates the antiapoptotic effects of AKT.

FIGURE 4 legend (partial). (n = 3). b, myocytes were treated with Ad.miR-21 eraser or a control virus for 24 h before stimulation with insulin for 1 h, as indicated by the plus signs. Total protein was extracted and analyzed by Western blotting for the molecules listed on the left of each panel (n = 3). c, myocytes were treated with Ad.PTEN, Ad.miR-21, or a control virus, separately or in combination, for 24 h before treatment with insulin for 1 h, as indicated by the plus signs. Total protein was extracted and analyzed by Western blotting for the molecules listed on the left of each panel (n = 3). d, the signal for phospho-threonine 308 and phospho-serine 473 in each of the panels was quantified and normalized to total AKT. The results were averaged, and the -fold change of insulin-stimulated cells in the presence of miR-21, miR-21 eraser, or PTEN versus insulin-stimulated cells in the presence of the control virus was graphed on a log scale. Error bars represent ±S.D. and *, p < 0.001 versus insulin-stimulated cells in the presence of the control virus.
Overexpression of miR-21 Retards Ischemic Damage and Heart Failure-To investigate the role of miR-21 in the heart in vivo, we generated a cardiac-specific transgenic mouse model overexpressing miR-21 (miR-21-TG). Two lines were obtained, with ~10- and 20-fold overexpression of the transgene relative to the wild-type littermates (Fig. 6a). These animals show no overt phenotype, and all physical and functional parameters of the heart are comparable to those of the wild-type mice before or after short-term transverse aortic constriction. To examine whether miR-21 could exert its antiapoptotic effect in vivo in the adult myocytes, we subjected the mice to 45 min of ischemia followed by 16 h of reperfusion (I/R). Fig. 6b shows the results of Evans blue and triphenyltetrazolium chloride staining in cardiac sections from the miR-21-Tg and wild-type hearts after I/R. The infarct zone/area at risk was significantly smaller in the transgenic model (Fig. 6c). This is consistent with suppression of junctional FasL and, thus, cell death by the overexpressed miR-21.
We also investigated the effect of the transgene on long term ischemic injury in the heart. After 4 weeks of left coronary artery occlusion, cardiac function and physical parameters were assessed by echocardiography and hemodynamic measurements. Heart sections revealed a less dilated left ventricular chamber in the miR-21-Tg versus the wild-type mice (Fig. 6d) that was confirmed by echocardiography measurements of the left ventricular end diastolic and end systolic dimensions (Fig. 6f, LVEDD and LVESD). This, in addition to lower left ventricular end diastolic and systolic pressure (Fig. 6f, LVEDP and LVSP), and higher ejection fraction (EF(%)) in the miR-21-TG, demonstrates reduced signs of cardiac failure. In agreement, there was no detectable lung congestion or increase in lung weight in the transgenic mice (Fig. 6e). Moreover, there was less collagen deposition, which is a reflection of reduced myocyte death, inflammatory cell infiltration, and myofibroblast proliferation (Fig. 6g), in addition to inhibition of ischemia-induced up-regulation of PTEN, FasL, and caspase-6, and down-regulation of phospho-AKT, in the left ventricle of the miR-21-TG (Fig. 6h). We also confirmed that FasL was localized to the intercalated discs in the ischemic heart, which was undetectable in the transgenic model (Fig. 6i). Thus, the results demonstrate that the miR-21 transgene effectively inhibits PTEN and FasL during myocardial ischemia and ameliorates signs of cardiac failure.
DISCUSSION
miR-21 is one of the most commonly and highly up-regulated miRNA in cancer and cardiovascular diseases and, thus, one of the most studied. A preponderance of reports shows that it targets antiapoptotic and tumor suppressor genes and is involved in promoting cardiac myocyte and cancer cell survival. Its antiapoptotic effect has been predominantly attributed to inhibition of PTEN and PDCD4 expression (2, 4 -7). Whereas up-regulation of miR-21 in various pathological conditions has been the prevailing finding, we discovered herein that miR-21 is downregulated in the ischemic heart, consistent with the results of a recent report (16). We observed a similar down-regulation in isolated cardiac myocytes and myofibroblasts after prolonged hypoxia, albeit at different rates. On the other hand, brief periods of hypoxia (15 min) induced up-regulation of miR-21 in myocytes, in concordance with its increase seen during ischemia preconditioning (37). In contrast, MCF-7 and SW-480 cancer cell lines exhibited a slight increase in miR-21 that did not decline for up to 48 h of hypoxia. This indicated that the pathway that induced down-regulation of miR-21 in myocytes is refractory to hypoxia in cancer cells. We found this to be an AKT-dependent pathway, whose inhibition during hypoxia may be initiated by dephosphorylation and activation of PTEN. This explains why cancer cells that are frequently deficient in PTEN might be insensitive to hypoxia-induced inhibition of AKT.
Thus, in this report we have identified AKT as a positive upstream regulator of miR-21, experimentally validated that FasL is a direct target of miR-21, and showed that AKT regulates PTEN and FasL expression in cardiac myocytes. We also show that this pathway is deactivated during ischemia or hypoxia, which is required for down-regulation of miR-21 and, thereby, up-regulation of its targets (Fig. 7). Indeed, there are several notable parallelisms between previously known functions of AKT and miR-21 that support this finding. For example, similar to miR-21, AKT activity is up-regulated in many cancers and cardiac hypertrophy and confers a survival function unto the cells. More significantly, AKT has been implicated in the negative regulation of FasL in smooth muscle (30) and cancer cells (38), and reduction of PDCD4 expression in cancer cells (39,40), both of which are now validated targets of miR-21. Thus, AKT-induced up-regulation of miR-21 provides a mechanism whereby AKT inhibits these molecules. As evidence, we show that AKT-mediated suppression of FasL during hypoxia in cardiac myocytes was reversed upon knockdown of miR-21 (Fig. 3a), as was its effect on caspase-8 activity (Fig. 5c). However, its effect on mitochondrial integrity was only partly reversed (Fig. 5a). This suggests that up-regulation of miR-21 and suppression of FasL only partly contribute to the antiapoptotic effects of AKT. Our recently published data support this conclusion, because it shows that, in addition to up-regulation of miR-21, AKT induces down-regulation of miR-199a and up-regulation of hypoxia-inducible factor-1α (Hif-1α) and Sirt-1 (41). It is important to note that during short-term hypoxia Hif-1α exerts protective effects, whereas during prolonged hypoxia it is involved in a proapoptotic function that involves stabilization of nuclear p53, which in turn induces an increase in FasL mRNA (42). We have also previously shown that this could be suppressed by overexpression of miR-199a, which inhibits the expression of Hif-1α and, thus, p53 (43).
PTEN is an established negative regulator of AKT, but a reciprocal effect has not been shown. Upon overexpression of caAKT we found that it not only inhibited FasL but also suppressed PTEN. This is consistent with AKT inducing up-regulation of miR-21, which in turn inhibits PTEN. Conversely, PTEN may inhibit miR-21, via inhibiting AKT. This reciprocal relation between a miRNA and its target has been previously described for several miRNA target pairs, including miR-200 family and ZEB1 (44), miR-9 and REST (45), miR-145 and OCT4 (46), and let-7 and lin-28 (47), where the miRNA target is itself a negative regulator of the miRNA, creating a double negative feedback loop that could potentially augment their effects. In our transgenic model, overexpression of miR-21 had no impact on physical or functional parameters of the heart during normal conditions. Similarly, overexpression of miR-21 in cultured myocytes had no effect on basal levels of its targets, PTEN or FasL, during resting conditions. This indicated that its levels are saturating relative to its targets under these settings. On the other hand, its function was uncovered following its down-regulation by an antisense construct or during ischemia, which proved necessary for up-regulation of its targets (Figs. 2c and 6 (b-f)). Thus, the question remains regarding its role during cardiac hypertrophy or in cancer, in which it is highly up-regulated. One explanation is that it is an adaptive mechanism against the gradual development of ischemia as mass outgrows vascular supply during enhanced tissue growth. Alternatively, the increase in miR-21 parallels the increase in its target mRNAs, as a consequence of enhanced global transcription associated with induction of growth. Another possibility is that miR-21 might be targeting other genes under those conditions. Indeed, Thum et al. reported that miR-21 is predominantly up-regulated in the myofibroblast during cardiac hypertrophy or failure, where it promotes cell survival and fibrosis through inhibition of sprouty1 (13). Consequently, knockdown of miR-21 induced myofibroblast apoptosis and reduced fibrosis during cardiac failure. On the other hand, our data show that overexpression of miR-21 could reduce fibrosis during ischemic heart disease or failure by decreasing myocyte cell death and, thus, inflammatory cell infiltration, and fibroblast proliferation.
FIGURE 6. miR-21 protects the heart against ischemic injury. a, a transgenic mouse model was generated with a 320-bp sequence encompassing the stem-loop of mouse miR-21 downstream of the α-myosin heavy chain promoter (αMHC). Two lines were obtained and diagnosed by Northern blot analysis for miR-21 expression in the hearts of 10-week-old mice. b, miR-21-Tg and wild-type littermate mice were subjected to 45-min ischemia followed by 16-h reperfusion. Hearts were then perfused with Evans blue dye, fixed, sectioned, and stained with triphenyltetrazolium chloride. Shown are images from both sides of a representative section from each of the mice. c, the % area at risk (right) and % infarct zone/area at risk (left) were measured, averaged, and plotted (n = 6). Error bars represent ±S.E. and *, p < 0.05 versus wild type. d, 20-week-old transgenic and wild-type littermates were subjected to complete left coronary artery occlusion (CAO) or a sham operation for 4 weeks. The hearts were then isolated and sectioned to reveal the extent of left ventricular chamber dilatation (n = 6, each). e, lung weight and heart weight/tibial length (HW/TL) were calculated and graphed. Error bars represent ±S.D. and *, p < 0.01 versus sham. f, before sacrifice, cardiac wall and chamber dimensions, and functions, were assessed by echocardiography and hemodynamic measurements. *, p < 0.05 versus sham of matching genotype; #, p < 0.05 versus WT-CAO. g, hearts were fixed in formaldehyde, sectioned, and stained with Sirius Red for detection of collagen (red). The collagen was quantified (n = 3, 3 sections each) and graphed as -fold increase in CAO hearts versus sham-operated ones. Error bars represent ±S.D. and *, p < 0.001 versus sham; #, p < 0.001 versus wild type-CAO. h, total protein was extracted from similarly treated mice groups and analyzed by Western blotting for the molecules listed on the left of each panel. i, 20-week-old miR-21-TG and wild-type littermates were subjected to CAO for 16 h. The hearts were isolated and fixed in formaldehyde and immunostained for FasL (purple).

In conclusion, we have outlined a unique aspect of the AKT survival pathway: one in which it regulates the extrinsic apoptotic pathway in cardiac myocytes via miR-21-dependent suppression of FasL. Thus, in general, the discovery of miRNA, and their functions, is introducing a new dimension to our existing knowledge of signaling molecules and pathways that remains to be explored and exploited for more precise therapeutic targeting.
"Biology"
] |
Real-Time Evaluation of Compaction Quality by Using Artificial Neural Networks
The primary goal of this study is to find an easy and convenient way to estimate the degree of compaction in real time for compaction quality control. In this paper, an artificial neural network classifier is developed to identify the different characteristic patterns of drum vibration and classify them according to the different compaction levels. At first, a field compaction experiment is designed and performed in a construction site, and the degree of compaction and the vibration are measured. Then, the vibration signals collected from the experiment are processed to extract the features of vibration patterns and labeled with the compaction level to train the artificial neural network model. At last, the performance of the artificial neural network classifier is verified against the degree of compaction measured by using a nuclear density gauge. It can be found that artificial neural networks show good performance and huge potential for the problem of compaction quality control.
Introduction
The compaction process plays an important role in improving the strength and bearing capacity of materials for use in road construction. The existing compaction quality control relies on spot tests, such as the sand replacement method, falling weight deflectometer (FWD), and plate bearing test. These traditional manual measurements have several drawbacks [1,2]: (1) the measurements are usually time-consuming and may interrupt the subsequent construction operation; (2) test samples are collected at limited test points, and the testing results cannot indicate the overall pavement quality; (3) the measurements are performed after compaction; thus, it is impossible to provide real-time compaction quality information for the operator, which may lead to under or over compaction. To address these problems, the intelligent compaction (IC) technique is proposed to provide real-time compaction quality assurance during compaction.
So far, there are several equipment manufacturers around the world offering IC rollers to compact subgrade and aggregate materials. Several intelligent compaction measurement values (ICMV) are set up to evaluate the compaction quality, such as Compaction Meter Value (CMV), Compaction Control Value (CCV), Resonance Meter Value (RMV), Machine Drive Power (MDP), vibration modulus (Evib), and soil stiffness (Ks) [2,3]. CMV is widely accepted for quality assurance, and it is computed by the amplitude of vertical drum acceleration at the operating frequency and first harmonic. CCV and RMV further consider the high-order harmonics. Considering the nonlinearity vibration induced by the periodic loss of contact between soil and drum, Anderegg et al. [4,5] develop a feedback control system to automatically adjust the compaction parameters (vibration frequency, vibration amplitude, and driving speed) during construction. Due to the development of these helpful IC technologies, roller operator can optimize the compaction process timely according to the updated compaction information, and the compaction quality is improved effectively.
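As a rough illustration of how a harmonic-ratio indicator of the CMV type can be obtained from the drum acceleration signal, the sketch below estimates the amplitudes at the excitation frequency and its first harmonic from an FFT and forms their ratio. The sampling rate, excitation frequency, scaling constant (often quoted as 300 for CMV), and the synthetic signal are assumptions made for this example, not values taken from the cited studies.

```python
# Harmonic-ratio ICMV sketch: constant * A(2*f0) / A(f0) from the
# vertical drum acceleration, where f0 is the excitation frequency.
import numpy as np

def harmonic_amplitude(signal, fs, f_target, bandwidth=1.0):
    """Peak FFT amplitude of `signal` within +/- bandwidth Hz of f_target."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = np.abs(freqs - f_target) <= bandwidth
    return spectrum[mask].max()

def cmv_like(acceleration, fs, f0, constant=300.0):
    a_fund = harmonic_amplitude(acceleration, fs, f0)        # fundamental
    a_harm = harmonic_amplitude(acceleration, fs, 2.0 * f0)  # first harmonic
    return constant * a_harm / a_fund

# Synthetic drum signal: 30 Hz fundamental plus a weak first harmonic.
fs, f0 = 1000.0, 30.0
t = np.arange(0, 2.0, 1.0 / fs)
acc = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 2 * f0 * t)
print(cmv_like(acc, fs, f0))   # ~30 for this synthetic signal
```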
However, recent research studies found that there are still some uncertain correlations between ICMVs and compaction quality. Firstly, CMV, as a harmonic-based indicator, is easily influenced by many factors. Zhu et al. [6] test a multilayer structure and find that CMV is sensitive to the characteristics of the underlying layers, such as the stiffness and moisture content of the layers. White et al. [7] indicate that CMV is dependent on the vibration amplitude; therefore, a higher excitation force amplitude generally yields a greater CMV at a constant soil modulus. Wersäll et al. [8] conduct full-scale tests to study the influence of variable frequencies on compaction control. The results indicate that the resonant frequency is about 17 Hz and the optimum compaction frequency is about 18 Hz, while the standard operating frequency of the roller is about 31 Hz. This means that there is no direct correlation between the excitation force and compaction quality. Secondly, the mechanical-based ICMVs related to soil stiffness and vibration modulus manifest unstable changes due to their amplitude dependence. Mooney and Rinehart [9,10] demonstrate that soil stiffness on the soft layer decreases with increasing excitation force, while on the stiffer layer it exhibits the opposite trend. Further analysis by Mazari et al. [11] shows that roller type, machine operation setting variation, and instability of the machine in practical operation commonly affect the accuracy of ICMVs. Furthermore, some researchers [12][13][14][15][16] use operational modal analysis (OMA) for the structural health monitoring of engineering structures. Different from the experimental modal analysis methods, OMA uses the output-only response to identify the structural properties; thus, measurement of the input excitation can be avoided.
Recently, an artificial intelligence-based intelligent compaction analyzer (ICA) was developed by Barman et al. [17][18][19]. The frequency characteristics of drum vibration can be analyzed by the ICA, and extensive field testing shows that the results correlate well with subgrade modulus. Zhang et al. [20,21] utilize acoustic wave detection techniques to evaluate rock-fill compaction status, and a genetic algorithm-based optimization procedure was proposed to optimize the overall compaction process. The recent IC techniques are largely based on the vibration analysis of pavement during compaction, trying to build the correlation between the vibration of the pavement and compaction quality. However, the development of the IC technique is hindered by the complexity of the vibration during compaction. From another point of view, the development of the IC technique can be considered as a problem of signal processing and recognition. Fourier transform and artificial neural networks (ANN) have been widely used for signal processing and recognition nowadays, especially for speech recognition [22][23][24][25]. Zhan et al. [26] also use the ANN for radar waveform recognition.
These studies provide many references on the applications of the Fourier transform and ANNs. However, few studies have addressed compaction quality control using ANNs. In the studies by Barman et al. [17][18][19], an ANN is used to analyze the correlation between the vibration pattern and subgrade stiffness. For the development of IC techniques, neural networks can bypass some difficulties that cannot be solved easily by traditional methods, showing great potential.
Generally, the degree of compaction is the most direct index for compaction quality evaluation. The main objective of this study is to find an easy and convenient way to estimate the degree of compaction in real time. In this research, the compaction analysis is based on the hypothesis that the vibratory roller and the pavement form a coupled system during compaction.
The coupled response is determined by the excitation frequency and the natural vibration modes of the coupled system. Variations in the degree of compaction will affect the response and lead to different vibration patterns of the drum. Therefore, the compaction quality can be estimated by using the mapping between the vibration pattern and the degree of compaction.
In reality, however, the vibration pattern of the drum usually includes noise, which means that some features of the vibration pattern reflect the system while others reflect the noise. The features reflecting the system can be used to estimate the compaction quality, but these useful features cannot be recognized and extracted easily. In this paper, an artificial neural network (ANN) classifier is developed to identify the different characteristic patterns of drum vibration and classify them according to different compaction levels. A field compaction experiment is designed and performed at a construction site, and the degree of compaction and the vibration are measured. The vibration signals collected from the experiment are processed to extract the features of the vibration patterns and then labeled with the compaction level to train the ANN model. Finally, the performance of the ANN classifier is verified against the degree of compaction measured by using a nuclear density gauge (NDG).
Experimental Program and Signal Processing
It is assumed that the vibratory roller and the pavement underneath form a coupled system during compaction. Variations in the degree of compaction affect the coupled response and lead to different vibration patterns of the drum. To analyze and make use of the mapping between the vibration pattern and the degree of compaction, a field compaction experiment is designed and performed at a construction site to collect vibration signals and degree-of-compaction data.
Experimental Program.
A field test is performed on the extension project of the G2 expressway in Shandong Province, China. The typical pavement structure used in the project is shown in Table 1; it consists of three hot-mix asphalt (HMA) surface layers, one flexible base layer, two semirigid base layers, one sub-base layer, and the subgrade, in that order from top to bottom. The vibratory compaction test is carried out on the cement-stabilized gravel base layer with a thickness of 18 cm. The aggregate used for the cement-stabilized base layer is limestone, and the details of the mixture and equipment are given in Tables 2 and 3.
Two wireless accelerometers are mounted on the axle of the roller drum, one on each side, to monitor the vertical vibration acceleration signals, as shown in Figure 1. The compaction is carried out on ten test lanes, and each test lane has a length of 60 m. The width of each test lane is 2.13 m, the same as the width of the roller. Figure 2 shows the roller pass trajectory on each test lane; a total of 8 passes are performed for each lane. The degree of compaction is measured with a nuclear density gauge for each pass and computed as follows:

$$\mathrm{DOC} = \frac{\rho_d}{\rho_E} \times 100\%, \qquad \rho_d = \frac{\rho_w}{1 + w},$$

where DOC is the degree of compaction, ρ_E is the maximum dry density, ρ_w and w are the wet density and moisture content measured by the nuclear density gauge, and ρ_d is the dry density. For each test lane, two different locations are tested, as shown in Figure 2, and the relationship between the number of roller passes and the degree of compaction is investigated. This relationship is plotted in Figure 3. Here, the open circles denote the degrees of compaction obtained from the experiment using the NDG, and the solid circles denote the average degree of compaction for each pass. Ten test lanes are investigated and each test lane has two test locations; therefore, there are 20 experimental results for each pass. For the cement-stabilized gravel base, the minimum requirement for the degree of compaction is 98%, and about 6 passes are needed to reach this requirement.
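For concreteness, a minimal sketch of this computation (the readings below are hypothetical, not measured values from the experiment):

```python
def degree_of_compaction(rho_w, w, rho_e):
    """Degree of compaction (%) from NDG readings.

    rho_w: wet density, w: moisture content (as a fraction),
    rho_e: maximum dry density (same units as rho_w).
    """
    rho_d = rho_w / (1.0 + w)        # dry density
    return rho_d / rho_e * 100.0

# Hypothetical readings: wet density 2.42 g/cm^3, 5% moisture,
# maximum dry density 2.33 g/cm^3.
print(f"DOC = {degree_of_compaction(2.42, 0.05, 2.33):.1f}%")  # ~98.9%
```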
Signal Processing.
The vibration signals are collected continuously from the two accelerometers as the vibratory roller moves along the test lane. The length of the test lane is 60 m and the velocity of the roller is about 0.6 m/s, so one pass takes about 100 s. Therefore, one accelerometer yields a 100-second-long vibration signal per pass, and a total of 160 long signals can be collected, since there are ten test lanes, eight passes per lane, and two accelerometers. Theoretically, the vibration signals from the two accelerometers have the same frequency components, even though their amplitudes may differ. The vibration is sampled at a 2 kHz sampling frequency. The long vibration signal of one pass is divided into many 0.5-second short signals. Each short signal includes 1000 contiguous data samples and overlaps the previous one by 500 samples. Each short signal is converted to a frequency-domain representation using a fast Fourier transform (FFT). Since the vibration is sampled at 2 kHz, the Nyquist frequency is 1 kHz. Therefore, a single-sided FFT provides a frequency spectrum distributed between 0 and 1 kHz, and the output of the single-sided FFT for each short signal is an array of 500 elements, expressed as $a = (a_1, a_2, \ldots, a_{500})$. By using the FFT, the features of the vibration signal are expressed as frequency components. Amplitude is not considered a feature in this study, so the array a is normalized to eliminate the effects of amplitude. The normalized array x is obtained by applying a logarithmic operation to each element of a and normalizing the result (equation (2)). Here, the logarithmic operation in equation (2) is used to amplify some inconspicuous frequency components. The signal processing method of this section is shown in Figure 4 and summarized in the following steps:

(1) The long vibration signal from one accelerometer in one pass is divided into many 0.5-second short signals.
(2) Each short signal is converted to a frequency-domain representation by a single-sided FFT to extract the frequency features.
(3) The output of the single-sided FFT for each short signal is normalized using equation (2) to eliminate the effects of amplitude and to amplify the inconspicuous frequency components.
(4) The processed signals are input to the ANN for training or for predicting the degree of compaction.

The frequency features of the vibration signals are extracted by this method, and the processed arrays are used as input data for training the ANN model. In the following discussion of the ANN, the input array x is called a "sample," and an element of x is called a "feature." In this study, therefore, each sample has 500 features.
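A minimal sketch of this feature-extraction pipeline (the exact normalization in equation (2) is not recoverable here, so the unit-norm scaling below is an assumption):

```python
import numpy as np

FS  = 2000   # sampling frequency (Hz)
WIN = 1000   # 0.5-second window
HOP = 500    # 500-sample overlap between consecutive windows

def extract_samples(signal):
    """Split a long drum-vibration record into 0.5 s segments and map
    each one to a 500-element frequency-feature array (one 'sample')."""
    samples = []
    for start in range(0, len(signal) - WIN + 1, HOP):
        seg  = signal[start:start + WIN]
        spec = np.abs(np.fft.rfft(seg))[:WIN // 2]  # 500 bins, 0-1 kHz
        feat = np.log10(spec + 1e-12)               # amplify weak components
        feat = feat / np.linalg.norm(feat)          # assumed normalization
        samples.append(feat)
    return np.array(samples)                        # shape (n_segments, 500)

# Example on synthetic data: 100 s of noise at 2 kHz -> 399 samples.
print(extract_samples(np.random.randn(100 * FS)).shape)
```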
Before being used to train the ANN, each sample should be labeled with a target class. In this research, we use four target classes to represent four compaction levels, as shown in Table 4. According to the experimental results of the degree of compaction shown in Figure 3, each pass can be associated with one of these compaction levels, and the samples collected during that pass are labeled accordingly.
Development of the ANN Model
A multilayer perceptron (MLP) feedforward neural network is used in this research. Figure 5 shows the structure of the network.
The network consists of one input layer, two hidden layers, and an output layer. There are 500 nodes in the input layer, since each sample has 500 features. The first and second hidden layers contain 44 and 10 nodes, respectively. The output layer contains four nodes representing the four compaction-level classes. Figure 5(b) shows the schematic of a single neuron. Each node is governed by the following equation:

$$x^{(l+1)}_{s,j} = f\!\left(\sum_{i=1}^{k} w^{(l)}_{i,j}\, x^{(l)}_{s,i} + b^{(l)}_{0,j}\right),$$

where the subscript s denotes the sth sample, k denotes the number of nodes in layer l, $x^{(l)}_{s,i}$ is the ith input of layer l, and the output $x^{(l+1)}_{s,j}$ is also the jth input of layer l + 1. $w^{(l)}_{i,j}$ is the weight from the ith input to the jth output, and $b^{(l)}_{0,j}$ is the weight from the bias term of layer l to the jth output. All bias terms are "+1" in this research. f(·) is the activation function: a softmax function, $\mathrm{Sof}(z_j) = e^{z_j} / \sum_{i=1}^{4} e^{z_i}$, is used in the output layer to ensure that the predictions for each sample lie in the range [0, 1] and sum to 1, and a sigmoid function, $\mathrm{Sig}(z) = (1 + e^{-z})^{-1}$, is used in the remaining layers.
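A minimal numpy sketch of this 500-44-10-4 structure (the weights below are random placeholders, not trained values; the "+1" bias input is folded into the additive term b):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())            # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
W1, b1 = 0.05 * rng.normal(size=(500, 44)), 0.05 * rng.normal(size=44)
W2, b2 = 0.05 * rng.normal(size=(44, 10)), 0.05 * rng.normal(size=10)
W3, b3 = 0.05 * rng.normal(size=(10, 4)),  0.05 * rng.normal(size=4)

def forward(x):
    """Forward pass: sigmoid hidden layers, softmax output layer."""
    h1 = sigmoid(x @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    return softmax(h2 @ W3 + b3)       # four class probabilities, sum to 1

x = rng.normal(size=500)               # one processed vibration sample
print(forward(x))
```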
Training of the ANN Model
A supervised learning method is used to train the ANN. The network can be trained to classify inputs according to target classes.
The target data should consist of arrays of all 0 values except for a 1 in element c, where c is the class they represent, as shown in Table 4. The cross-entropy is used as the loss function to measure the network's performance. The loss associated with the sth prediction is

$$\mathrm{CE}_s = -\sum_{c} y_{s,c} \ln \hat{y}_{s,c},$$

where y is the target array, ŷ is the output array of the output layer, the subscript s denotes the sth sample, and c denotes the cth element of the target or output array. The cross-entropy loss of the entire training dataset is the average of CE_s over all samples. In this work, the scaled conjugate gradient (SCG) algorithm [27] is used to perform the training. SCG is based on a class of optimization algorithms called conjugate gradient methods (CGM), but it avoids the line search per learning iteration by using a Levenberg-Marquardt approach to scale the step size. SCG can train any network as long as its weight, net input, and activation functions have derivatives. Backpropagation is used to calculate the derivatives of the loss function with respect to the weights.
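A small sketch of this loss (the target and prediction below are illustrative):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Average cross-entropy loss over all samples.
    y_true: one-hot targets, shape (n, 4); y_pred: softmax outputs."""
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))

y_true = np.array([[0.0, 0.0, 1.0, 0.0]])   # a sample labeled class 3
y_pred = np.array([[0.1, 0.2, 0.6, 0.1]])   # network prediction
print(cross_entropy(y_true, y_pred))        # -ln(0.6) ~= 0.51
```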
To avoid overfitting during neural network training, the dataset is randomly divided into three subsets: a training set, a validation set, and a test set. The training set is used for computing the gradient and updating the network weights. The validation set is used to avoid the overfitting problem. The error on the validation set normally decreases during the initial phase of training, as does the error on the training set. When the network begins to overfit the data, the validation error typically begins to increase. When the validation error increases for several iterations, the training should be stopped. In this work, when the validation error keeps increasing for six iterations, the training is stopped and the weights at the minimum of the validation error are returned. The test set is not used during training; it serves as a completely independent test of network generalization. In this study, the validation and test datasets are each set to 15% of the original data.
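A self-contained sketch of this stopping rule (the validation curve below is a synthetic stand-in for real validation-set evaluations, and the commented line marks where the SCG weight update would go):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in validation curve: improves for 30 epochs, then overfits.
val_curve = np.concatenate([np.linspace(1.0, 0.2, 30),
                            np.linspace(0.21, 0.5, 30)])

best_val, best_epoch, patience = np.inf, -1, 0
for epoch, val_loss in enumerate(val_curve + rng.normal(0, 0.003, 60)):
    # ... one SCG weight update on the training set would go here ...
    if val_loss < best_val:
        best_val, best_epoch, patience = val_loss, epoch, 0  # snapshot weights
    else:
        patience += 1
        if patience >= 6:        # validation error rose for six iterations
            break                # restore the snapshotted weights
print(f"stopped at epoch {epoch}, best epoch {best_epoch}")
```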
The data of test lanes 1 to 8 are used for training in this work. The signals collected in these eight test lanes are processed using the method described in Section 2.2, forming the training dataset. The order of the samples in the dataset matrix is arranged randomly, since the samples are considered independent of each other. The data of test lanes 9 and 10 are used to test the validity and performance of the ANN model. The training performance of the ANN is shown in Figure 6. The performance is also visualized in the form of a confusion matrix in Figure 7. In this confusion matrix, the rows correspond to the predicted output class, and the columns correspond to the target class. The diagonal cells correspond to correctly classified samples, and the off-diagonal cells correspond to incorrectly classified samples. Both the number of samples and the percentage of the total number of samples are shown in each cell. The column on the far right of the plot shows, for each predicted class, the percentages of samples that are correctly and incorrectly classified. The row at the bottom of the plot shows, for each target class, the percentages of samples that are correctly and incorrectly classified. The cell at the bottom right of the plot shows the overall accuracy.
Test Results of the ANN Classifier
A good training performance of the ANN can be seen in Figure 7. By combining the signal processing method and the ANN model, we obtain an ANN classifier, as shown in Figure 8. When a 0.5-second signal is input, the ANN classifier outputs an estimated compaction level.
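Reusing `extract_samples` and `forward` from the sketches above, the whole classifier can be expressed in a few lines (compaction levels are numbered 1-4 as in Table 4):

```python
import numpy as np

def classify(short_signal):
    """End-to-end classifier: one raw 0.5 s drum signal -> compaction level."""
    feat  = extract_samples(short_signal)[0]   # 500 frequency features
    probs = forward(feat)                      # four class probabilities
    return int(np.argmax(probs)) + 1           # compaction level 1..4

print(classify(np.random.randn(1000)))         # synthetic 0.5 s signal
```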
In this section, the data of test lanes 9 and 10 are used to test the validity and performance of the ANN classifier. As mentioned in Section 2, a total of 32 long vibration signals are collected on these two test lanes. Considering that, under actual working conditions, the signal samples may not be collected continuously or in order during the real-time estimation of compaction quality, we randomly capture 0.5-second short signals from the long signals as inputs to the ANN classifier. The output accuracy of the ANN classifier is shown in Figure 9. The results indicate that the ANN classifier is accurate enough for the real-time estimation of compaction quality.
Discussion on Roller Moving Direction during Compaction
The analysis in this paper mainly focuses on the compaction of the cement-stabilized gravel base. The main intention of this study is to find an easy way to estimate the degree of compaction in real time. The ANN classifier developed in this paper partly achieves this goal by using vibration pattern recognition. In fact, it is difficult to analyze the vibration of the cement-stabilized gravel base layer by traditional methods because of its inhomogeneity and anisotropy. Therefore, the authors use neural networks. Although a neural network is a black box for the user, it works well in this study.
From the experimental results shown in Figure 3, it can be seen that the degree of compaction increases slowly from pass 5 to pass 8. From the neural network's point of view, the vibration signals of passes 5 to 8 should look similar, since the values of the degree of compaction differ little (they are in the same compaction level). In this section, we instead label the samples with the number of passes, and the network is trained to classify the input samples according to 8 target classes.
The training performance is shown in Figure 10. Some samples of pass 5 are misclassified as target class 7, and some samples of pass 6 are misclassified as target class 8. However, the ANN performs well on the classification between classes 5 and 6 and between classes 6 and 7. In other words, the ANN finds that some vibration signals of pass 5 look like the signals of pass 7, while the signals of passes 5 and 6 are completely different. This is unexpected, because the values of the degree of compaction of passes 5 and 6 are closer, so their signals should look more similar. Similarly, the ANN finds that some signals of pass 6 look like the signals of pass 8, but not like the signals of pass 7. The roller moving direction may be the reason for this phenomenon. During compaction, as shown in Figure 2, the roller moves to the north in passes 1, 3, 5, and 7, and to the south in passes 2, 4, 6, and 8. Due to its inhomogeneity and anisotropy, the cement-stabilized gravel base layer shows different properties and vibration responses in different roller moving directions. The effect of the roller moving direction should therefore be considered in future studies on the vibration analysis of compaction.
Conclusions and Outlooks
The primary goal of this paper is to find an easy and convenient way to estimate the degree of compaction in real time.
The ANN classifier developed in this paper partly achieves this goal. The main conclusions and findings of this research are as follows:

(1) A signal processing method is proposed. The signals collected in the experiment are converted to a frequency-domain representation by a single-sided FFT. The frequency features of the vibration signal are extracted and expressed in logarithmic form to amplify some inconspicuous frequency components. The vibration signals are also normalized to eliminate the effects of amplitude.
(2) An ANN model is designed and trained to identify the different vibration patterns of the drum and classify them according to the different compaction levels. The correlation between the vibration patterns and the compaction quality is built by the ANN model.
(3) An ANN classifier is developed by combining the signal processing method and the ANN model. The ANN classifier can estimate the compaction quality in real time from the input vibration signal. The testing results show that the ANN classifier performs well in real-time compaction quality estimation.

(4) The effect of the roller moving direction during compaction is observed and analyzed by using the ANN. This effect may be important and should be considered in future studies.
Essentially, the development of IC techniques is based on the idea that the compaction quality can be evaluated by identifying the vibration patterns of the pavement. Therefore, IC can be regarded as a pattern recognition problem, and the ANN is well suited to it. However, the work in this paper still has room for improvement. To improve the performance of the ANN classifier, a large amount of training data is required. Moreover, different pavement materials (such as those in the pavement structure in Table 1) have different properties; therefore, training different ANN models for different materials is necessary. More projects and materials will be included in our future studies.
Data Availability
The data required to reproduce these findings cannot be shared at this time as the data also form part of an ongoing study.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper. | 5,588.8 | 2020-12-22T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Influence of volume effect on electrical discharge initiation in mineral oil in the setup of insulated electrodes
This article deals with the problem of electrical discharge initiation in mineral oil in a setup of insulated electrodes. The results of laboratory experimental studies were collated with the results of numerical calculations of electrical field stress. Both research methods were applied to analyze two model electrode setups immersed in mineral oil: a setup with an insulated HV electrode and a setup with a bare HV electrode having the same outer dimensions as the insulated one. The obtained similarities in the values of maximum inception electrical field stress and the equality of the experimentally evaluated initiation delays indicated that the most stressed oil volume, and the weak points included in it, may be responsible for discharge initiation in paper–oil insulation setups with oil of technical purity. The same number of weak points included in the oil may be, in both considered cases, an equally productive source of initiation sites.
Introduction
The studies on electrical discharge initiation in dielectric liquids have been conducted for many years. In these studies, initiation has been considered both in pure hydrocarbon liquids and in transformer mineral oil of technical purity [1][2][3][4][5][6][7][8][9]. A newer part of the experimental studies has also focused on environmentally friendly dielectric liquids such as natural and synthetic esters [10][11][12]. The most often analyzed electrode setups have been point-plane setups representing an extremely non-uniform electrical field distribution. The typical voltage waveform used in studies on discharge development in dielectric liquids has been the lightning impulse with different characteristic times. Since mineral oil is the most important liquid from the practical point of view, the largest number of studies has focused solely on it. The wide spectrum of research indicates that electrical discharge initiation and propagation are very complex processes involving different physical phenomena. Depending on the chemical composition and physical properties of the liquids, pressure and temperature, type of testing voltage, and electrode geometry, the processes determining discharge initiation may be bubble formation by Joule heating, cavity formation by electromechanical forces, or electrostatic emission [1,3,4,7,8]. Discharge propagation may, in turn, result from ionization of the gas included in the discharge channel (development of slow 1st- and 2nd-mode discharges) or from direct liquid ionization, a process characteristic of the development of 3rd- or 4th-mode discharges. This second type of ionization occurs when the voltage significantly exceeds (by more than two times) the 50% breakdown voltage for the given electrode gap [1][2][3][4][5][9][10][11][12]. Phenomena similar to those observed in the setups of bare electrodes in point-plane arrangements were recorded for electrode setups in which the HV electrode was covered by paper insulation. In such setups, characterized by a quasi-uniform electrical field distribution, the discharge characteristics were practically the same as in the setups with a high degree of field non-uniformity [13][14][15][16][17]. Thus, the phenomena responsible for electrical discharge initiation and development may be considered the same as in the setups with electrodes without paper insulation. An additionally important phenomenon responsible for discharge initiation in oil is the well-known volume effect of the most stressed oil. According to the theory of this phenomenon, electrical discharges in oil may be initiated in the oil volume subjected to an electrical field stress higher than 90% of the maximum value for the given electrode setup. This means that the more impurities or gas bubbles the considered oil volume contains, the higher the probability of discharge initiation. On the other hand, increasing the most stressed oil volume decreases the electrical strength of the insulating system exponentially [8,14,18]. Hence, taking into account the existing knowledge in the field of discharge initiation and propagation in mineral oil, confirming the phenomenon responsible for such initiation in setups of paper-oil insulation with oil of technical purity became the aim of the presented studies.
These studies were divided into two parts: laboratory experiments and numerical calculations of the electrical field distribution. The results of the performed experiments and their discussion were limited to one parameter, the most important from the discharge-initiation point of view: the time to initiation. This parameter describes the delay of the moment of initiation with respect to the moment of applying the lightning impulse to the electrode setup. The simulation of the electrical field distribution, in turn, was focused on determining the maximum electrical field stress occurring in the investigated electrode setups at the inception voltage.
Laboratory studies
The main aspect of the experimental laboratory studies was the assessment of the influence of paper insulation on the parameters characterizing the electrical discharges developing in mineral oil in the setup of insulated electrodes. The parameters selected for analysis were the inception voltage, the time to discharge initiation, and the propagation velocity. Simultaneously, the spatio-temporal development of the discharges was observed on the basis of the shadowgraph photos taken and the registered light oscillograms [16,17]. The research was performed in an automated laboratory system consisting of two cooperating experimental systems. This laboratory system is presented schematically in Fig. 1.
In the first system, the single-shot shadowgraph method with a Q-switched neodymium YAG laser as a flash lamp was used to record photos of the discharge forms. In the second system, a photomultiplier tube and a digital storage oscilloscope were used to register the light pulses emitted by the developing discharges. The trigger and control unit was responsible for detecting discharge initiation and for measuring the time to discharge initiation [12,16,17].
Two model electrode setups used in the experimental studies are presented schematically in Fig. 2.
The HV electrode in the first setup was insulated with crepe paper, while in the second setup this electrode was bare, having the same outer dimensions as the insulated one. Both setups were characterized by a quasi-uniform electrical field distribution. In both setups, the HV electrode was a brass wire formed in the shape of the capital letter U. In the setup with the insulated HV electrode, the wire had a diameter of 4 mm, while in the setup with the bare HV electrode it was 4.8 mm. The 0.8 mm difference (0.4 mm on each side) resulted from the thickness of the crepe-paper insulation wrapped around the thinner wire, which created the insulation on the HV electrode. This means that identical outer dimensions of the HV electrode were obtained in both cases. The grounded part of both electrode setups was identical. The grounded electrode was a metal plate 195 mm in diameter. On this plate, a 5-mm-thick transformerboard insulating plate was placed. This was done to prevent complete breakdown, which, by emitting very intense light, could destroy the sensitive optical devices used in the measurements. Both setups were immersed in a test cell filled with commercial mineral oil of technical purity. The studies were performed under the standard lightning impulse voltage with characteristic times of 1.2/50 μs produced by a Marx generator. Both positive and negative polarities were used during the investigations [14,16,17].
As mentioned above, the time to initiation (t) was the main parameter considered from the discharge-initiation point of view. This time was measured with an accuracy of 0.1 μs as the time between the moment of applying the lightning impulse to the given electrode setup and the beginning of the discharge, as determined by the discharge-initiation detection system. The latter was based on measuring the light emitted by the discharge channels. The moment of initiation was marked by the first light pulse generated by the developing discharge and registered by the photomultiplier tube installed in the light-registration system [12,16,17]. An example of a recorded time course illustrating the method of evaluating the time to initiation is shown in Fig. 3.
To compare the results of measurements performed under the same field conditions, the time to initiation was measured for both electrode setups at the same value of the testing voltage. This voltage was chosen as the statistically estimated inception voltage (the median of the measured values) corresponding to the setup with the insulated HV electrode: 190 kV for the positive polarity of the lightning impulse and 192 kV for the negative polarity [16,17]. This choice resulted from the fact that the inception voltage for the setup with the insulated HV electrode was higher than for the setup with the bare HV electrode. Thus, it can be stated with relatively high probability that discharge initiation will follow each applied lightning impulse. On the other hand, the difference in inception voltage between the two setups was so small that the chance of direct initiation and subsequent development of fast 3rd-mode discharges in the setup with the bare HV electrode was practically nonexistent.
Statistically estimated values of the times to initiation, based on tens of individual measurements for each voltage polarity and each electrode setup, are presented in Table 1. These times were described by a log-normal distribution; thus, the average values t and standard deviations σ, together with the corresponding confidence intervals, are included in the table [17,19,20].
The fundamental conclusion resulting from Table 1 is a clearly visible lack of differences between the setup with the bare HV electrode and the setup with the insulated HV electrode in the estimated times to initiation of the discharges. This conclusion concerns both the positive and the negative lightning impulse. The equality of the measured times also applies to the standard deviations assigned to them, which differ from each other only within a very small range. In order not to leave the conclusion about the equality of the times to initiation merely as a statistical assumption resulting from an intuitive interpretation of the measured values, the hypothesis of their equality was verified using the analysis of variance (ANOVA) method. The ANOVA test showed that there was no reason to reject the hypothesis of the equality of the average times to initiation. From the practical point of view, the small difference obtained may be recognized as negligible [20].
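A minimal sketch of such a test (the times below are synthetic log-normal draws standing in for the measured values in Table 1, not the actual data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic times to initiation (microseconds) for the two setups.
t_insulated = rng.lognormal(mean=np.log(2.0), sigma=0.25, size=40)
t_bare      = rng.lognormal(mean=np.log(2.0), sigma=0.25, size=40)

f_stat, p_value = stats.f_oneway(t_insulated, t_bare)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A large p-value gives no reason to reject the equality of the means.
```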
Simulation of electrical field distribution
For the analysis of the electrical field distribution in the investigated model electrode setups, the finite element method (FEM) implemented in commercially available software was used. The first step of the simulation was shaping, in three-dimensional (3D) space, the electrode setups that had been used in the laboratory experimental studies. Figure 4 presents an example of such shaping for the setup with the bare HV electrode.
After shaping both electrode setups, materials with the same properties as those used in the experimental studies were assigned to the individual setup components. Then, the relative electrical permittivity ε_r was assigned to the individual materials (2.2 for oil and 4 for the transformerboard insulating plate and the paper insulation). Simultaneously, an electric potential was applied to the HV electrode. To simplify the calculations, and taking into account the small difference between the estimated inception voltages for the two polarities, the electric potential of the HV electrode taken for the simulation was 190 kV. Because the simulations sought the electrical field distribution in the interelectrode space, in other words, the value of the maximum field stress, it was assumed that the voltage applied to the HV electrode is of DC type. This is a simplification in relation to the actual laboratory measurements; however, in the authors' opinion, it is not too far-fetched. The essence of the simulations was to determine the maximum electrical field stress for a lightning impulse of predetermined peak value applied to the electrode setups, which was considered the inception voltage. The highest value of the electrical field, given the geometry of the electrodes and the applied voltage, occurs just at the moment when the lightning impulse reaches its peak value; hence, the above assumption seems correct. Additionally, it is important to note that in the presented work only the moment of discharge initiation was analyzed. Thus, a static simulation was performed as a fast and valuable approach. The dynamics of the changes in the applied voltage were not considered, because they were not necessary for analyzing the moment of initiation or the problem associated with the volume effect in oil [14,18]. Future work will, however, also address the modeling of voltage changes over time (the actual lightning impulse) and the influence of space charge on the process of discharge propagation.
In the next step, according to the assumptions of the finite element method, a mesh of tetrahedral elements was designed for each of the distinct cases. It was assumed that the densest distribution of tetrahedral elements creating the computational mesh had to be applied in the insulating space surrounding the HV electrode. This followed from the expected results and the knowledge of the possible discharge-initiation area [13][14][15][16][17]. The rest of the space, for example the insulating plate, did not require a high-density mesh, because the values of electrical field stress there were not important from the point of view of the considered issue. In each case, the same density of the computational mesh was used to allow a reliable comparison of the obtained results. The results of the simulations presenting the electrical field distribution in both model electrode setups are shown in Fig. 5. The area of maximum electrical field stress appeared in close proximity to the HV electrode, thus around the place where the initiation of discharges in oil was observed during the experimental studies. To confirm this fact, exemplary photos of discharges (positive ones) developing typically in the considered electrode setups at the inception voltage are presented together with the electrical field distribution [14,16,17,19]. In the figure, the highest values of electrical field stress are represented by the red color and its shades, while the lowest values, down to zero, are represented by the blue color and its shades. Because the most important area for analysis was the close vicinity of the HV electrode, the simulation results show only this area, magnified. The maximum values of electrical field stress obtained from the simulations are given in Table 2.
These values, corresponding to the inception electrical field stress (not the breakdown strength, because breakdown did not happen during the experimental studies), were very similar to each other. The small difference between the obtained values is, first of all, a result of the small difference in the geometry of the two setups. Although the layer of insulating paper on the HV electrode is very thin, it does influence the electrical field distribution, and this influence was anticipated before the simulations began. The border separating materials of different electrical permittivity (paper and oil) causes a difference in the electrical field distribution, so obtaining different maximum values of electrical field stress is expected in the case under consideration. On the other hand, the outer dimensions of the metal parts of the two HV electrodes are also slightly different: the bare electrode had a larger diameter because it was enlarged by a thickness corresponding to the thickness of the paper insulation covering the insulated HV electrode. Thus, the results were not expected to be exactly the same in both cases [8,13,18]. This expectation was also based on the knowledge of insulation systems with series (layered) insulation. Applying the well-known Eq. (1), describing the electrical field stress in such setups,

$$E_k = \frac{V}{\varepsilon_{rk} \sum_{i=1}^{n} \dfrac{a_i}{\varepsilon_{ri}}}, \qquad (1)$$

where E_k is the electrical field stress in the given layer, V is the electric potential of the HV electrode, ε_r is the relative electrical permittivity of the given layer, a is the thickness of the given dielectric material, and n is the number of layers, the relationship between the setups with and without paper insulation may be confirmed. Although the setups under consideration had a quasi-uniform electrical field distribution, the general dependence should be identical.
For these simple calculations, the following values were assumed for the setup with the insulated HV electrode: electric potential V = 190 kV, relative electrical permittivity of paper ε_r1 = 4, relative electrical permittivity of mineral oil ε_r2 = 2.2, thickness of the paper insulation a_1 = 0.4 mm, and length of the oil gap a_2 = 20 mm. For the setup with the bare HV electrode, the equation reduces to the so-called average electrical field stress, since only the applied voltage V and the length of the oil gap enter the calculation. The results obtained are given in Table 3.
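A minimal numeric check of Eq. (1) with these values (the printed numbers are computed here, not quoted from Table 3):

```python
V = 190e3                          # applied potential (V)
layers = {"paper": (4.0, 0.4e-3),  # (relative permittivity, thickness in m)
          "oil":   (2.2, 20e-3)}

denom = sum(a / eps for eps, a in layers.values())

# Insulated HV electrode: field stress in each series layer, Eq. (1).
for name, (eps, _) in layers.items():
    print(f"{name}: {V / (eps * denom) / 1e6:.2f} kV/mm")   # paper 5.17, oil 9.40

# Bare HV electrode: average field stress V/d over the oil gap alone.
print(f"bare electrode, oil: {V / 20e-3 / 1e6:.2f} kV/mm")  # 9.50
```

The oil stress comes out slightly higher for the bare electrode (9.50 versus 9.40 kV/mm), which matches the qualitative statement below.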
As expected, the higher value of electrical field stress in the oil was obtained for the setup with the bare HV electrode, and yet, because of the thin layer of paper insulation, the difference between the two setups was very small.
Discussion
The experimental laboratory studies indicated the equality of the measured times to initiation for the considered electrode setups. On this basis, it may be concluded that the reason for this equality lies in the same source of weak points responsible for the process of discharge initiation. Because the initiation delay is the same in both cases, it does not have to be influenced by the structure at which initiation originated. It is hardly probable that the paper insulation and the metal of the electrode are identical sources of weak points in the tested setups. In a setup of paper-oil insulation with oil of technical purity, such a source seems to be only the oil bath. This conclusion is additionally supported by the results of other work in this field [13], in which the influence of artificially introduced weak points on the electrical strength of insulation setups was investigated. Weak points of different kinds were placed in the paper insulation wrapping the HV electrode, and the inception voltage for each kind of weak point was measured separately. The result of this experiment showed that the electrical strength, understood as the statistically estimated location parameter of the Weibull distribution, did not decrease. From the above, it may be supposed that the volume effect in oil plays an important role in the process of discharge initiation. The simulation of the electrical field distribution and the determination of the maximum electrical field stress confirmed this supposition. The calculated maximum values of electrical field stress in the considered electrode setups were almost identical: 0.4 MV/cm for the setup with the insulated HV electrode and 0.42 MV/cm for the setup with the bare HV electrode, respectively. Relating this to the identical outer dimensions of both setups, it may be concluded that the most stressed oil volume in both considered cases should also be identical. Thus, the same volume of the most stressed oil contains the same number of weak points, and these weak points cause the same initiation delay.
Finally, in order to confirm the reliability of the maximum electrical field stress values obtained from the simulations, these values were compared with the characteristic values of the inception electrical field stress of discharges in oil. The literature on this subject contains many publications providing such threshold values of inception electrical field stress, both for setups with a non-uniform electrical field distribution (point-plane) and for setups with uniform and quasi-uniform field distributions [1,3,4,6-8]. These data come from theoretical considerations and experimental research, and are also supported by appropriate mathematical calculations. In general, these considerations indicate that the values of the inception electrical field stress of slow electrical discharges developing in mineral oil under impulse voltage range from tenths of MV/cm to a few MV/cm. These values, however, depend on the degree of field non-uniformity and the type of liquid. In setups with uniform and quasi-uniform electrical field distributions, the inception electrical field stress is about 0.3-0.5 MV/cm (especially for liquids of technical purity), while in setups with a non-uniform electrical field distribution and liquids of laboratory purity it may reach a few or even 10 MV/cm [1][2][3][4][6][7][8]. In the first case, the generally lower values of inception electrical field stress result from the uniform electrical field distribution and from the influence of the surface and volume effects on the electrical strength of such setups. In the second case, when the setups are characterized by a non-uniform electrical field distribution (point-plane electrode arrangements), the values of inception electrical field stress change with the radius of curvature of the HV point electrode: lower values are observed for larger radii of curvature. This may be explained by the fact that the surface of the tip of the HV electrode increases, so the surface effect starts to have an influence. However, it is important to remember that increasing the radius of curvature increases the inception voltage of the discharges. On the other hand, the electrical strength of some hydrocarbons (the electrical field stress at which breakdown happens under the given conditions) is, according to the considerations presented in [1], in the range of 1-2 MV/cm. Thus, the initiation of discharges that decay in the interelectrode gap and do not cause complete breakdown should take place at values much lower than those corresponding to the breakdown electrical field stress.
Besides depending on the degree of field non-uniformity (electrode configuration), the inception electrical field stress also depends on the width of the oil gap. In the case of small gaps (up to 5 mm), discharge initiation is practically always connected with breakdown; hence, the electrical strength of the setup, expressed in terms of electrical field stress, is equal to the inception electrical field stress [1,3]. Some papers indicate, based on theoretical considerations concerning the physico-chemical nature of the liquids, that the threshold value of the inception electrical field stress is directly connected with the threshold of electrostatic emission or field ionization occurring in the liquid volume. In such a case, the values for field emission are from 7 to 20 MV/cm (determined by recording the direct current), and they depend on the radius of curvature of the point electrode. With a larger radius of curvature and a lower electrical field stress, the phenomenon of the critical volume was observed, and this phenomenon was assessed as decisive in the process of discharge initiation. Concerning field ionization, the values of electrical field stress at which this ionization may take place were assessed to be of the order of 10 MV/cm. For example, a higher electrical field is needed in the case of cyclohexane, for which the ionization energy is 8.75 eV, while for mineral oil, consisting mostly of aromatic compounds, the threshold electrical field stress causing ionization may be lower, because the ionization energy of benzene is circa 7 eV (the benzene ring is a main part of the aromatic hydrocarbons included in mineral oil). It is important to note, however, that the above values for field emission and field ionization correspond only to ideal liquids, for which the considerations did not take into account the possible presence of impurities or gas bubbles in the liquid volume [1,4,6,7].
On the basis of the above data, it is clearly visible that the values obtained for the model electrode setups with a quasi-uniform electrical field distribution are in accordance with the common approach to the issue of discharge initiation in liquid dielectrics. For setups with an electrical field distribution close to uniform, the literature values of the inception electrical field stress range from 0.3-0.5 MV/cm to about 1 MV/cm, while the values corresponding to the model electrode setups representing a quasi-uniform electrical field distribution are around 0.4 MV/cm. Taking additionally into account that both the surface of the electrode and the surface of the insulation wrapping were shaped as ideal in the simulations (without any irregularities locally increasing the electrical field stress), the obtained values may be treated as fully reliable.
Conclusions
The above considerations concerning the relationship between the experimentally measured times to initiation of discharges in oil and the results of the simulations of maximum electrical field stress allow the following conclusions to be drawn:

1. The most stressed oil volume and the weak points included in it may indeed be responsible for discharge initiation in mineral oil of technical purity. This does not depend on whether the HV electrode is covered by paper or bare. This conclusion results from the equality of the times to initiation measured under the same testing conditions and from the practically identical values of maximum electrical field stress obtained for both considered model electrode setups through FEM simulations. However, this conclusion does not diminish the role of the paper insulation in the initiation process: this insulation repels the initiation sites out of the region of high field stress. This confirms the feasibility and correctness of the approach that gives special attention to the quality of the insulating oil used in high-voltage insulating systems with paper-oil insulation.

2. It is possible to initiate an electrical discharge in a setup of insulated electrodes with a quasi-uniform electrical field distribution in mineral oil at a maximum electrical field stress in the range of 0.4-0.5 MV/cm. This is especially possible for oil of technical purity, which contains impurities able to constitute the weak points of the setup and thus determine its electrical strength. | 5,992 | 2016-08-23T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Physics"
] |
Redshift: Manipulating Signal Propagation Delay via Continuous-Wave Lasers
We propose a new laser injection attack Redshift that manipulates signal propagation delay, allowing for precise control of oscillator frequencies and other behaviors in delay-sensitive circuits. The target circuits have a significant sensitivity to light, and a low-power continuous-wave laser, similar to a laser pointer, is sufficient for the attack. This is in contrast to previous fault injection attacks that use high-powered laser pulses to flip digital bits. This significantly reduces the cost of the attack and extends the range of possible attackers. Moreover, the attack potentially evades sensor-based countermeasures configured for conventional pulse lasers. To demonstrate Redshift, we target ring-oscillator and arbiter PUFs that are used in cryptographic applications. By precisely controlling signal propagation delays within these circuits, an attacker can control the output of a PUF to perform a state-recovery attack and reveal a secret key. We finally discuss the physical causality of the attack and potential countermeasures.
Introduction
There is a continuous demand for network-enabled embedded devices to extend evergrowing information technologies further into the physical world. Unfortunately, any new technology always comes with new attack surfaces; such embedded devices are exposed to local attackers with physical access. Ensuring security against local attackers is a challenging task because they can use physical attacks including side-channel attacks [MOP07] and fault-injection attacks [DFM + 11, JT12,KSV13]. The attacks pose realistic threats to otherwise secure embedded devices, such as smartcards, and researchers have been studying new attacks and countermeasures for more than two decades.
Laser fault injection (LFI) induces faults on a target chip by applying laser stimulation [SA02]. Among many ways to inject faults [DFM + 11, JT12,KSV13], LFI is considered particularly effective because of its high spatial selectivity. By illuminating a particular coordinate with a tiny laser spot, an attacker can induce precise faults such as a single bit flip in a particular address in memory. This is in contrast to the other fault-injection methods, such as clock glitching, that globally affect a target chip. The industry considers LFI a realistic threat and certification schemes (e.g., EAL5+ in Common Criteria [Joi20]) require penetration testing against LFI. To support this ecosystem, several vendors sell LFI instruments for security assessment [Risb, Alpa].
The conventional LFI focuses on digital circuits that implement cryptographic algorithms [DBC + 18]. The state-of-the-art LFI setup is optimized for the peak laser power needed for a successful bitflip in digital circuits. Such a modern LFI setup is extremely expensive, typically costing more than $100,000, and is only available to well-funded attackers.
Besides LFI, there are several light-induced interferences in the wild. Xenon Death Flash [Upt15] is an issue found in Raspberry Pi 2, wherein illuminating the circuit board with a camera flash causes the system to reboot. The light interfered with a voltage regulator in a bare-silicon package and caused the problem. Another example is Light Commands [SCR + 20] that silently injects voice commands to MEMS microphones with a low-power, modulated laser. The researchers identified that an ASIC inside the microphone package is one of the causes. The laser light reached the ASIC through the microphone's acoustic port and induced an electrical signal representing false audio accepted by the computer system as authentic audio.
The interesting gap between the above two attacks motivated our study. On the one hand, a conventional LFI needs an optimized high-power, short-pulse laser. On the other hand, ordinary camera flashes and laser pointers were sufficient to cause interference. This gap led us to the hypothesis that certain analog and timing circuits are more sensitive to light because they handle more variation in voltage and delay than digital circuits. If this hypothesis is correct, such light-sensitive targets open another direction of low-cost laser injection attacks [SA02,SH07,GGS17], extending the range of potential attackers from well-funded organizations to individuals with low-cost equipment.
Among many analog circuits, we focus on the ones using signal propagation delay, namely delay-sensitive circuits. They are common in cryptographic modules for realizing non-digital features using logic gates only, e.g., physically unclonable functions (PUFs) [Mae13], random-number generators [MM09], and on-chip sensors [HBB + 16]. In particular, we set the ring-oscillator PUF (RO-PUF) and the arbiter PUF (A-PUF) as our targets, suspecting that laser injection would circumvent implicit assumptions of some PUF threat models.
Preliminary
We briefly summarize the conventional works on LFI, PUF, and its attack.
Laser Fault Injection
Semiconductor circuits are inherently sensitive to incident light energy, which can cause soft errors similar to those generated by ionizing radiation [Hab65]. Skorobogatov and Anderson first exploited such light-induced errors to attack smartcards and microcontrollers [SA02]. The attack using a laser, i.e., LFI, has several advantages over other fault-injection methods such as clock and voltage glitching. In particular, LFI enables a more precise and stealthy attack by selectively illuminating a particular coordinate. Consequently, many research works began to thoroughly investigate LFI and its applications to many different circuits [KSV13,DFM + 11]. In the meantime, the industry has established the ecosystem for evaluating and certifying the resistance against LFI [Risb,Alpa,Joi20].
A parasitic photodiode explains the physical causality behind LFI [MFS + 18]. Without an electrical field on the gate terminal, a MOS transistor prevents a current flowing between the source and drain terminals. A reverse PN junction between the substrate and the highly doped regions contributes to this electrical isolation, and this PN junction acts as a parasitic photodiode under laser stimulation. When laser light reaches the junction, it generates electron-hole pairs by the photoelectric effect. The generation of these carriers in the presence of the built-in electric field causes a current flow between otherwise non-conducting transistor terminals. If this photocurrent disturbs a voltage signal to an intermediate level, it can result in a bitflip. Further details about the causality are discussed in Section 8.1.
Since digital circuits periodically refresh their electrical states at clock edges, the attacker needs to inject enough optical energy within a clock cycle to cause a bitflip. But if too much energy is injected, it can cause heat buildup and permanent damage to the device. Therefore short, high-power laser pulses are necessary for a successful attack against these digital circuits. For example, a 1064-nm single-mode laser in Riscure's Laser Station 2 emits a pulse as narrow as 2 nanoseconds, with its peak power reaching 4.6 watts [Risb].
Short-pulse lasers are on the cutting edge of optical engineering, requiring special techniques such as Q-switching [Risa] and optical amplification [Alpb], which significantly increases the instruments' cost. Breier et al. reported the cost of around €150,000 [BJ15]. Meanwhile, van Woudenberg et al. estimated the cost ranging from $50,000 to $150,000 [vWWM11]. Similarly, the Joint Interpretation Library (JIL), a working group organized for certifying cryptographic modules, categorizes the high-end laser station as specialized, rated between €10,000-200,000 [Joi20].
PUFs and their Application to Secure Key Storage
Physical Unclonable Functions (PUFs) are circuits that provide device-unique identifiers that are used in cryptographic modules [Mae13]. The key idea is to extract uniqueness from slight differences in each transistor (e.g., threshold voltage) due to manufacturing process variation. Researchers have been designing sophisticated circuits that efficiently harvest device-specific uniqueness with a variety of mechanisms. Some PUFs use digital components only and are available on FPGAs and semi-custom ASICs [GKST07]. Here, variation in signal propagation delay is frequently used to extract device-unique features using digital components.
Ring-Oscillator PUF (RO-PUF) [SD07]
The RO-PUF uses oscillation frequencies of ring oscillators as a source of uniqueness. Each logic gate has a different propagation delay due to several manufacturing variations such as transistor sizes and dopant density. The RO-PUF efficiently extracts such variation using oscillators. The RO-PUF has a set of ring oscillators and uses their relative frequencies for generating a state.
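A toy model of this mechanism (the sizes, delays, and pairwise readout below are illustrative assumptions; real designs select RO pairs and count edges with hardware counters). An ideal N-stage ring oscillator runs at f = 1/(2·N·t_d) for a per-stage delay t_d, so per-device delay variation translates directly into frequency variation:

```python
import numpy as np

rng = np.random.default_rng(7)

N_RO, N_STAGES, T_NOM = 8, 5, 100e-12    # 8 ROs, 5 inverters, ~100 ps/stage

# Manufacturing variation: each RO gets a slightly different stage delay.
t_d = T_NOM * (1 + rng.normal(0, 0.01, N_RO))
f = 1.0 / (2 * N_STAGES * t_d)           # oscillation frequencies

# Simple readout: compare neighboring oscillators to derive state bits.
state_bits = (f[:-1] > f[1:]).astype(int)
print(np.round(f / 1e9, 4), state_bits)  # frequencies in GHz, 7 state bits
```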
Arbiter PUF (A-PUF) [LLG + 05]
The A-PUF also uses propagation delays in logic gates but with another circuit. In an A-PUF, a step signal is sent to two distinct electrical paths with slight differences in propagation delay due to device variation. An arbiter circuit determines the faster path, which is used as a binary output. Using cascaded selectors, it configures the electrical paths using challenge bits.
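A toy sketch based on the standard additive linear delay model from the PUF literature (an abstraction, not this paper's own formulation): the challenge bits are mapped to parity features Φ, and the sign of the accumulated delay difference w·Φ decides the arbiter output:

```python
import numpy as np

rng = np.random.default_rng(11)
N = 64                                   # number of selector stages

# Per-stage delay differences; fixed by manufacturing variation.
w = rng.normal(0.0, 1.0, N + 1)

def apuf_response(challenge):
    """Arbiter output for one challenge under the additive delay model."""
    c = np.asarray(challenge)
    # phi[i] = prod_{j >= i} (1 - 2*c[j]); phi[N] = 1
    phi = np.append(np.cumprod((1 - 2 * c)[::-1])[::-1], 1)
    return int(w @ phi > 0)

print(apuf_response(rng.integers(0, 2, N)))
```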
SRAM PUF [GKST07, HBF07]
A 1-bit SRAM cell has two electrically stable states corresponding to the stored bit value. Once a cell moves to a stable state, it will stay there until a write operation overwrites it. When a chip is turned on, each SRAM cell starts from an unstable state and eventually converges to one of the two stable states. The destination is determined by manufacturing variation. Therefore, the SRAM PUF reads the SRAM's initial values and uses them as a PUF state [GKST07,HBF07].
Secure Key Storage Using PUF
A common PUF application is secure key management with a key encryption key (KEK) [Int20]. We denote a binary string from a PUF as the PUF state s. There are techniques for securely correcting errors in s, e.g., the fuzzy extractor [GKST07]. As a result, the PUF provides an error-free and device-unique key k_PUF, which stays within the chip during its lifetime. The system encapsulates a pre-shared key k using k_PUF:

$$c_k = \mathrm{Enc}(k_{\mathrm{PUF}}, k). \qquad (1)$$

The system generates c_k at the enroll phase and stores it in external non-volatile memory.
At each bootup, the system (i) generates k_PUF by calling the PUF, (ii) retrieves c_k from the non-volatile memory, and (iii) recovers k by decrypting c_k with k_PUF. The system finally provides a cryptographic service using k. These keys disappear when the device is powered down, providing security against static reverse-engineering attacks [TJ11,CSW16]. PUF-based key storage has several real-world applications. In particular, NXP Semiconductors' LPC55Sxx devices provide a set of APIs for realizing a KEK using an SRAM PUF [Sem19]. Other vendors use delay-sensitive PUFs for PUF-based key storage, e.g., the A-PUF [DZ04].
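A toy sketch of this enroll/recover flow (the XOR-pad "Enc" below is a dependency-free stand-in for a real authenticated cipher, used only to illustrate the key flow):

```python
import os, hashlib

def enc(key, msg):
    """Toy Enc/Dec: XOR with a key-derived pad (XOR is its own inverse)."""
    pad = hashlib.shake_256(key).digest(len(msg))
    return bytes(a ^ b for a, b in zip(pad, msg))

k_puf = os.urandom(16)       # error-corrected PUF key; never leaves the chip
k     = os.urandom(16)       # pre-shared application key

c_k = enc(k_puf, k)          # enroll: c_k goes to external non-volatile memory
assert enc(k_puf, c_k) == k  # bootup: recover k from c_k and the PUF key
```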
Zeitouni et al's Attack on SRAM PUF [ZOW + 16]
There are several side-channel and fault-injection attacks on PUF [Taj17]. In particular, Zeitouni et al. proposed a sophisticated attack on key storage using an SRAM PUF. The attack exploits the remanence effect: a phenomenon that an SRAM cell preserves its data for a short period after power is off [HBF07]. By exploiting the remanence effect, an attacker can partially control the PUF state.
In a normal case, the SRAM PUF generates a PUF state s PUF , which is secret from the attacker. The attacker can send a query q and obtain a response Dev[s PUF ](q). Here, Dev[·](·) abstracts a service using the PUF state; for example, the system recovers a pre-shared key k from s PUF as described in Section 2.2.1 and returns a ciphertext obtained by encrypting q with k for challenge-and-response authentication. Here, Dev[·](·) is assumed to be public. Figure 1 illustrates the attack process. First, the attacker writes zeroes to the target SRAM cells and resets the device for τ seconds. The normal case described above corresponds to a sufficiently long pulse, namely τ ∞ . In contrast, when the pulse is very short, namely τ 0 , the SRAM preserves the data across the reset by the remanence effect, and the PUF state becomes s 0 = 0. The attacker obtains Dev[s 0 ](q) accordingly. Then, the attacker sends a slightly longer pulse τ 1 , which results in an intermediate state s 1 that satisfies HW(s 0 ) ≤ HW(s 1 ), wherein HW(·) is the Hamming weight. The attacker repeats the above process by gradually increasing the pulse widths τ 0 < · · · < τ i < · · · < τ ∞ , and the corresponding PUF states satisfy 0 = HW(s 0 ) ≤ · · · ≤ HW(s i ) ≤ · · · ≤ HW(s ∞ = s PUF ). (2) In the meantime, the attacker obtains the corresponding set of responses Dev[s i ](q). If the increment of the pulse widths is sufficiently small, the neighboring states are very close, i.e., HW(s i ⊕ s i+1 ) becomes small. In this situation, the attacker can find s i+1 by exhaustively searching the neighbors of s i . By recursively repeating the process starting from s 0 = 0, the attacker eventually recovers the secret state s PUF . Algorithm 1 describes the process. Here, Finder realizes the neighbor search as described in Algorithm 2; the algorithm searches for a state corresponding to the output x given the previous state s. For each neighbor ŝ, the attacker emulates Dev and obtains x̂ = Dev[ŝ](q). Here, x̂ = x implies that ŝ is the desired one. Finder returns ⊥ when the search is unsuccessful.
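The incremental recovery of Algorithms 1 and 2 can be sketched in a few lines. Everything below is illustrative: Dev is a toy hash-based service, the 16-bit state and the chain of intermediate states are made up, and a real attack would obtain the responses from the remanence experiment rather than from simulation.

```python
import hashlib
from itertools import combinations

N_BITS = 16  # toy state size; the paper's PUFs use 256 bits

def dev(s: int, q: bytes) -> bytes:
    """Toy public service Dev[s](q): here simply a hash of the state and the query."""
    return hashlib.sha256(s.to_bytes(4, "big") + q).digest()

def neighbors(s: int, d: int = 1):
    """All states within Hamming distance d of s (including s itself)."""
    yield s
    for k in range(1, d + 1):
        for bits in combinations(range(N_BITS), k):
            yield s ^ sum(1 << b for b in bits)

def recover(responses, q, d_max=1):
    """Walk the chain of responses x_0, x_1, ... by neighbor search (the idea of Algorithms 1 and 2)."""
    s = 0                                # the shortest pulse forces the all-zero state s_0 = 0
    for x in responses:
        s = next((c for c in neighbors(s, d_max) if dev(c, q) == x), None)
        if s is None:
            return None                  # the increment of the pulse widths was too coarse
    return s

# Simulated measurement: intermediate states whose Hamming weight grows toward s_PUF.
q = b"challenge"
states = [0b00000, 0b00001, 0b00011, 0b00111, 0b10111]        # hypothetical s_0 .. s_PUF
print(hex(recover([dev(s, q) for s in states], q)))           # -> 0x17, the final (secret) state
```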
Proposed Method
We briefly summarize Redshift and discuss its advantages, followed by the threat model describing the attacker's accessibility.
Principle
Redshift is a laser injection attack targeting delay-sensitive circuits, such as oscillators and arbiter PUFs, by changing signal propagation delay with laser stimulation. This attack is different than conventional LFIs because it changes the target's behavior within the analog domain rather than causing pure digital faults [DFM + 11, JT12,KSV13]. Redshift is also different than conventional optical interferences [Upt15, SCR + 20] in that it exploits spatial selectivity by using a tiny laser spot focused with a microscope.
A concrete example of Redshift is the precise control of the frequency of ring oscillators. We measure two ASIC chips fabricated with 180-nm and 40-nm CMOS technologies ( Figure 2-(left) and -(right)). During the experiment, a cheap continuous-wave laser diode illuminates the target ring oscillator under a microscope. Figure 2 shows the linear relationship between the oscillation frequency (the vertical axis) and the injected laser power (the horizontal axis). The maximum laser power of 1.75 mW is several orders of magnitude lower than conventional LFI tools, and even less than the power of a laser pointer.
Advantages
Laser Injection Attack on Delay-Sensitive Circuits Redshift extends the target of laser injection attack to delay-sensitive circuits in contrast to the conventional LFIs targeting digital components for cryptography. Since delay-sensitive circuits are an essential analog building block, Redshift has many applications beyond PUFs. For example, Redshift can degrade random number generators that use propagation delay as a source of entropy [MM09]. Delay-sensitive circuits are commonly used for on-chip sensors, too. For example, an EM sensor uses a shift in oscillation frequency to detect a magnetic-field probe for the electromagnetic side-channel attack [HHM + 14]; manipulating the oscillation frequency can cause false positives and negatives in such sensors. Moreover, underclocking a system clock with a laser can result in conventional digital faults.
Stealthiness As shown in Figure 2, a low-power continuous-wave laser, too weak for conventional LFI, is sufficient for the attack. Redshift potentially evades the detection-based LFI countermeasures using on-chip sensors configured for the conventional pulse lasers. Those sensors compare the peak photocurrent with a configured detection threshold [NRV + 06, MFS + 18]. Hardware designers are motivated to configure the sensor with a high detection threshold [NRV + 06] to avoid false positives caused by environmental lights or cosmic particles [Hab65]. The threshold is likely set low enough to sense pulse lasers but too high to sense continuous-wave lasers due to the significant difference in peak power: Redshift needs several milliwatts only, which can be less than 1/1000 of the conventional pulse lasers [Risb]. We discuss how to improve on-chip sensors to detect Redshift without sacrificing the false-positive rate in Section 8.2.
Cheaper Setup
The setup for Redshift is much cheaper than the conventional laser stations [Risb,Alpa,vWWM11,BJ15]. This extends the potential attackers from well-funded organizations to individuals. In particular, this will lower the attack cost from specialized to standard in the Common Criteria certification scheme [Joi20]. Moreover, this makes the attack more stealthy in terms of the instruments' traceability because an attacker can improvise its Redshift setup using off-the-shelf components.
There are few previous works on cost-efficient optical attacks. The first LFI by Skorobogatov and Anderson [SA02] used cheap light sources such as a flashgun and a laser pointer and successfully attacked digital circuits. However, the target device was fabricated with a very old 1300-nm technology, and an attack using a laser pointer has become challenging as semiconductor chips become smaller and faster [GGS17]. As a result, recent works mostly use short-pulse lasers, as discussed previously. We empirically verified with a preliminary experiment that our continuous-wave setup cannot flip digital bits.
By following the above direction, Schmidt and Hutter [SH07] proposed to deliver laser light using an optical fiber instead of a microscope. Later, Guillen et al. used a flashgun combined with a single-lens optics [GGS17] to even eliminate a laser. These attacks can be even cheaper than Redshift because they do not require a microscope. These previous works aimed at achieving a high peak power using a cheap setup. In contrast, Redshift approaches the same problem by finding more light-sensitive targets. These two approaches are complementary and further optimizing the Redshift's cost using the previous techniques is open for further research. Meanwhile, the cost reduction using the previous approaches, cheaper optics and/or incoherent light source, comes at the cost of a larger spot size that makes the attack less stealthy.
Threat Model
Similar to conventional LFI, Redshift assumes a local attacker who can physically access the target chip and apply laser stimulation. Our attacks on PUFs additionally follow the model by Zeitouni et al. [ZOW + 16]: the target chip has PUF-based key storage and provides a cryptographic service using a pre-shared key to which the attacker can send a query. The attacker aims to recover the secret protected by the PUF.
Algorithm 3 LIE: Getting a device response while shining a laser
Require: Laser current j and query q
Ensure: Response x
1: Set the laser current to j
2: Invoke PUF state generation; the PUF state becomes s
3: Get a response x ← Dev[s](q)
4: return x
The attacker measures the target device with the laser-injection experiment LIE in Algorithm 3. First, the attacker keeps illuminating the target chip with laser power specified by a diode current j (line #1) while the PUF generates a state s (line #2). Finally, the attacker is able to send a query q to the chip's legitimate interface and retrieve x ← Dev[s](q) at line #3. Here, Dev[s](q) abstracts the chip's cryptographic service as discussed in Section 2.3. Here, the attacker is assumed to know the details about Dev[·](·), in the same way as in the previous attack by Zeitouni et al.
The availability of Dev[·](·) follows Kerckhoffs's principle, and PUF-based key storage is designed to be secure without hiding it. There is open PUF hardware for which this assumption is reasonable [Tri17]. Meanwhile, Dev[·](·) can be unavailable in commercial chips. For example, NXP LPC55S69 uses a proprietary scheme for its SRAM-based key generation [Sem19]. In this case, the attacker should pay the cost of reverse-engineering Dev[·](·) in advance. We note that considering an attacker with reverse-engineering capability would be reasonable because protection against reverse engineering is a significant benefit of PUF-based key storage [Mae13].
Experimental Setup
This section provides the experimental setup and measurement procedure used throughout this paper.
Optics We use a low-cost laser module in Figure 3 composed of a laser diode, a collimation lens, and a C-mount adapter in the optical cage system. The idea is to cheaply upgrade a simple microscope, categorized as standard [Joi20], by attaching the laser module to the standard C-mount camera port. The total cost of the module is less than $500. The module uses a 520-nm green laser diode in the standard TO56 package that can emit up to 110 mW (Osram PLT5 520B [Osr21]), available at less than $50 from a popular online electronics retailer. We use a Wraymer RM-5400T microscope with a manual XY stage we had in our laboratory, which was roughly $4,000 at purchase. Figure 4-(left) shows the laser module installed in our microscope.
Managing the Laser Power
The block diagram in Figure 4-(right) shows the components and connections used to control the laser power in a programmable way. We manage the laser power with a Thorlabs LDC202C laser driver that regulates the laser diode current. We first characterize the relationship between the diode current and the emitted optical power (the I-L curve) to translate an amount of current to laser power. For the preliminary characterization, we measure the optical power with a laser power meter (Thorlabs PM100D with the S121C sensing head) under the objective lens. The DC current output of the laser driver is controlled by a DC voltage from a function generator (Rigol DG1022Z). During our experiments, the laser power is controlled through the function generator.
Focusing and Magnification In general, a higher magnification is advantageous in attacking the target with smaller laser power. That is because the power density in the laser spot increases with the magnification ratio. As a drawback, however, a higher magnification requires more precise aiming. Moreover, at high magnification, increasing the current by the minimum unit can cause too large a change in the target. Considering the above trade-off, we use the minimum magnification needed to cause a sufficient change with a laser power around 5 mW, the power of a laser pointer. After determining an objective lens, we minimize the laser spot by changing the distance between the diode and the collimation lens using the screw shown in Figure 3. We use a camera on the microscope during the adjustment; see Figures 6 and 8 for the microscope images with laser spots. The laser spot size is inversely proportional to the magnification ratio: the spot diameters are 14.9, 7.7, and 3.9 micrometers with the ×5, ×10, and ×20 lenses, respectively. These spot sizes are much larger than the top metal wires in the target chips, and all our experiments succeeded without intentionally widening the laser spot. Doubling the magnification quadruples the optical energy received at a light-sensitive region covered by the spot. The laser beam does not go through an eyepiece, and its magnification does not affect the results.
Targets We evaluate Redshift on both custom ASIC chips and off-the-shelf microcontrollers. The first set of targets are RO-PUFs and A-PUFs on ASIC chips fabricated with 180-nm and 40-nm CMOS technologies. We use custom chips to perform a white-box analysis, since we know the implementation details about our RO-PUFs and A-PUFs (shown in Sections 5 and 6). We use two chips of each PUF to compare the effectiveness of laser injection in different fabrication technologies, as they have functionally-equivalent circuits but different feature sizes. Our setup illuminates a semiconductor die from the top; we access the die by removing a glued top cover from a ceramic package. The light reaches the transistors after passing through the top metal layers, as there are 4 and 6 metal layers in the 180-nm and 40-nm chips, respectively. The chips can communicate with a PC through evaluation boards. The RO-PUF has an analog debug port that directly outputs the oscillating waveform, where an oscilloscope (Keysight DSO3034T) monitors the signal during the experiments.
The second set of targets are the clock oscillators on off-the-shelf microcontrollers. We use the three microcontrollers from different vendors available on the NewAE UFO target boards [Inc19a,Inc19b,Inc18]: NXP LPC55S69 (Cortex-M33) [Sem21], Microchip SAM L11 (Cortex-M23), and STMicroelectronics STM32F4 (Cortex M4). These devices are used to perform a black-box analysis and verify the feasibility of Redshift on real devices. The laser is injected from the top of the device after decapsulating the quad flat packages; we outsourced the decapsulation for roughly $200 per chip. The target chips are configured to output their clock signals to a GPIO pin. We directly use the 16-MHz internal RC oscillators with SAM L11 and STM32F4. For LPC55S69, on the other hand, we use a 12-MHz signal generated by dividing its 192-MHz free-running oscillator (FRO). Similar to the ASICs, we monitor the GPIO pins with the oscilloscope during the experiments.
Measurement Algorithm 4 shows the experimental procedure of repeating a unit measurement LIE (see Algorithm 3) while changing the laser power. We first fix an arbitrary query q (line #1) and get a legitimate response x PUF without laser illumination (line #2). Then, we apply laser stimulation with the diode current swept from j min to j max in steps of j step . As a result, we examine j i = j min + i × j step for i ∈ I = {0, 1, · · · , (j max − j min )/j step }. For each current value j i , we repeat the same measurement r max times, i.e., for r ∈ R = {0, 1, · · · , r max − 1} (lines #5-7). We denote the r-th measurement using the laser current j i by x r i (line #6). The algorithm finally returns the list of faulty PUF responses [x r i ] for i ∈ I and r ∈ R (line #11). We choose the following parameters unless otherwise noted:
• j min = 34 mA: the laser diode's threshold current,
• j step = 0.02 mA: the laser driver's accuracy limit, i.e., ±0.01 mA,
• r max = 25: a sufficiently large number for the decimated experiment in Section 7.2.
Experiment: Oscillator and RO-PUFs
To show the effectiveness of Redshift, we start by controlling the delay within oscillators. First, we manipulate the frequency of a ring oscillator with laser stimulation. Second, we evaluate the same oscillator as an RO-PUF and show that we can manipulate the PUF states. Third, we replicate the frequency manipulation to microcontrollers.
RO-PUF Design
The target RO-PUF comprises many independent oscillators, two counters, and an arithmetic comparator, as shown in Figure 5. Each ring oscillator is composed of two inverters and one NAND gate. The current source on the top roughly specifies the oscillation frequency [SD07], which we configure to be around 30 MHz. Figure 6 shows the 180-nm and 40-nm RO-PUFs with laser spots. We directly measure the oscillation using the oscilloscope during the experiment, as discussed in Section 4.
Algorithm 4 Measuring the target device while changing the laser power
Require: The minimum current j min , the maximum current j max , the current step j step , and the number of iterations r max
Ensure: A query q, the true PUF response x PUF , and the list of faulty responses [x r i ]
1: Fix an arbitrary device query q
2: x PUF ← LIE(0, q)  A response with no laser injection
3: i ← 0, j 0 ← j min
4: while j i ≤ j max do
5:   for r ← 0 to r max − 1 do  Repeat the same measurement r max times
6:
The RO-PUF compares the frequencies of two oscillators and generates a 1-bit state for each pair. The circuit measures the frequencies using counters that detect the number of edges as the signal oscillates. We use one reference oscillator RO ref and 256 target oscillators RO i for i ∈ {0, 1, · · · , 255}. We assume that the PUF generates the i-th bit by b i = 1 if f freq (RO ref ) > f freq (RO i ) and b i = 0 otherwise, wherein f freq (RO) represents RO's oscillation frequency. Finally, the RO-PUF generates a 256-bit secret state by concatenating these bits, i.e., s = b 0 ||b 1 || · · · ||b 255 . This state s is directly output by the target chip.
Experiment 1: Changing Oscillator Frequency with LFI
First, we examine how laser stimulation affects the oscillation frequency. We locate a light-sensitive region by scanning the chip surface with a manual XY stage while monitoring RO ref 's frequency. After locating a coordinate that indicates light sensitivity, we gradually increase the laser power until the oscillator stops working, i.e., observing a flat line. Figure 2 shows the relationship between the injected laser power (the horizontal axis) and the oscillation frequency (the vertical axis); the frequency decreases linearly with the injected optical power. In the 180-nm oscillator, a ×5 magnification is sufficient to decrease the frequency from 30.7 MHz to 4.8 MHz with only 1.7 mW. The 40-nm oscillator is less sensitive because of the smaller dimensions and more metal layers, so the ×5 objective lens was insufficient. After increasing the magnification to ×10, however, the frequency changes from 34.3 to 4.1 MHz with the same laser power of 1.7 mW.
Experiment 2: Changing RO-PUF's Secret State with LFI
The frequency shift by laser stimulation impacts the bias between 0 and 1 in the RO-PUF's state. We denote the oscillation frequency by f and its probability distribution by Prob(f). Figure 7 summarizes the relationship between HW(s) (the vertical axis) and the power of the laser aimed at RO ref (the horizontal axis). The results clearly show that HW(s) approaches 0 as we increase the laser power. HW(s) became zero using 0.3 mW with the ×5 lens for the 180-nm chip and 0.6 mW with the ×10 lens for the 40-nm chip. At this point, RO ref 's frequency is lower than any of the RO i . These laser powers are significantly smaller than the power limits that cause the oscillator to fail. By using this relationship between optical power and the Hamming weights, we can recover the PUF state s PUF as shown in Section 7.
We note that the attacker can locate RO ref , without the analog debug port nor the PUF output s, by repeatedly invoking Algorithm 3 while scanning the chip with a laser. If we observe several different outputs x = Dev[s](q) affected by the laser power, it is likely the coordinate for RO ref . The attacker can distinguish it from a laser on any other oscillator RO i , which causes at most 1-bit error in s.
Experiment 3: Clock Oscillators on Microcontrollers
To show how Redshift affects real devices, we evaluate the on-chip clock oscillators on the three microcontrollers discussed in Section 4. Like the ring-oscillator experiment, we evaluate the light-frequency characteristics by monitoring the clock signals on a GPIO pin while changing the laser power. Figure 8 (a)-(c) show the decapsulated chips with the laser spots. Then, we obtain the light-frequency characteristics with the same procedure as in Section 5.2. The ×5 objective lens was sufficient for the SAM L11 and STM32F4. Meanwhile, the LPC55S69 was significantly less sensitive, and the ×20 lens was necessary. Figure 8 (d)-(f) are the light-frequency characteristics, which show a frequency decrease similar to the results on the custom ring oscillators. These results verify that the light-induced frequency shift is a common phenomenon that can affect many different devices. The results also show that Redshift still works with modern chips, e.g., the LPC55S69 released in 2019.
The SAM L11 and STM32 are more sensitive than our 180-nm RO-PUF; less than 0.12 mW is sufficient to shift the frequency from 16 to 5 MHz with the ×5 lens. The light-frequency graphs of these chips are relatively non-linear, but they are still monotonic. In contrast, the LPC55S69 is much less sensitive, and 6.5 mW is necessary for shifting 12.0 to 9.1 MHz even with the highest magnification (×20). Even though it is less sensitive, its light-frequency graph is highly linear, allowing for precise oscillator control. Although explaining the exact reason for this lower sensitivity requires details about the internal design, the rectangular metal patches on the top layer can be one reason. To evade the patches, we put the laser spot on a narrow space between them (see Figure 8), potentially preventing us from aiming the laser at the optimal coordinate.
Experiment: Arbiter PUF
We also verify Redshift on arbiter PUFs, which are another delay-based PUF with a different measurement principle.
A-PUF Design
Our A-PUF circuit in Figure 9 compares the slight difference in propagation delay between two configurable delay paths. The paths have 128 stages and accept a 128-bit challenge. Each stage is composed of a pair of selectors that changes the path depending on a challenge bit. The arbiter decides the faster path using a NAND-based SR latch; Figure 9 shows the arbiter's transistor-level internal structure for the later discussion in Section 8.1.
We use the A-PUF as a weak PUF generating a 256-bit state. We first determine 256 random challenges, namely w i ∈ {0, 1} 128 for i ∈ {0, 1, · · · , 255}. Then, we get 256 bits by feeding these challenges to the A-PUF. The concatenated 256-bit word is the secret state s. Similar to our RO-PUF, the chip directly outputs s.
Figure 9: Our A-PUF design. The arbiter is shown at the transistor level with the laser-induced photocurrent, which will be discussed in Section 8.1.
Experiment 4: Changing A-PUF's Secret State with LFI
We repeat the experiments from the previous section, only now targeting our A-PUF. We first explore the light-sensitive region by scanning the chip surface while monitoring HW(s). We locate two light-sensitive regions in the arbiter: one increases and another decreases the Hamming weight HW(s). We aim the laser beam at the former coordinate and gradually increase the laser power while measuring the corresponding HW(s). Figure 10 shows the relationship between the laser power and the Hamming weight HW(s) while applying laser stimulation on the arbiter circuit. Figure 10-(left) and -(right) show the results with the 180-nm and 40-nm chips, respectively. Similar to the previous experiment with RO-PUF, HW(s) almost monotonically decreases as we increase the laser power. We use the ×5 objective lens in both the 180-nm and the 40-nm A-PUFs. The 180-nm A-PUF is highly sensitive; 0.3 mW is sufficient to achieve HW(s) = 0. Similar to the previous experiment, the 40-nm A-PUF is less sensitive. However, the 40-nm A-PUF reaches HW(s) = 0 with 4.6 mW using the ×5 lens.
The above results show that we can manipulate the A-PUF state through laser stimulation in the same way as the RO-PUF state. Redshift is thus applicable to another class of delay-sensitive circuits beyond ring oscillators.
State Recovery Attack
We recover the PUF's secret state s PUF using the data collected in our experiments. For analysis, we extend Zeitouni et al.'s attack [ZOW + 16] to handle unstable bits in our PUFs.
Extension of Zeitouni et al.'s Search Algorithm
Around 5-10% of the bits in our RO-PUFs and A-PUFs are unstable (see Appendix A), which is a problem for the previous state recovery algorithm (Algorithm 1). Figure 11-(left) and (right) illustrate the state recovery without and with unstable bits, respectively. In the figure, s 0 , · · · , s end are the intermediate PUF states. The arrow between two states means that one is reachable from another by neighbor search. Algorithm 1 successfully reconstructs each state s i+1 from the previous state s i when measurements are stable (Figure 11-(left)), but Algorithm 1 fails with unstable measurements (Figure 11-(right)) for two reasons. First, HW(s i ) ≤ HW(s i+1 ) is not always true when there are measurement errors. Second, there are unhandled branches and dead ends (e.g., s 2 , s 4 , and s 5 ) within the search space.
To address these issues, we instead use Algorithm 5, which is an extension of Algorithm 1. Algorithm 5 takes the measured data (q, x PUF , and [x r i ] from Algorithm 4) and the maximum distance d max in neighbor search, and returns the secret PUF state s PUF corresponding to the correct PUF response x PUF . The key idea is to compare the emulated output x with a set of responses X (line #11), instead of a particular response x i−1 in Algorithm 1. The algorithm initializes X with all the measured responses [x r i ] (line #1) and removes a particular element x ∈ X if the corresponding state is found (line #13).
We use another set C to keep track of the discovered states.
In each iteration, the algorithm first fixes a base state s and exhaustively checks its neighbors within the distance d max (line #7). For each candidate s′, the algorithm emulates Dev and obtains x′ = Dev[s′](q) (line #8), which is then compared with the elements in X (line #11). If x′ is found in X, i.e., Dev[s′](q) ∈ X, we add s′ to C and remove x′ from X (lines #12 and 13). After the neighbor search, the algorithm continues by choosing a new base from C. We prioritize the candidate in C with the lowest Hamming weight (line #4). If the distance between neighboring states is at most d max , we will eventually reach the final state s PUF corresponding to x PUF ; otherwise, the algorithm returns a failure.
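A compact sketch of this extended search is given below. It follows the structure described above (candidate set C, unmatched response set X, lowest-Hamming-weight-first base selection) but is not the authors' C++ implementation; dev is passed in as a user-supplied emulation and the seen set is an added bookkeeping detail.

```python
from itertools import combinations

def neighbors(s: int, n_bits: int, d_max: int):
    """All states within Hamming distance d_max of s (including s itself)."""
    yield s
    for k in range(1, d_max + 1):
        for bits in combinations(range(n_bits), k):
            yield s ^ sum(1 << b for b in bits)

def recover_state(x_puf, measured_responses, q, dev, n_bits=16, d_max=1):
    """Sketch of Algorithm 5: a search that tolerates unstable measurements.
    x_puf: the correct (laser-free) response; measured_responses: all faulty responses [x_i^r];
    dev(state, query): an emulation of the chip's public service."""
    X = set(measured_responses)             # responses not yet matched to a state
    C = {0}                                  # discovered base states; start from the all-zero state
    seen = {0}
    while C:
        s = min(C, key=lambda t: bin(t).count("1"))   # prefer the lowest Hamming weight (line #4)
        C.remove(s)
        for cand in neighbors(s, n_bits, d_max):      # exhaustive neighbor search (line #7)
            if cand in seen:
                continue
            x = dev(cand, q)                          # emulate Dev (line #8)
            if x == x_puf:
                return cand                           # reached the state behind the true response
            if x in X:                                # a measured response explains this candidate
                seen.add(cand)
                C.add(cand)
                X.discard(x)
    return None                                       # d_max too small for this measurement density
```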
State Recovery Experiment
We apply Algorithm 5 to the PUF responses obtained in Sections 5 and 6 to recover the 256-bit PUF states s PUF with the simplest Dev, given by Eq. 4. We discuss a more elaborate Dev in Section 7.3. Although Eq. 4 does not reflect reality, it is sufficient for evaluating the computational effort, i.e., the number of Dev emulations (line #8). Note that since the PUF output s is supposedly secret in Algorithm 5, we use it only for checking hypothetical states. We implemented the search program in C++ and ran it on a mid-range CPU (AMD Ryzen 5 2600). Table 1 summarizes the results:
• The number of unique states #X observed after the entire measurement,
• The number of states #States examined for the neighbor search,
• The minimum search distance d̂ max needed for a successful attack, and
• The total CPU time measured using the clock function in the C standard library.
The table also shows j max , the maximum laser current needed for observing HW(s) = 0, which is necessary for a successful attack. The algorithm finished within a minute for the 40-nm RO-PUF, 180-nm A-PUF, and 40-nm A-PUF because the measurement was dense enough and d̂ max = 1. The 180-nm RO-PUF required a larger neighbor search distance of d̂ max = 3, but it still finished within an hour.
The experimental results show that we can fully recover a secret PUF state by running Algorithm 5. Even in the most challenging case, i.e., the 180-nm RO-PUF, the total number of Dev emulations is C(256, 3) × 3,116 ≈ 2^33.0. The search space C(256, d̂ max ) grows combinatorially with d̂ max and quickly becomes impractical. Therefore, a successful attack requires a smaller d̂ max with a finer measurement, either by reducing j step or increasing r max . We evaluate the impact of these parameters on d̂ max by decimating our 40-nm RO-PUF dataset. Table 2 summarizes the distance d̂ max for various j step and r max . As expected, d̂ max decreases with a smaller j step and a larger r max . With a sufficiently dense measurement, we can eventually achieve d̂ max = 1, in which case running Algorithm 5 is trivial even with a larger PUF state.
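Assuming the search space is the binomial coefficient reconstructed above, the quoted effort can be checked with a two-line calculation (3,116 is the #States value for the 180-nm RO-PUF taken from Table 1).

```python
import math

# Neighbor search within Hamming distance 3 of a 256-bit state,
# repeated for the 3,116 base states examined (Table 1, 180-nm RO-PUF).
emulations = math.comb(256, 3) * 3116
print(f"{emulations:,} Dev emulations ~= 2^{math.log2(emulations):.1f}")
# -> 8,611,128,320 Dev emulations ~= 2^33.0
```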
Error Correction and Cryptography
We verify the state recovery attack with a more elaborate Dev[·](·) that is described by Algorithm 6, which includes error correction and a cryptographic service. It uses a simple error-correction scheme with a repetition code and bit selection [DGSV15], similar to the one used by Zeitouni et al. [ZOW + 16].
Following the previous work [ZOW + 16], we optimize the recovery algorithm by searching k PUF instead of the raw states s 0 , · · · , s r−1 . In other words, we ignore the error correction by targeting the value after error correction. We can easily extend Algorithm 5 for the optimization by (i) setting the 128-bit k PUF as the target state s and (ii) skipping the lines #1-4 in Algorithm 6 while emulating Dev during the attack.
Algorithm 6 Dev with an error correction and a cryptographic service
Require: The raw PUF outputs s 0 , · · · , s r−1 ∈ {0, 1}^M, the encrypted key c k , a query q, and the indices of the selected bits l 0 , · · · , l N−1
Ensure: A response x ∈ {0, 1}^N
1: s ← Vote(s 0 , · · · , s r−1 )  Bitwise majority voting
2: for i = 0, · · · , N − 1 do
3:   Bit selection: t j represents the j-th bit of a word t
4: end for
5: k ← Enc −1 k PUF (c k )
6: x ← Enc k (q)
7: return x
Table 3 summarizes the experimental results for recovering k PUF , which show smaller #X, #States, and d̂ max compared with the previous experiment. The attack is easier because the search space is reduced from 256 to 128 bits and there are more stable bits thanks to the error correction. As a result, the execution times are faster even though the new Dev involves two AES calls. All the searches finished within 1 second. The most challenging case is the 180-nm RO-PUF with d̂ max = 2, which finished in 0.931 seconds. Dev occupies only ≈10% of the total execution time; the AES encryption and decryption with AES-NI are faster than the other utility functions for data structures.
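A minimal sketch of the flow in Algorithm 6 is shown below. The majority vote and bit selection follow the listing; the cipher is again a toy XOR stream standing in for the AES calls, and the packing of the selected bits into a key is an illustrative choice, not the authors' implementation.

```python
import hashlib

def vote(raw_states):
    """Bitwise majority vote over r raw PUF outputs (each a list of 0/1 bits) - line 1."""
    r = len(raw_states)
    return [1 if sum(bits) * 2 > r else 0 for bits in zip(*raw_states)]

def toy_enc(key: bytes, msg: bytes) -> bytes:
    """Toy XOR-stream cipher (its own inverse), standing in for the AES calls on lines 5-6."""
    stream = hashlib.sha256(key).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

def dev(raw_states, c_k, q, selected):
    """Sketch of Algorithm 6: error correction, key recovery, and a response to query q."""
    s = vote(raw_states)                          # line 1: bitwise majority voting
    k_puf_bits = [s[l] for l in selected]         # lines 2-4: bit selection with indices l_0..l_{N-1}
    k_puf = bytes(                                # pack the selected bits into a key (illustrative)
        sum(b << j for j, b in enumerate(k_puf_bits[i:i + 8]))
        for i in range(0, len(k_puf_bits), 8)
    )
    k = toy_enc(k_puf, c_k)                       # line 5: k = Enc^{-1}_{k_PUF}(c_k)
    return toy_enc(k, q)                          # line 6: x = Enc_k(q)
```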
We finally discuss how error correction impacts the state recovery attack. Error correction introduces difficulty in two ways. First, the state size before error correction is larger. Second, it extends the distance between the neighboring states. That is because Dev will not generate a different output until a sufficient amount of difference is accumulated [ZOW + 16]. Searching the state after error correction, as we did in our experiment, is a general strategy to improve the attack. However, this technique becomes less efficient with a larger codeword. We consider generating an N-bit key using a κ-bit codeword N/κ times in parallel. In this case, the number of adjacent states after error correction is 2^κ · N/κ. The bitwise repetition code in our experiment is the most efficient case with κ = 1. However, the complexity grows exponentially with κ and eventually becomes infeasible as κ increases.
Discussion
This section includes the discussions on the causality and possible countermeasures, followed by the related works.
Causality
We investigate how laser injection influences the transistors based on the conventional parasitic-photodiode model. Figure 12 shows an inverter within a ring oscillator; Figure 12-(left) and -(right) show the current paths during high-to-low and low-to-high transitions, respectively. The gray arrows are the legitimate current paths. Meanwhile, the dashed arrows are the additional current paths caused by laser-induced photocurrent; the laser stimulation is modeled by the addition of extra current sources.
Oscillators and RO-PUF
We focus on the high-to-low transition in Figure 12-(left) for simplicity. Without a laser injection, all the PMOS current I 1 goes to the load capacitance, and thus I 2 = I 1 . With a laser injection, part of I 1 goes to ground through the photocurrent I 3 causing I 2 < I 1 . A smaller I 2 increases the low-to-high transition delay because the inverter needs more time charging the load capacitance with a smaller I 2 . The high-to-low transition delay, shown in Figure 12-(right), increases in the same way. Finally, the longer propagation delay in each inverter results in a slower oscillation.
A-PUF Figure 9 shows the transistor-level description of the arbiter with the current sources by laser stimulation. The arbiter's frontend is a pull-down network composed of the transistors (Tr T and Tr B ) and the capacitors (Cap T and Cap B ). The capacitors are precharged by negating the reset before sending a step signal; the figure shows the transistor states after the precharge. The capacitors preserve their charges because both Tr T and Tr B are turned off. At the evaluation phase, a step signal propagates through the delay paths and eventually turns Tr T (resp. Tr B ) ON. Then, the capacitor Cap T (resp. Cap B ) starts to discharge through the transistors. When the capacitor voltage V T (resp. V B ) reaches a threshold, the backend cross-coupled inverters converge to a stable state representing the faster path. Figure 9 shows a case where a laser illuminates Tr B only. The photocurrent I 7 discharges Cap B , making the delay in discharging Cap B shorter because a part of the charge is already lost when the step signal finally arrives. This biases the arbiter toward the bottom path B, increasing the population of the corresponding bit value. Laser stimulation on Tr T (cf. Tr B ) causes the opposite bias. This explains the two light-sensitive coordinates observed in Section 6: one increases 0 and another increases 1.
Extension to Other Analog Circuits
We discuss the potential to extend Redshift to other analog circuits beyond the delay-sensitive circuits that were the focus of this paper. The simple principle of the parasitic-photodiode model (laser stimulation generates a current in an otherwise insulating transistor) is still useful for explaining Redshift, as discussed above. Therefore, the model should be useful for predicting how Redshift affects a target analog circuit in advance. For an experimental verification, sweeping the laser power while monitoring an output will generally be useful. An amplitude-modulated laser will be necessary for targets that may reject DC signals, e.g., a microphone [SCR + 20]. Once a problem is experimentally verified, we can use the parasitic-photodiode model in a SPICE simulation to make a quantitative analysis and evaluate the effectiveness of a countermeasure.
Countermeasures
On-Chip Sensors The conventional sensor-based countermeasures should work in principle if a detection threshold is properly configured for Redshift. However, as discussed in Section 3.2, simply lowering the detection threshold can prohibitively increase the false positives caused by environmental lights or cosmic particles [Hab65, NRV + 06]. Instead, we can improve the false-positive rate by integrating (averaging) the sensors' output over time. This technique is effective because Redshift is sustained for a longer period. We can achieve this by adding an integrator circuit after a conventional LFI sensor. Alternatively, an oscillator-based sensing scheme naturally achieves such integration. He et al. proposed to use a ring oscillator to detect laser pulses [HBB + 16]; it can be extended for efficiently detecting Redshift.
Detecting a Wrong PUF Key. Detecting a wrong PUF key and terminating the cryptographic service Dev[s i ](q) [ZOW + 16] can prevent the state-recovery attack. We can achieve this with recalculation. At enrollment, we encrypt a constant value, such as 0, to get the corresponding ciphertext c 0 = Enc k PUF (0) and store it on a non-volatile memory along with the encapsulated pre-shared key c k in Eq. 1. After recovering the PUF key k PUF on each bootup, the system recalculates c 0 = Enc k PUF (0) and compares it with the stored c 0 . An unsuccessful comparison means k PUF = k PUF , and we can terminate the following sensitive operations.
Changing a Reference Oscillator
Our RO-PUF attack exploited the fact that a single reference oscillator RO ref contributes to every bit; using a different reference oscillator for each bit removes this single point of attack. To attack this scheme, the attacker should change the laser coordinate for each bit, which significantly increases the difficulty of the measurement.
Obfuscation As discussed in Section 3.3, hiding Dev with a proper hardware obfuscation scheme [FBT17] will prevent the attacker from running the state-recovery algorithm in Section 7.
Related Works
Laser-Assisted Device Alteration (LADA) [RE03, BHK13] LADA is an LSI reliability analysis for isolating a failure mechanism typically in a digital circuit. LADA injects a continuous-wave laser on a target transistor while checking the Shmoo plot, the pass/fail test with different frequencies and voltages. If the injection changes the plot, the target transistor has a small operational margin and is a potential failure cause [RE03]. Both Redshift and LADA change transistor behavior with continuous-wave laser injection. Boit et al. mentioned LADA as a modern diagnosis tool at FDTC 2013 [BHK13], but there is no concrete attack so far, as far as the authors are aware. Also, LADA usually targets logical pass/failure in a digital circuit, not in delay-sensitive circuits.
Conclusion
We proposed a new laser injection attack on delay-sensitive circuits that are highly sensitive to light. The attack is feasible with a low-power, continuous-wave laser, which significantly reduces the attack cost and makes the attack stealthier against sensor-based countermeasures. We experimentally verified that we could manipulate the frequency of oscillators by changing the injected laser power on our custom ASICs and the off-the-shelf microcontrollers. An attacker can leverage the above phenomenon to manipulate the PUF states of our ring-oscillator PUFs. A similar state manipulation is possible on arbiter PUFs, showing that the proposed attack can be extended beyond oscillators. Our recovery algorithm, extended from Zeitouni et al.'s attack, successfully recovered secret information by exploiting the manipulated PUF states. There are several interesting problems to explore in the future. Extending Redshift to other applications and analog circuits can be an interesting challenge. Also, the causality discussed in Section 8.1 needs further verification through circuit simulation and controlled experiments.
A Evaluation of Unstable Bits
This section describes how we evaluated the unstable bits in the PUF outputs. We consider a target bit stable if we get the same bit value for every r ∈ R; otherwise, we consider it unstable. We counted the number of unstable bits as follows. For each index i that represents the laser current, we count the number of unstable bits as h(i) = HW( OR_{r,r′∈R, r≠r′} XOR(s r i , s r′ i ) ), wherein OR and XOR represent bitwise operations over 256-bit words. Table 4 summarizes the average and standard deviation of h(i) over i, i.e., the number of unstable bits in our measurements; roughly 5-10% of the 256 bits are unstable.
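For reference, the unstable-bit count h(i) can be computed directly with NumPy when the repeated states are stored as 0/1 arrays instead of packed 256-bit words; the example data below are synthetic.

```python
import numpy as np

def unstable_bits(states):
    """h(i) for one laser current: OR of pairwise XORs over the repeats, then the Hamming weight.
    states: array of shape (r_max, 256) holding the repeated PUF states s_i^r as 0/1 values."""
    diff = np.zeros(states.shape[1], dtype=bool)
    for r in range(len(states)):
        for rp in range(r + 1, len(states)):
            diff |= states[r].astype(bool) ^ states[rp].astype(bool)
    return int(diff.sum())

# Example: 3 repeats of a 256-bit state where two positions flip between repeats.
rng = np.random.default_rng(0)
base = rng.integers(0, 2, 256)
repeats = np.stack([base, base, base])
repeats[1, 7] ^= 1
repeats[2, 42] ^= 1
print(unstable_bits(repeats))  # -> 2
```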
"Engineering",
"Physics",
"Computer Science"
] |
Maximum likelihood-based estimation of diffusion coefficient is a quick and reliable method for analyzing estradiol actions on surface receptor movements
The rapid effects of estradiol on membrane receptors are in the focus of the estradiol research field; however, the molecular mechanisms of these non-classical estradiol actions are poorly understood. Since the lateral diffusion of membrane receptors is an important indicator of their function, a deeper understanding of the underlying mechanisms of non-classical estradiol actions can be achieved by investigating receptor dynamics. The diffusion coefficient is a crucial and widely used parameter to characterize the movement of receptors in the cell membrane. The aim of this study was to investigate the differences between maximum likelihood-based estimation (MLE) and mean square displacement (MSD) based calculation of diffusion coefficients. In this work we applied both MSD and MLE to calculate diffusion coefficients. Single particle trajectories were extracted from simulation as well as from α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptor tracking in live estradiol-treated differentiated PC12 (dPC12) cells. The comparison of the obtained diffusion coefficients revealed the superiority of MLE over the generally used MSD analysis. Our results suggest the use of MLE for diffusion coefficient estimation because it performs better, especially for large localization errors or slow receptor movements.
The derivation of the diffusion coefficient from mean square displacement (MSD) curve fitting (Matysik and Kraut, 2014) is a basic and frequently used method because it provides consistent results despite the statistical shortcomings of MSD analysis (Saxton, 1997). The main problem with MSD analysis is that the overlapping time-averaging calculations in MSD curves from a single trajectory generate complex noise characteristics (Grebenkov, 2011; Qian and Sheetz, 1991). This results in an asymmetric distribution of the estimated diffusion constant around the true value, which makes the interpretation of the results difficult (Yu, 2016). Another problem is that MSD cannot handle the uncertainty of the localization properly; in other words, MSD requires the real coordinates of the particle to provide correct results. However, this is not the case in practice, because observed trajectories are compromised by both the localization error (Martin et al., 2002) and the motion blur effect (Savin and Doyle, 2005).
Maximum likelihood-based estimation (MLE) has already been successfully applied to estimate diffusion coefficients from single-particle tracking experiments (Shuang et al., 2013). MLE is one of the most frequently used methods in statistics to estimate arbitrary parameters of theoretical models describing the observed event by using recorded data. Changing the model's parameters alters the probability of the recorded dataset. MLE is an optimization method that estimates the set of parameters that provides the maximal probability of the observed data. The MLE has asymptotically optimal properties; it determines the correct distribution of diffusion coefficients for a homogeneous set of particles localized within a finite camera integration time and in the presence of localization error (Zacks, 1971). A comprehensive study with a detailed comparison of MSD and MLE methods was recently published (Bullerjahn and Hummer, 2021), which concluded that the maximum likelihood estimator has several advantages compared with other diffusion coefficient calculation methods.
There is a clear relation between the movement of cell surface receptors and their signal transduction activity. There are several single molecule detection (SMD) techniques to investigate this relationship. Events that result in clear changes, such as receptor-ligand interactions, can be studied by previously widely used analytical methods such as MSD curve analysis. However, for biological effects that cause only small variations in receptor movements but result in biologically significant changes, conventional methods can no longer be used for reliable investigation.
The reliability of the MSD and MLE methods was tested on simulated datasets as well as on data derived from live-cell experiments. For the live-cell measurements we detected changes in the surface movement of α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors after estradiol exposure.
The gonadal steroid 17β-estradiol (E2) is a powerful molecule playing a key role in learning and memory formation by influencing glutamatergic neurotransmission and synaptic plasticity (Kramár et al., 2009; Ledoux et al., 2009; Lu et al., 2019; Murakami et al., 2018; Teyler et al., 1980; Vierk et al., 2014; Wong and Moss, 1992). Besides its well-known classical actions, E2 can influence gene expression indirectly by rapidly altering the functions of membrane receptors and the activity of second messenger molecules. These are referred to as the non-classical effects of E2 (Rudolph et al., 2016). Although ample data have been accumulated on the rapid effects of E2 on learning and memory (Phan et al., 2015; Taxier et al., 2020), the molecular mechanisms are still largely unknown. Single-molecule tracking studies showed that the lateral diffusion of membrane receptors determines the activation state of membrane receptors and consequently the downstream signaling events (Kusumi et al., 2014).
The surface movement of glutamate receptors including AMPA receptors is pivotal in glutamatergic neurotransmission and synaptic plasticity (Babayan and Kramar, 2013;Penn et al., 2017).
Accordingly, measuring the diffusion parameters of the AMPARs can provide a better understanding of the non-classical E2 effects on learning and memory processes (Godó et al., 2021). Therefore, it is crucial to improve currently available methods to analyze membrane receptor movements.
Recent studies (Barabas et al., 2021; Godó et al., 2021) on the lateral movement of receptors in the plasma membrane have demonstrated the value of the data extracted from SMD. SMD is a technique that can identify individual molecules and create the trajectories of these particles for detailed analysis. This allows deeper insights into the function of the receptors and helps us to understand the underlying mechanisms of the actions of different agents, such as E2.
When examining the effect of E2 on the movement of AMPA receptors, the MLE method proved to be more accurate in determining the diffusion coefficient of the AMPA receptors because of the shortness of the detected trajectories and the larger localization error caused by the specificity of the labeling.
In the current manuscript, by comparing the MSD and the MLE analysis of simulated and real live-cell datasets, we found that the MLE method is better suited to analyzing single-molecule receptor movements.
Simulated trajectories
A Matlab script was applied to generate sets of trajectories for two-dimensional Brownian diffusion with different characteristics. Besides the number of desired trajectories, the script allows the user to define the diffusion coefficient, the Gaussian localization error, the exposure time, the pixel size, and the number of frames in each individual trajectory, in order to customize the output according to the requirements. Moreover, there is an additional option that allows the user to turn the motion blur effect on or off.
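The generator itself is the authors' Matlab script in the Supplementary material; the sketch below reproduces the same idea in Python under the stated assumptions (pure Brownian motion, Gaussian localization error, motion blur approximated by averaging sub-frame positions). Parameter names are illustrative.

```python
import numpy as np

def simulate_trajectory(n_frames, D, dt, loc_error, blur=True, n_sub=20, rng=None):
    """2-D Brownian trajectory observed with localization error and (optionally) motion blur.
    D [um^2/s], dt frame time [s], loc_error [um]; returns an (n_frames, 2) array of observed positions."""
    if rng is None:
        rng = np.random.default_rng()
    # Fine-grained true motion: steps with variance 2*D*dt/n_sub per axis.
    steps = rng.normal(0.0, np.sqrt(2 * D * dt / n_sub), size=(n_frames * n_sub, 2))
    true_pos = np.cumsum(steps, axis=0)
    if blur:
        # The camera integrates over the whole frame: average the sub-positions within each frame.
        observed = true_pos.reshape(n_frames, n_sub, 2).mean(axis=1)
    else:
        observed = true_pos[::n_sub]
    return observed + rng.normal(0.0, loc_error, size=observed.shape)

# Example: one mobile trajectory similar to the simulated datasets in this study.
traj = simulate_trajectory(n_frames=501, D=0.15, dt=0.033, loc_error=0.1)
```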
Measured trajectories
To collect trajectories of real immobilized and diffusing molecules, we performed single-molecule imaging using total internal reflection fluorescence microscopy (TIRFM). Single-molecule imaging was carried out on an Olympus (Tokyo, Japan) IX81 fiber TIRF microscope equipped with Z-drift compensation (ZDC2) stage control, a plan apochromat objective (100X, NA 1.49, Olympus), and a humidified chamber heated to 37 °C and containing 5% CO2. The dish containing dPC12 was mounted in the humidified chamber of the TIRF microscope immediately after in vivo labeling. A 491 nm diode laser (Olympus) was used to excite ATTO 488, and emission was detected above the 510 nm emission wavelength range. The angle of the excitation laser beam was set to reach a 100 nm penetration depth of the evanescent wave.
Figure 1: The parameters extracted by the mean square displacement (MSD) (A) and maximum likelihood-based estimation (MLE) (B) based parameter estimation on three sets of simulated trajectories. Each point on the graphs represents a set of parameters calculated from one trajectory. The value of the diffusion coefficient is shown on the x-axis of both graphs. The y-axis shows the other parameter provided by the estimation, namely the y-intercept of the linear fit for the MSD graph and the extracted localization error for the MLE graph. The number of trajectories is 1,000 in each group.
A Hamamatsu 9100-13 electron-multiplying charge-coupled device (EMCCD) camera and Olympus Excellence Pro imaging software were used for image acquisition by TIRF microscopy. Image series were captured with 10-s sampling intervals and 33-ms acquisition times. Single-molecule tracking of labeled particles was performed with custom-made software written in C++ (WinATR, Kusumi Lab, Membrane Cooperativity Unit, OIST). The center of each particle was localized by two-dimensional Gaussian fitting, and the trajectory for each signal was created by a minimum step size linking algorithm that connected the localized dots in subsequent images. The trajectories were individually checked, and artifacts or tracks shorter than 15 frames were excluded from further analysis.
Immobilized particles
To measure immobilized particles, we dried a droplet of ATTO 488-labeled antibodies directed against the extracellular N-terminal domain of rat GluR2 (1:1,000 in PBS, Alomone Labs) onto a glass bottomed dish. The dried dyes were covered with Prolong Gold Antifade Mountant (P10144, Thermo Fisher, Waltham, MA, USA). After 24 h, image series of immobilized ATTO-488 dyes were collected and analyzed as described above.
Calculation of diffusion coefficients
The mean square displacement (MSD) curve for each trajectory was calculated by the following equation (Matysik and Kraut, 2014; Yu, 2016): MSD(mT) = (1/(N − m)) Σ_{i=1}^{N−m} [(x_{i+m} − x_i)² + (y_{i+m} − y_i)²], where x_i and y_i are the observed coordinates of the tracked particle, T is the time interval between two consecutive frames, N is the total number of frames, and m, as an independent variable, represents the time delay (in frames) applied for the particular point of the MSD curve. The diffusion coefficient was obtained by a three-point linear fit to the MSD curve (for two-dimensional diffusion the slope of the fit equals 4D). The parameters extracted from the MSD fitting are also provided by the Matlab script available in the Supplementary material.
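Assuming the MSD definition reconstructed above, a direct Python translation of the curve and the three-point fit looks as follows.

```python
import numpy as np

def msd_curve(xy, dt, max_lag):
    """Time-averaged MSD for one trajectory; xy is an (N, 2) array of observed coordinates."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((xy[m:] - xy[:-m]) ** 2, axis=1)) for m in lags])
    return lags * dt, msd

def diffusion_from_msd(xy, dt):
    """Three-point linear fit: for 2-D diffusion MSD(t) = 4*D*t + offset, so D = slope / 4."""
    t, msd = msd_curve(xy, dt, max_lag=3)
    slope, _ = np.polyfit(t, msd, 1)
    return slope / 4.0
```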
In order to obtain the corresponding D value by MLE, the MLE was applied as previously described (Berglund, 2010). Δx_k and Δy_k represent the observed displacements (Δx_k = x_{k+1} − x_k and Δy_k = y_{k+1} − y_k) arranged in N-component column vectors, where the total number of frames is N + 1. x_n and y_n are the coordinates of the signal's center on the n-th frame, as usual. The N × N covariance matrix (Σ) is defined by the following equation: Σ_{ij} = 2Dt + 2(σ² − 2DtR) if i = j, Σ_{ij} = −(σ² − 2DtR) if |i − j| = 1, and Σ_{ij} = 0 otherwise, where D is the diffusion coefficient, t is the frame integration time, σ is the static localization noise, i and j are the row and column indexes of the covariance matrix, and R summarizes the motion blur effect.
R = (1/t) ∫_0^t S(τ)[1 − S(τ)] dτ, with S(τ) = ∫_0^τ s(τ′) dτ′, where s(t) is the shutter function; in our case, R = 1/6 as a consequence of continuous illumination.
Figure 2: Mean and standard deviation (SD) values of diffusion coefficients extracted from sets of trajectories (N = 1,000) simulated with diffusion coefficients from 0.01 to 0.5 µm²/s, as a function of the length of the trajectories. The diffusion coefficients were extracted by both the mean square displacement (MSD, black) and the maximum likelihood-based estimation (MLE, red) method.
Figure 3: The coefficients of variation (the ratio of the SD and the mean from Figure 2) as a function of the length of the trajectories. The diffusion coefficients for the simulation were: (A) 0.01 µm²/s, (B) 0.02 µm²/s, (C) 0.05 µm²/s, (D) 0.1 µm²/s, (E) 0.2 µm²/s, (F) 0.5 µm²/s.
The likelihood was defined by the following function: L(D, σ | Δx) = (2π)^{−N/2} det(Σ)^{−1/2} exp(−(1/2) Δx^T Σ^{−1} Δx), and analogously for Δy. The D and σ which provide the maximal likelihood are the estimated diffusion coefficient and static localization noise, respectively. The calculation of the determinant and the inverse of the covariance matrix at each step of the optimization can be a severe computational difficulty at high values of N. An approximation (Gray, 2005) based on the theory of circulant matrices is applicable (Berglund, 2010). In the script we defined a constant for the limit to switch between the direct and the simplified calculation method. Based on our experience we set the value of this constant to 1,001. When the number of frames exceeds 1,000, this simplified likelihood function is used for the global optimization; otherwise the direct likelihood function was applied. In this study the maximal length of trajectories was 1,000 frames, so the script applied the direct method for each trajectory.
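For trajectories short enough for the direct method, the likelihood above can be evaluated and maximized with standard tools. The sketch below follows the tridiagonal covariance reconstructed earlier and is an illustration of the Berglund-style estimator, not the authors' Matlab script.

```python
import numpy as np
from scipy.optimize import minimize

R = 1.0 / 6.0  # motion-blur coefficient for continuous illumination

def neg_log_likelihood(params, dx, dt):
    """Negative log-likelihood of one displacement vector dx (length N) for given D and sigma."""
    D, sigma = params
    N = len(dx)
    a = 2 * D * dt + 2 * (sigma**2 - 2 * D * dt * R)      # diagonal of the covariance matrix
    b = -(sigma**2 - 2 * D * dt * R)                       # first off-diagonal
    cov = np.diag(np.full(N, a)) + np.diag(np.full(N - 1, b), 1) + np.diag(np.full(N - 1, b), -1)
    sign, logdet = np.linalg.slogdet(cov)
    if sign <= 0:
        return np.inf
    return 0.5 * (logdet + dx @ np.linalg.solve(cov, dx) + N * np.log(2 * np.pi))

def mle_diffusion(xy, dt):
    """Estimate (D, sigma) by maximizing the likelihood over the x and y displacements jointly."""
    dx, dy = np.diff(xy[:, 0]), np.diff(xy[:, 1])
    obj = lambda p: neg_log_likelihood(p, dx, dt) + neg_log_likelihood(p, dy, dt)
    res = minimize(obj, x0=[0.1, 0.05], bounds=[(1e-6, None), (1e-6, None)])
    return res.x  # D in um^2/s, sigma in um
```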
To estimate the area of molecule trajectories, the convex hull of each trajectory was created by a Matlab script, and the area of the trajectory was defined as the area of this convex hull. The Matlab script for the MLE-based estimation of the diffusion coefficient is available as a zip file in the Supplementary material.
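A Python equivalent of the convex-hull area estimate (the authors used a Matlab script) can rely on SciPy's ConvexHull.

```python
import numpy as np
from scipy.spatial import ConvexHull

def trajectory_area(xy):
    """Area of the convex hull of a 2-D trajectory (xy: (N, 2) array of coordinates)."""
    return ConvexHull(xy).volume  # for 2-D input, .volume is the enclosed area (.area is the perimeter)
```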
Simulated trajectories
Figure 4: Distribution of diffusion coefficients derived from trajectories recorded on immobile particles. The measurements were carried out at different temperatures and the extracted trajectories were analyzed by the mean square displacement (MSD) (A) and the maximum likelihood-based estimation (MLE) (B) method. The inserted table shows the mean and SD values for each group.
Figure 5: The effect of E2 treatment on the diffusion coefficient of GluR2-AMPAR molecules in the somatic plasma membrane of live dPC12 cells. The E2 treatments were carried out at concentrations of 100 pM (A,B) and 100 nM (C,D). Both the mean square displacement (MSD) (A,C) and the maximum likelihood-based estimation (MLE) (B,D) methods were used to obtain the diffusion coefficients from the recorded trajectories. The graphs represent the groups as mean and SD values. The probability values of significant differences calculated by the Kolmogorov-Smirnov test (*p < 0.05) and the number of trajectories in each group are also shown.
Three sets of trajectories were generated and analyzed by MSD and MLE estimation, assuming the presence of the blur effect due to continuous recording. Each set contained 1,000 trajectories with a length of 501 frames and differed in the values of the diffusion coefficient and the localization error. The first group contained immobile (D = 0 µm²/s) trajectories in the presence of ε = 100 nm localization uncertainty. The second set contained mobile (D = 0.15 µm²/s) trajectories without any localization error (ε = 0 nm). The last group simulated trajectories recorded on moving particles (D = 0.15 µm²/s) with ε = 100 nm measurement error. Figure 1 shows the parameters provided by the MSD and MLE and demonstrates that both methods clearly separate the distinct sets of trajectories. The MLE reliably provides the expected parameters, and the diffusion coefficients provided by the MSD method are also in good agreement with the theoretical values. A minor difference between the two methods is observed in the distribution of diffusion coefficients from the mobile trajectories with no localization error: the MLE estimates the diffusion coefficients with a smaller standard deviation (SD). However, this observation has no significance for single-molecule imaging because the lack of localization error is a purely theoretical category. The main difference between the two sets of data is the distribution of diffusion coefficients extracted from the immobile trajectories. While the MSD-based diffusion coefficients show some variability around the group's average of 0 µm²/s, the distribution of the same parameter in the same group provided by the MLE is much narrower. Since this scenario can easily happen when observing slow particles, this finding has great importance, and we went further to investigate it in detail.
To investigate this phenomenon, another set of trajectories was created and analyzed. While the localization error was constant (ε = 100 nm), both the length of the trajectories and the diffusion coefficients were altered. The length was varied from 11 to 1,001 frames. The diffusion coefficients had the following values: 0.01 µm²/s, 0.02 µm²/s, 0.05 µm²/s, 0.1 µm²/s, 0.2 µm²/s, and 0.5 µm²/s. The number of randomly created trajectories in each group was 1,000. The set of raw simulated data is available in the Supplementary material.
The group means provide a satisfactory estimation of the diffusion coefficient when the number of steps (i.e., the number of frames minus one) is equal to or above 20. For the shortest trajectories (length equal to 10 steps), some uncertainty is present independently of the applied method; in this case the mean values slightly differ from the expected ones. This finding confirms the legitimacy of the general practice that, in single-molecule tracking studies, trajectories shorter than 15 steps are omitted from further analysis. Figures 2, 3 demonstrate that the SD and coefficient of variation (CoV) of diffusion coefficients derived by MSD are larger than the corresponding values extracted by MLE. In the two slowest groups of trajectories (D = 0.01 µm²/s and D = 0.02 µm²/s), both the CoV and SD parameters provided by the two analyses differ to a large extent, and this difference is independent of the trajectory length. The CoV values of the MSD-based diffusion coefficients for the slowest trajectories (D = 0.01 µm²/s) are approximately three times higher than the corresponding values extracted by the MLE. In the case of the slightly faster group (D = 0.02 µm²/s), the MSD method provides two times higher CoV values for the diffusion coefficients than the MLE-based analysis. In the group simulated with D = 0.05 µm²/s, the MSD-provided CoV values for the diffusion coefficients exceed those from the MLE-based calculation by 30%. This difference between the SD and CoV values diminishes slowly with increasing diffusion coefficient. The values of SD and CoV are crucial in several types of statistical tests, and a broader distribution can easily disguise a slight but real difference between the investigated groups. While the mean values calculated by the MLE as well as the MSD method are in good agreement with the expected values, the distribution of the group's diffusion coefficients is narrower for the MLE in each set of trajectories, proving the better performance of the MLE-based calculation on simulated data.
Measured immobile particles
To test the usability of the MLE on measured trajectories, we carried out an analysis on trajectories recorded on immobile particles at different temperatures. Although the investigated particles are termed "immobile", some movement is always present; their diffusion coefficients are approximately two orders of magnitude smaller than typical receptor diffusion coefficients. We expected more intense movement at elevated temperature. The trajectories are available in the Supplementary material. Figure 4 shows the distribution of diffusion coefficients measured at different temperatures on immobile samples. These distributions confirm the results derived from the simulated data. There is a shift between the mean values provided by the two methods, 5.9·10⁻⁴ µm²/s and 3.0·10⁻⁵ µm²/s, for the trajectories measured at 24 °C. As expected, the mean values are higher (1.2·10⁻³ µm²/s and 6.1·10⁻⁴ µm²/s) at 37 °C. More importantly, the SD values are significantly decreased by applying the MLE. While the SD values provided by the MSD method are 3.5·10⁻⁴ µm²/s and 2.6·10⁻⁴ µm²/s, the distributions from the MLE-based analysis are significantly narrower (the corresponding SD values are 2.7·10⁻⁵ µm²/s and 1.7·10⁻⁴ µm²/s). These findings match the results of our previous in silico experiments.
Trajectories measured on live dPC12 cells
Analysis performed on simulated data and immobile particles showed that the MLE had remarkable performance which occasionally exceeded the abilities of MSD based method. To compare the two approaches also in live-cell experiments, we tested their usability and reliability in an experimental model that has been routinely used in our laboratory. Therefore, comprehensive analysis was carried out on AMPA receptor (GluR2-AMPAR) trajectories measured in live dPC12 cells after E2 or vehicle treatment.
Administration of 100 pM E2 induced a significant decrease of diffusion coefficients in AMPAR in soma in the first 20 min after the treatment. The means were decreased to 0.018 µm 2 /s and 0.019 µm 2 /s, while the control's mean values were 0.020 µm 2 /s and 0.022 µm 2 /s for the MSD and MLE, respectively (Figures 5A, B). The probability of significance was p = 2.33% and less than 0.01% for the MSD and MLE method, respectively. The application of 100 nM E2 highlighted the difference between the two calculation methods. While analysis conducted by the MLE (Figure 5D) showed no effect (p = 14.85%) after E2 administration, the MSD method provided a significant decrease of the diffusion coefficients ( Figure 5C). In this case the mean of diffusion coefficients was 0.019 µm 2 /s, which was significantly lower (probability of significance is p = 2.86%) than the same value in the control group 0.029 µm 2 /s.
The result of MLE can be surprising as the lower E2 concentration (100 pM) evoked a significant decrease of the diffusion coefficients, while the administration of the higher dose of E2 (100 nM) did not induce any change. This effect was previously investigated (Godó et al., 2021) and it was revealed that the difference may be the consequence of GPER1 internalization in the soma induced by 100 nM E2. It was also demonstrated that both ERβ and GPER1 are required for the effect of E2. The higher dose of E2 induced elimination of GPER1 preventing E2 to cause decrease of the diffusion coefficient.
In the soma, the 100 nM E2 treatment has a distinct effect depending on the calculation method: the MLE does not reveal any significant effect of the E2 treatment, whereas, based on the statistics of the MSD results, the application of E2 significantly decreases the diffusion coefficients. A previous study (Godó et al., 2021) has shown that GPER1 internalization depletes GPER1, which is crucial for the effectiveness of E2 in the soma, supporting the MLE-based result. Figure 6 shows the length distribution of the trajectories measured on GluR2-AMPAR molecules in the somatic plasma membrane of living dPC12 cells, both in the control state and after the administration of 100 nM E2. The vast majority of trajectories are shorter than 50 steps. Our results on simulated trajectories showed that the MLE provides more reliable results for trajectories with similar parameters (D = 0.02 µm²/s and lengths of 100 steps or fewer). Based on this, we consider the MLE-derived diffusion coefficients and the corresponding statistical conclusion to be the more reliable ones in this case.
Figure 6 caption: The length distribution of trajectories from GluR2-AMPAR molecules in the somatic plasma membrane of live dPC12 cells in the control state and after administration of 100 nM E2.
Discussion
The focus of the current study was to examine in depth the differences between MLE and MSD-based methods. First, we used simulated trajectories, which are suitable to detect localization errors. Our results show that while the obtained group averages of the diffusion coefficients perfectly corresponded to the expected values regardless of the computational methods, the SD values of the diffusion coefficients were significantly lower for the D = 0µm 2 /s (immobile trajectories with localization error) group using the MLE method. This difference between the distribution of the diffusion coefficient values is the consequence of the fundamental difference between the two methods. On one hand the MSD based calculation does not constrain the sign of the diffusion coefficient, therefore the D values, especially for slow or immobile trajectories, often have a negative sign, which is difficult to interpret. On the other hand, the MLE method does not provide sub-zero diffusion coefficients, so the distribution of D values is much narrower.
Secondly, the reliability of the methods was investigated, again using simulated trajectories, to compare the mean and SD values for low diffusion coefficients. The randomly generated trajectories in these groups were characterized by their length and expected diffusion coefficient. The analysis of this set of simulated trajectories showed no difference between the two methods in terms of mean values; both analyses provided good estimates of the expected values. These results were consistent with our previous finding that the MLE method gives a more precise estimate of the diffusion coefficients: the SD of the diffusion coefficients obtained with the MSD method exceeded the SD provided by the MLE-based calculation when D was less than 0.2 µm²/s, while both the mean and SD values were essentially identical when the diffusion coefficient was greater than or equal to 0.2 µm²/s. The analysis of the numerical simulations thus showed that the MLE outperforms the MSD as a data analysis tool.
Regarding the immobile trajectories measured at different temperatures, the two methods provided similar values for the average diffusion coefficient in all analyzed groups. In line with expectations, the higher temperature evoked more intense movement, which was reflected in increased diffusion coefficients. The experiment clearly confirmed that the distribution of diffusion coefficients provided by the MLE is much narrower than the distribution calculated by the MSD approach. The reason for this difference is that, in contrast to the MLE, the MSD method is less effective in separating the static localization noise from the diffusion-generated displacement, which increases the uncertainty of the calculated diffusion coefficients. This phenomenon is pronounced when the localization error exceeds the displacement expected from diffusion (i.e., in the case of so-called immobile particles).
Finally, the two methods were tested on trajectories collected from live dPC12 cells. The effect of E2 on the movement of GluR2-AMPAR molecules was investigated in somata of dPC12 cells. On the one hand, the 100 pM E2 treatment significantly decreased the mean value of diffusion coefficients by applying either the MSD or the MLE method. On the other hand, the two calculation methods resulted in conflicting results when comparing the effect of 100 nM E2 in the soma. The MSD method showed a significant alteration in the diffusion coefficients of GluR2-AMPAR molecules, while the MLE demonstrated no effect. The result of MLE is consistent with the previously reported ineffectiveness of 100 nM E2 in the soma, due to GPER1 internalization. The investigation of length distribution of the trajectories and the results gained from simulated trajectories reveals that for this set of trajectories the MLE provides more reliable diffusion coefficients. So, the statistical result extracted from MLE based calculation seems to be more reliable and accurate in this particular case.
Conclusion
The analysis conducted on simulated trajectories revealed that the mean values of the diffusion coefficients are in good agreement with the theoretical values regardless of the applied method. The superiority of the MLE-based calculation over the MSD was shown by examining the coefficient of variation (the ratio of the SD to the mean) of the distributions of the estimated diffusion coefficients. The CoV is remarkably lower with the MLE-based method than with the MSD-based method in the case of slow particle movement. The simulation results were confirmed by the results extracted from immobile trajectories measured at different temperatures. The distribution of diffusion coefficients is undoubtedly narrower in the case of the MLE, making the interpretation of the obtained results easier.
Moreover, our findings were tested on AMPA receptor trajectories measured in live dPC12 cells after estradiol-treatment. The two calculation methods provided conflicting results when comparing the effect of 100 nM E2 in the soma.
On the one hand, the MSD is less reliable for short trajectories or trajectories characterized by small diffusion coefficients, and it does not effectively separate the localization error from diffusion. On the other hand, the MLE is applicable to short and slow trajectories, and it does separate the localization error from the movement. The superiority of the MLE method was demonstrated on simulated as well as on measured trajectories in live cells.
These results indicate that the MLE method should be one of the first recommended approaches for analyzing data obtained in single-molecule imaging measurements.
Data availability statement
The original contributions presented in this study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author.
Author contributions
IA, KB, and TJ contributed to conception and design of the study. DE, TK, and SG were involved in sample preparation for the TIRF measurements. SG performed the TIRF measurements. KB, SG, and GK extracted trajectories from measured videos. TJ created the Matlab script for analyzing trajectories. SS and GM checked and optimized the script. GM and TJ performed the statistical analysis and wrote the first draft of the manuscript. KB, DE, TK, GK, and SG wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version. | 6,532.4 | 2023-03-08T00:00:00.000 | [
"Biology"
] |
In Vitro Evaluation of Scaffolds for the Delivery of Mesenchymal Stem Cells to Wounds
Mesenchymal stem cells (MSCs) have been shown to improve tissue regeneration in several preclinical and clinical trials. These cells have been used in combination with three-dimensional scaffolds as a promising approach in the field of regenerative medicine. We compare the behavior of human adipose-derived MSCs (AdMSCs) on four different biomaterials that are awaiting or have already received FDA approval to determine a suitable regenerative scaffold for delivering these cells to dermal wounds and increasing healing potential. AdMSCs were isolated, characterized, and seeded onto scaffolds based on chitosan, fibrin, bovine collagen, and decellularized porcine dermis. In vitro results demonstrated that the scaffolds strongly influence key parameters, such as seeding efficiency, cellular distribution, attachment, survival, metabolic activity, and paracrine release. Chick chorioallantoic membrane assays revealed that the scaffold composition similarly influences the angiogenic potential of AdMSCs in vivo. The wound healing potential of scaffolds increases by means of a synergistic relationship between AdMSCs and biomaterial resulting in the release of proangiogenic and cytokine factors, which is currently lacking when a scaffold alone is utilized. Furthermore, the methods used herein can be utilized to test other scaffold materials to increase their wound healing potential with AdMSCs.
Introduction
Mesenchymal stem cells (MSCs) have been shown to improve tissue regeneration in vitro and in vivo. Clinical data corroborates their beneficial regenerative effects in several organs and tissues, such as the heart, nerves, bone, and skin [1][2][3][4]. In order to administer MSCs to patients, cells have been introduced systemically and locally. While MSCs do have a homing capability to migrate to injured tissue, it has been claimed that after systemic administration only a fraction of the cells can migrate to the target tissue, while the majority of cells accumulate in the kidneys and lungs [5,6]. In the case of local injections, a large number of these cells are required and while a substantial proportion of the cells remain in the area, another quantity is flushed out into the blood circulation [2,7]. In an attempt to increase the retention rate of the cells, MSCs have been applied in association with biomaterials; for example, fibrin sprays and microbeads have been used for chronic skin wounds [8,9], while meshes and threedimensional scaffolds have been used to treat ischemic heart tissue [10] and diabetic ischemic ulcers [11].
Engrafted MSCs can release a series of cytokines and growth factors by interacting with local tissue to enhance repair and regeneration [5,12]. Recent studies indicate that MSCs modulate the regenerative microenvironment by means of a controlled release of several paracrine factors related to key processes, such as angiogenesis, cell homing, immunomodulation, tissue remodeling, and fibrosis [13][14][15].
BioMed Research International
Thus, MSCs may impact regeneration primarily by releasing paracrine factors necessary for wound healing [16][17][18] rather than tissue replacement.
While MSCs have been found to exist in nearly every adult tissue [19][20][21][22][23][24], the proliferation rate of MSCs derived from adipose tissue (AdMSCs) is not affected by donor age [25][26][27], making it possible to use them in an autologous manner in elderly patients in regenerative medicine. A high quantity of MSCs can be obtained from a small amount of fat tissue (at least 1 × 10 6 AdMSCs can be obtained from 200 mL of lipoaspirates) with more than 90% viability and virtually no harm to the donor [28,29]. Furthermore, as vasculature is believed to be rich in MSCs, it is not surprising that a large quantity of AdMSCs can be isolated from a small amount of adipose tissue, which is highly vascularized [30,31].
Several studies have shown the immunosuppressive properties of AdMSCs, which has allowed for xenogeneic transplantation into immunocompetent recipients for various disease models evidencing significant improvement without suppressing the immune system [31,32]. Furthermore, clinical and preclinical studies have determined that allogeneic transplants of AdMSCs do not usually result in graft-versushost disease (GvHD). These transplants have been used to treat GvHD after hematopoietic stem cell transplantation [32][33][34].
The positive effects of the use of MSCs are well established for various tissues; however, several regulatory and practical issues make chronic ulcers an attractive target for the clinical use of MSCs. More importantly, chronic ulcers remain an eminent clinical problem negatively impacting patients' quality of life and simultaneously representing a substantial expenditure for the healthcare system. In the US, these problems affect more than 8 million people with annual costs of around $20 billion [35]. With an aging population and the likelihood that the majority of the healthcare costs will come from patients over 65, the costs are almost certain to increase [36].
Several studies have proposed the combined use of scaffolds for dermal regeneration with stem cells for the treatment of chronic skin ulcers. In those studies, it has been shown that after seeding cells are able to survive in scaffolds, releasing several bioactive molecules that enhance skin regeneration in vivo [7,[37][38][39]. Although the results of preclinical trials are robust, several issues have to be clarified and optimized before clinical translation. In the case of chronic wounds, the cells must produce optimum amounts of paracrine factors in order to achieve the quantity necessary for healing. The addition of AdMSCs to the scaffold should support the healing process by creating a proregenerative microenvironment in the wound area. The key issue of determining the best combination of cells with a biomaterial and the development of an optimized composite material with increased regenerative capacity remains to be addressed.
Scaffolds alone are currently being used to treat chronic wounds in clinics and are composed of a variety of materials. In this study, we chose three scaffolds that are currently being used in clinics and one that is under development, all comprised of different biomaterials, to incorporate AdMSCs.
BioPiel is a film-like scaffold derived from crustacean chitosan. Smart Matrix, currently under development, consists of a fibrin-alginate composite. Integra Dermal Regeneration Template (DRT) is a bilayer scaffold composed of type I bovine collagen and chondroitin-6-sulfate with a thin silicone layer, and Strattice is derived from decellularized porcine dermis.
In this study, we analyzed and compared the behavior of AdMSCs in four distinct scaffolds, which were chosen because of their differences in the construction, material, and protein composition. The seeding efficiency, cellular distribution, attachment, survival, metabolic activity, and paracrine release of the seeded cells were analyzed in vitro as were the angiogenic effects in vivo.
Cell Isolation and Culture.
Adipose tissue was derived from lipoaspirates obtained from donors who had given informed consent to participate in the study. The aspirated fraction was added to 50 mL Falcon tubes with an equal volume of 0.3 U/mL collagenase A (Roche, Basel, Switzerland) and incubated for 30 min at 37 °C. After centrifugation, the resulting stromal vascular fraction was plated in Dulbecco's Modified Eagle's Medium with 4.0 mg glucose/L, stable glutamine, and phenol red (DMEM; Biochrom, Berlin, Germany), supplemented with 10% fetal calf serum (FCS; PAA, Pasching, Austria) and 1% penicillin/streptomycin (P/S; Biochrom), under standard cell culture conditions (37 °C, 5% CO2), and the medium was changed every 3-4 days. In all experimental settings, cells from passage 3 were used from three donors (N = 3), and experiments were performed in triplicate (n = 3).
To test the osteogenic differentiation potential of the AdMSCs, 80-90% confluent cells were cultured for 18 d in either control medium (alpha-MEM (Biochrom) + 10% FCS and 1% P/S) or osteogenic medium (hMSC osteogenic differentiation BulletKit, Lonza, Basel, Switzerland) in 6-well plates with a medium change every 3-4 d. Then, cells were fixed with 10% v/v formalin solution for 15 min, rinsed with PBS, stained with 0.5% w/v Alizarin Red S indicator (Ricca Chemicals Company, Arlington, TX) 30 min with gentle shaking, washed 3 times with PBS, and imaged for calcium deposition.
To test adipogenic differentiation of AdMSCs, cells were seeded in 6-well plates to 80-90% confluence. Medium was changed to either control medium (alpha-MEM + 10% FCS and 1% P/S) or adipogenic induction medium (hMSC adipogenic differentiation BulletKit, Lonza). For Oil Red O staining, cells were fixed after 14 d with 10% v/v formalin solution, rinsed with PBS, and stained with Oil Red O (Electron Microscopy Sciences, Hatfield, PA), and adipocytes were imaged (Nikon Eclipse TS100 Inverted Microscope).
Chondrogenic differentiation potential was assessed with three-dimensional pellet cultures in 15 mL polypropylene conical tubes. The initial pellets contained 2.5 × 10⁵ cells and were cultivated for 21 d in either control medium or chondrogenic induction medium (hMSC chondrogenic differentiation BulletKit, Lonza) supplemented with TGF-beta 3 (Lonza). After collection, pellets were rinsed with PBS and fixed in formalin. Pellets were embedded in paraffin, sectioned (5 µm), stained with Alcian Blue to visualize acetic mucins and acid mucosubstances, and counterstained with Nuclear Fast Red (both from Sigma-Aldrich, St. Louis, MO, USA) before imaging. All stainings were carried out with N = 3 donors and n = 3 replicates.
Cell Seeding Efficiency on Scaffolds.
The percentage of cells incorporated into the scaffolds (N = 3, n = 4) was quantified by counting the cells attached to the culture dish one hour after seeding, that is, the cells that did not attach to the scaffold. The scaffolds were removed, and the remaining cells were detached from the well plates with trypsin-EDTA solution and counted in a Neubauer chamber. Seeding efficiency was calculated as the percentage of cells in the scaffold from the total number of seeded cells.
Cellular Distribution throughout Scaffolds.
AdMSC-containing scaffolds were rinsed with PBS, fixed (3.7% paraformaldehyde, 0.1% Triton in PBS) on ice for 30 min, and blocked in 2% BSA in PBS at 4 °C overnight (N = 3, n = 3). Scaffolds were then incubated in a blocking solution containing 2 U/mL Texas Red-X Phalloidin (Life Technologies, Grand Island, NY) to stain polymerized actin and 3.5 µM To-Pro-3 (Life Technologies) to stain DNA. After washing 4 times with PBS (10 min each), scaffolds were dried with sterile gauze, mounted in Vectashield Mounting Medium (Vector Labs, Burlingame, CA) on glass-bottom culture dishes (MatTek Corp., Ashland, MA), and imaged using an Olympus Fluoview FV10i confocal microscope (Olympus, Tokyo, Japan). Chitosan films were z-section imaged from top to bottom with the drop side facing down on the glass bottom, in 4 independent locations (one center and 3 periphery locations). As the fibrin matrix, collagen-GAG matrix, and decellularized dermis are too thick for visualization by confocal microscopy from top to bottom, they were sectioned using a razor blade and rotated onto their sides in order to generate z-section images from cross sections. Image analysis to assess cell morphology, number, and distribution was performed using Olympus FV10-ASW software (Olympus).
Metabolic Activity and Cytotoxicity in the Scaffold.
On days 1, 3, 7, and 14 after seeding, the metabolic activity of the seeded cells was evaluated by the precipitation of tetrazolium salt (WST-1). Cellular death was measured by the release of lactate dehydrogenase (LDH) from the cells on days 1, 3, and 7 (both assays from Roche, Mannheim, Germany) (N = 3, n = 3). As the medium needed to be changed after 7 d, the total LDH activity could not be measured over a 14 d period. Seeded scaffolds were incubated in DMEM and WST-1 solution (1:10 ratio) for 1 h. The absorbance of the resulting formazan dye was measured at 450 nm with a reference wavelength of 620 nm. For the measurement of LDH, supernatants were harvested from the same scaffolds used for the WST-1 assay, and the analysis was performed according to the manufacturer's instructions. In short, the absorbance was measured at 490 nm with a reference wavelength of 620 nm, with controls including medium alone (background), cells in well plates without scaffolds (spontaneous LDH release), and cells in well plates without scaffolds with Triton X-100 in the medium (maximum LDH release). The resulting value was then calculated with the equation: cytotoxicity (%) = (experimental value − spontaneous LDH release)/(maximum LDH release − spontaneous LDH release) × 100.
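For clarity, the cytotoxicity formula above can be expressed as a short helper function; the absorbance values in the example are made up for illustration and are not measured data.

```python
# Minimal sketch of the LDH cytotoxicity calculation described above.
def cytotoxicity_percent(experimental, spontaneous, maximum):
    """Cytotoxicity (%) = (experimental - spontaneous) / (maximum - spontaneous) * 100."""
    return (experimental - spontaneous) / (maximum - spontaneous) * 100.0

# Example with hypothetical background-corrected absorbance values (A490 - A620):
print(cytotoxicity_percent(experimental=0.95, spontaneous=0.20, maximum=1.00))  # 93.75
```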
Characterization of Secretion Profile.
Supernatants were collected from AdMSCs seeded on scaffolds or tissue culture plastic (N = 3, n = 3; 1.8 × 10⁵ cells/scaffold) after 48 h under standard cell culture conditions, shock-frozen with liquid nitrogen, and stored at −80 °C until analysis. Human Cytokine and Angiogenesis Array Kits (R&D Systems, Abingdon, UK) were used to characterize the release of multiple cytokines and angiogenesis-related proteins, respectively. Membranes were imaged using a Peqlab Fusion FX7 chemiluminescence system (Erlangen, Germany) and the spot intensity was quantified with ImageJ software [42] using the MicroArray Profile plugin (OptiNav, Inc.). Scaffolds without cells served as controls.
Hypoxia-Inducible Factor-1α Expression. Seeded scaffolds (N = 3, n = 3) were incubated under standard (21% O2, 5% CO2) or hypoxic (1% O2, 5% CO2) conditions and collected at 4, 8, and 16 h. The scaffolds were then washed twice with sterile PBS. Three scaffolds from each time point, oxygen condition, and scaffold type were briefly sonicated in 500 µL lysis buffer; the HIF-1α expression was analyzed using a Human Total HIF-1α ELISA kit (R&D Systems), and the optical density was measured at 450 nm using a Mithras LB 940 Microplate Reader. The total protein concentration was determined using a Pierce BCA Protein Assay Kit (Thermo Scientific, Rockford, IL), and the absorbance was measured at 560 nm.
Chicken Chorioallantoic Membrane (CAM) Assay.
Research-grade fertilized eggs (SPF, Valo Biomedia GmbH, Osterholz-Scharmbeck, Germany) were placed on a rotating egg tray for 3 days after fertilization at 37 °C and 60-70% humidity. On day 3, a small window was made in the shell under aseptic conditions and the contents of the egg were gently placed into a 200 mL plastic dish. The dish was then placed into a petri dish with 50 mL of distilled water, 1% P/S, and 1% partricin and incubated at 70-80% humidity to prevent drying of the membrane. On day 10, autoclaved filter paper punches (5 mm) were placed onto the CAM, directly followed by 10 µL of conditioned medium collected from serum-free cell-seeded scaffolds after 48 h in culture, DMEM, PBS, or 20 ng of VEGF, which was reapplied daily for 3 days [43]. The applied filter paper punches were imaged daily using a Canon EOS 20D digital SLR camera with a Canon EF 50 mm f/1.8 II standard autofocus lens. Samples were quantified (N = 3, n = 6) by assigning arbitrary values based on the distribution and density of CAM vessels around the filter paper punch [40].
Statistical Analysis.
Results were analyzed with GraphPad Prism version 6.0e for Mac OS X (GraphPad Software, San Diego, CA, USA) and are shown as mean ± standard deviation. Significant differences between sample groups were determined by analysis of variance (ANOVA) with a Bonferroni post-test, where p < 0.05 was considered statistically significant.
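The authors performed this analysis in GraphPad Prism; an approximately equivalent workflow in Python is sketched below for readers who wish to reproduce it. The group values are placeholders, not measured data, and the pairwise t-tests with a Bonferroni-corrected alpha are only an approximation of Prism's pooled-variance post-test.

```python
# Hedged sketch: one-way ANOVA followed by Bonferroni-corrected pairwise comparisons.
import numpy as np
from scipy import stats
from itertools import combinations

groups = {
    "chitosan": np.array([60.1, 58.0, 62.3]),       # hypothetical replicate values
    "fibrin": np.array([88.6, 90.1, 87.2]),
    "collagen_gag": np.array([86.5, 84.9, 88.0]),
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)                  # Bonferroni correction
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: p = {p:.4f}, significant: {p < alpha_corrected}")
```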
Results
Mesenchymal stem cells were isolated from human adipose tissue and characterized in terms of their immune phenotypes and differentiation potential. Fluorescenceactivated cell sorting analysis showed that AdMSCs do not express pan-hematopoietic marker CD45 but are positive for CD73, CD90, and CD105 (Figure 1(a)). Interestingly, AdMSCs expressed very low levels of the pericyte marker CD146. Moreover, the cells showed a strong differentiation potential towards osteoblasts, adipocytes, and chondrocytes after culturing in their respective differentiation conditions (Figure 1(b)). Calcium deposits were stained with Alizarin Red S for AdMSCs exposed to osteoblast differentiation medium. Lipid vacuoles from adipogenic differentiation were stained with Oil Red O. Chondrogenic pellets were stained with Alcian Blue to show chondrocyte growth. In this work, four different scaffolds were compared for their usability with AdMSCs in dermal regeneration (Table 1). We evaluated and compared the macro-and microstructure of the four scaffolds, observing important differences ( Figure 2). The dry thickness of the scaffold varies from a minimum of 0.12 mm for chitosan films to 3.8 mm for fibrin matrices (Figure 2(a)). When wet, the structure of the fibrin matrices collapses to a fibrous mesh decreasing the measurable thickness. The decellularized dermis had the thickest structure at 1.5 mm, while the collagen-GAG matrix was 0.20 mm thick (Table 1). Compared to the other scaffolds, the chitosan has a film-like appearance, while the fibrin and collagen-GAG present a more mesh-like structure and exhibited high porosity throughout the scaffolds. The decellularized dermis exhibited much tighter pores and the chitosan did not have any visible porosity (Figure 2(b)).
Scaffold porosity and the degree to which the pores are interconnected determine the loading capacity of the scaffolds. After the AdMSCs were seeded and allowed to attach to the scaffold for one hour, the seeding efficiency was evaluated. A seeding efficiency of almost 90% was observed for the fibrin matrix (88.6 ± 2.9%), the collagen-GAG matrices (86.5 ± 3.8%), and the decellularized dermis (89.2 ± 3.8%), which was significantly higher than for the chitosan films (60.1 ± 5.9%) (p < 0.05).
Differences in the mechanical properties should influence the cell behavior when seeded. For that, a detailed view into the interaction and distribution of the seeded cells in the scaffold was obtained by confocal microscopy. Except for the decellularized dermis, the AdMSCs were highly attached to the material, showing fibroblastic morphology, creating a complex tridimensional arrangement between the cells and the scaffold (Figure 3(a)). The images were analyzed to give quantitative, spatial information on the cellular distribution throughout the scaffold (Figure 3(b)). The AdMSCs formed a layer on the seeding surface of chitosan films, showing almost no cells in the core. In the case of the fibrin matrix, cells were observed throughout the scaffold with a tendency to accumulate at the inner core of the material. Cells seeded on collagen-GAG matrices also showed a different distribution pattern creating a cell gradient from the seeding side to the bottom. In the decellularized dermis, AdMSCs were more concentrated on the seeding side while migration through the scaffold was limited. The distribution of the AdMSCs is an important indicator for biocompatibility with the different scaffold materials. However, secretion activity relies on cell survival beyond the initial seeding, which was measured by means of their metabolic activity. Interestingly, there was no correlation between the metabolic activity and seeding efficiency. Twenty-four hours after seeding, the formation of formazan blue, as an indicator of metabolic activity, was the highest in fibrin and collagen-GAG matrices, while AdMSCs seeded on the chitosan film and decellularized dermis showed comparable values initially (Figure 4(a)). In order to evaluate if these differences were due to increases in cellular death, lactate dehydrogenase (LDH) activity was measured from the supernatants. It can be seen that the decellularized dermis had a high rate of cytotoxicity (almost 100%), even after only one day in culture, whereas the collagen-GAG matrix had virtually no cytotoxic effect (Figure 4(b)).
The long-term viability of the AdMSCs seeded on the scaffolds was measured and compared at further time points after seeding. Results show that while the chitosan film and decellularized dermis have comparable metabolic activity through day 7, 14 days after seeding the chitosan film exhibited similar results to the fibrin and collagen-GAG matrices (Figure 4(a)). The collagen-GAG matrix showed steady metabolic activity throughout the 14 days, while the fibrin matrix showed an increase in activity through day 7 after which the activity decreased at day 14. Cellular death results showed a general increase in cytotoxicity as the metabolic activity of the cells increased, except for in day 7 of the fibrin and collagen-GAG matrix where the metabolic activity peaked. The highest percentage of cytotoxicity was seen in the decellularized dermis being close to 100% (Figure 4(b)).
The differences detected between the scaffolds in relation to the behavior of the AdMSCs lead to the conclusion that, depending on the physical and chemical conditions, the factors secreted by the AdMSCs can also vary considerably. Here, the secretion of 91 different angiogenic, cytokine, and chemokine factors was analyzed to obtain a characteristic secretion profile for each scaffold. Among the detected factors, the most prevalent ones were macrophage migration inhibitory factor (MIF), plasminogen activator inhibitor 1 (Serpin E1), interleukin 6 (IL-6), interleukin 8 (IL-8), chemokine (C-X-C motif) ligand 1 (CXCL1), placental growth factor (PlGF), and vascular endothelial growth factor (VEGF) (Figure 5). Due to the low viability of the cells observed after seeding, decellularized dermis scaffolds were excluded from this assay.
Compared to AdMSCs seeded directly onto tissue culture plastic, the scaffold condition itself significantly induces the release of PlGF while it reduces the release of VEGF (p < 0.05). Comparing among the scaffolds, we observed that the release of angiogenesis-inducing IL-8 was similar between fibrin matrices and two-dimensional cultures, while chitosan films and collagen-GAG matrices showed a dramatic decrease (p < 0.05). The release of inflammation-regulating IL-6 was elevated in supernatants from cells seeded on collagen-GAG matrices, while chitosan films showed the highest expression of Serpin E1. MIF and CXCL1 did not show any significant differences between scaffolds or control conditions.
Figure 5 caption: Decellularized dermis scaffolds were excluded, as previous data revealed that they did not provide a compatible environment for the cells to migrate and flourish. Chitosan films and collagen-GAG matrices show a decrease in the expression of IL-8 in comparison to fibrin matrices, which is similar to two-dimensional conditions. Collagen-GAG matrices had a significant release of IL-6, while chitosan films had an increased release of Serpin E1 over all other conditions. There are significant differences in the release of PlGF and VEGF from all scaffolds in comparison to two-dimensional cultures. *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001 when compared to the 2D control. N = 3, n = 3.
The chicken chorioallantoic membrane assay was used to monitor de novo vessel formation. Here, conditioned medium was pipetted onto autoclaved filter paper punches in order to determine whether the factors secreted by the AdMSCs had an enhanced effect attributable to the composition of the scaffold, and whether the scaffold alone had any angiogenic potential. In order to minimize irritation to the CAM and avoid affecting the result, the scaffolds themselves were not applied. In the positive control (VEGF), large existing vessels showed a tendency to move toward the filter paper, while this was not evident in the samples exposed to the AdMSC conditioned medium, suggesting that the supernatant of the cells was not as proangiogenic as pure VEGF (Figure 6(a)). Nevertheless, consistent with the in vitro data, the highest degree of neovascularization, in terms of small vessel convergence and growth, occurred with medium obtained from collagen-GAG matrices, followed by fibrin matrices and, finally, chitosan films (Figure 6). The quantification is based on arbitrary points given for de novo small vessel formation up to the reorganization of existing vessels (Figure 6(b)) [40]. These results suggest that the composition of the scaffold has a direct effect on the angiogenic factors released from the AdMSCs. No significant differences appeared between PBS, conditioned medium without cells, and DMEM alone (data not shown for the medium-exposed samples).
Discussion
Although various stem cell populations have been suggested for therapeutic use, MSCs are particularly attractive as they are well discerned and ongoing clinical trials have shown promising results in wounded tissue [4,[44][45][46]. Furthermore, there is great potential for using AdMSCs in regenerative medicine [1]. They are easy to isolate, are accessible with minimally invasive procedures, and contain a high number of cells within a small amount of tissue, and the age of the donor does not affect their proliferation rate or differentiation potential [26,27], making them ideal for clinical procedures.
For clinical application, administration and effectiveness are key factors in describing the efficacy of a given treatment. In the case of AdMSCs, this encompasses the viability of the cells under the given conditions and their ability to release beneficial growth factors to the damaged tissue. To minimize migration, which reduces the utility of the method, dermal scaffolds were used. The scaffolds examined here were of particular interest as they are currently in use or being tested for use in clinics, although their healing effectiveness to date has been subpar due to slow tissue revascularization. Furthermore, the scaffolds chosen were substantially different in structure (Figure 2) and composition (Table 1). The viability, migration, and growth factor release, especially of the angiogenic growth factors, were of particular interest in this study.
Figure 6 caption: Chicken chorioallantoic membrane in vivo analysis. Autoclaved filter paper punches with conditioned medium from scaffolds after 48 h in culture were observed over a five-day period for neovascularization of the CAM. Note the increase in small vessel convergence from the scaffold, specifically in VEGF, collagen-GAG, and fibrin matrices (a). Samples exposed to conditioned medium without cells are not pictured, as they did not differ from the negative control (PBS). Growth was analyzed from 6 replicates per treatment based on an arbitrary scoring system covering new small vessel formation and the behavior of existing vessels, as observed daily according to [40]. Briefly, a value was assigned on each of the 5 days, ranging from (0) unchanged, through slight changes in density and convergence towards the filter paper punch (1), to further increases in density and convergence (up to 5) (b). 5 mm scale bar. *p < 0.05, ***p < 0.001, ****p < 0.0001 when compared to the VEGF positive control.
After analyzing the AdMSCs ( Figure 1) and scaffolds ( Figure 2) for individual characteristics, the distribution and attachment of the seeded cells was analyzed and compared. As expected, AdMSCs adhered to all of the scaffolds but due to their composition and properties, their distribution varied greatly.
Chitosan Films.
Consistent with the lack of porosity detected in chitosan films (Figure 2(b)), the AdMSCs created a single layer on the seeding side with virtually no penetration into the material (Figure 3). A level of porosity must be available in order for cells to be able to penetrate the scaffold and form a network for cells to communicate without overcrowding. Other chitosan derived scaffolds contain artificial pores in order to facilitate cell migration [47,48]. The seeding side is critical for chitosan-based scaffolds to generate either a superficial cell layer or to create an AdMSC interface between the scaffold and the wound bed. As there is only a layer of cells, these may migrate out of the scaffold soon after transplantation and the effects of the AdMSCs on the wound bed may be beneficial for a short time in order to start a pathway towards healing. While metabolic activity increased over time, an overcrowding of the cells could limit the potential of the AdMSCs to release healing factors. The antimicrobial properties of chitosan makes it a beneficial treatment for superficial wounds and burns, to minimize scarring, decrease pain sensation, and reduce inflammation [49].
Collagen-GAG and Fibrin Matrices.
In contrast to the chitosan film, the AdMSCs seeded onto the fibrin and collagen-GAG matrices showed better penetration into the material (Figure 3), with distribution in fibrin matrices peaking at the center of the scaffold. This effect may be due to the apparent unevenness of the porosity at the center of the fibrin matrix, showing a larger pore structure on the top and bottom of the scaffold, in comparison to the center, which might inhibit the cells from migrating throughout the scaffold (Figure 2). MSCs have been shown to possess a strong attachment to fibrin by way of small binding domains with the cell membrane, something not found with other cell types [50]. The cellular gradient observed in collagen-GAG matrices showed a higher concentration at the seeding side with a steady amount of migration throughout the scaffold. As collagen is the main component of the ECM, the cells were expected to be able to attach and distribute throughout the scaffold. Beyond their porosity, as both collagen and fibrin are dominant in the ECM, it is no surprise that they demonstrate a high cellular bond. Furthermore, AdMSCs isolated from lipoaspirates have previously shown a high affinity for binding to ECM proteins [51]. Beyond that, AdMSCs seeded in collagen-GAG matrices exhibited the highest level of metabolic activity and lowest level of cytotoxicity on day one.
The fibrin and collagen-GAG matrices showed the highest amount of cell migration of the four scaffolds, though at different distributions, which could be attributed to differences in cellular adhesion and migration triggered by the material itself [52,53]. Throughout the observation, the cells seeded on the collagen-GAG matrix evidenced the steadiest rate of metabolic activity and the lowest rate of cellular death indicating the most compatible relationship between cells and biomaterial.
Decellularized Dermis.
Although AdMSCs seeded onto the decellularized dermis were able to migrate through the material, a strong decrease in metabolic activity was seen soon after seeding, indicating that it may not provide adequate space for the cells to thrive or may even induce cell death (Figure 4). This might be particularly important for the decellularized dermis as it went through cell removal during preparation. The decellularized dermis utilized here, Strattice, has been used successfully as an internally placed scaffold for treatment of subcostal hernia repair [54] and breast reconstruction [55,56]. Mirastschijski et al. have found that the decellularized dermis may be best suited for dermal wound beds that require a higher mechanical load than in those previously mentioned [57]. While residual porosity does facilitate some AdMSC migration, the high mortality rate would make this an unsuitable scaffold for a cell seeded dermal wound treatment. This high mortality rate may be due to residual chemicals from the decellularization process that cannot be easily washed away before cell seeding. However, the low cellular infiltration that we observed in vitro is in line with previous data showing similar results after subcutaneous implantation of the scaffold in a rat model [58].
Although the metabolic activity increased over time in the decellularized dermis, despite the initial rate of cellular death, of the four examined it seems to be the least compatible combination of the AdMSCs and biomaterial. This may be a result of cell overcrowding, due to tight porosity, and could limit the number of cells able to flourish. In addition, pore sizes in the decellularized dermis were not uniform enough in size for the cell-cell interaction necessary for the cells to thrive. Furthermore, the low metabolic activity observed over two weeks of seeding correlates with a high count of cell death only one day after seeding.
Scaffold-AdMSC Secretion Profile.
During the first days after wounding, the release of paracrine factors is crucial for healing [59]. Independent of their differentiation capacity, MSCs have been shown to act as anti-inflammatory and immunoregulatory agents [59,60], promote cell migration and proliferation and angiogenesis, and improve scarring [4]. The application of AdMSC seeded scaffolds to wounds could, therefore, be beneficial in all the three phases of wound healing: inflammation, proliferation, and tissue remodeling.
The physical and chemical conditions experienced by the cell can alter the cell behavior, in general, and the secretion profile, specifically. In addition, there can be further influences by exposure to biomolecules on the scaffolds, such as peptides and proteins, either artificially or intrinsically [61,62]. Although the four scaffolds' chitosan is the only material that is not found in the human body, it has been employed successfully in wound healing treatments [49]. Surprisingly, the chitosan film released significantly higher levels of Serpin E1 than the control cells and fibrin matrices. Serpin E1 is known to regulate extracellular matrix (ECM) remodeling [63], which is why a high level of expression from the collagen-GAG matrices is expected ( Figure 5).
VEGF and PlGF work together to induce angiogenesis, endothelial cell growth, and promote cell proliferation and migration. VEGF expression is dependent on PlGF while the PlGF/VEGF heterodimer induces pathological angiogenesis [64][65][66]. In general, the scaffolds had a significant effect in reducing VEGF and increasing PlGF expression relative to the two-dimensional culture ( Figure 5). As little difference was found between the release of these factors from cells on each scaffold, this may imply that the scaffold itself upregulated the angiogenic potential of AdMSCs.
An increase in IL-6 may accelerate wound healing by increasing rates of angiogenesis and epithelial cell migration [18]. The scaffold composition did not seem to affect the release of this cytokine, except in the case of the collagen-GAG matrix where it was upregulated. IL-6 is known to induce collagen and GAG production [67] and a similar increase in IL-6 can be seen with primary human dermal fibroblasts seeded on a collagen-GAG matrix [68]. IL-6 also functions in pro-and anti-inflammatory situations and is a major regulator of acute phase reactions, which indicates a wound-like stimulation in vitro. IL-8 had a much lower expression rate in the chitosan film and collagen-GAG matrices. Fibrin is known to induce IL-8 expression in human umbilical vein endothelial cells (HUVECs) [69] and a relatively high expression of IL-8 was found in a previous study utilizing primary human dermal fibroblasts on the fibrin matrix [68]. IL-6 has been linked to angiogenesis by increasing VEGF expression [70], while IL-8 has been shown to upregulate VEGF in endothelial cells [71] and bone marrow derived MSCs [72] via signaling pathways.
A hypoxic environment creates cell stress and triggers AdMSCs to release angiogenic factors via an upregulation of HIF-1α [73]. No HIF-1α expression could be detected in cells seeded on scaffolds, implying that the scaffolds themselves do not create a hypoxic environment. In the case of the chitosan scaffold, there was little cellular penetration, creating a two-dimensional-like environment. The pores in both the fibrin and collagen-GAG matrix were likely interconnected and large enough for proper gas exchange. As most of the cells seeded on the decellularized dermis died quickly, it is hard to gauge whether the scaffold itself would create a hypoxic environment.
The results suggest that the scaffolds allow for proper gas exchange, which is most likely explained by their thickness. Proper gas exchange should not be hindered in a scaffold less than 200 µm thick [74]. In larger three-dimensional scaffolds, oxygen was depleted after 7 days [75]. As Strattice is 1.5 mm thick (Table 1), this could also pose a problem for the survival of the cells. The other three scaffolds should not be affected, as their thickness after cell seeding is less than 200 µm. Furthermore, MIF and VEGF are both regulated by HIF-1α [73]. Even though the control shows a higher release of VEGF, there were no significant differences in MIF between the scaffolds and controls. Therefore, there is little chance of the scaffolds creating a hypoxic environment.
The cells released factors into the medium that contributed to increased angiogenesis in vivo, as tested in the well-established CAM assay. The CAM offers an exceptional model, as there is no immune system and the vascular networks are exposed. The conditioned medium from the collagen-GAG matrix showed no significant difference in small vessel convergence and growth from that of the VEGF positive control (Figure 6(a)). The medium from the scaffolds themselves did not differ from the vascular growth observed with PBS, indicating that the synergistic effects of the AdMSCs with the scaffolds were the main component in the increased rates of angiogenesis. Interestingly, the high levels of VEGF and PlGF released from the chitosan film in vitro did not seem to have a strong effect here. As the cell-seeded scaffolds were not used directly on the CAM, to prevent irritation, there could still be an effect from the other factors released by the cells on the chitosan film that inhibits vascular growth. This may also indicate inhibitory effects from the material of the chitosan film itself.
These results are remarkable as they show that scaffolds not only can be designed to harbor AdMSCs but also should be optimized to work synergistically with the cells in order to enhance the release of necessary and desirable factors to enhance wound healing by promoting angiogenesis, reducing healing time, and minimizing scar tissue.
Conclusions
In this work, a suitable delivery vehicle for AdMSCs to the wound that can secrete factors to facilitate healing was evaluated. AdMSCs in conjunction with the different scaffold types examined released angiogenic factors and chemokines necessary for wound healing. Although the decellularized dermis (Strattice) is used in clinical settings, its lack of porosity and the poor environment it creates for the AdMSCs do not make it an ideal candidate for a cell seeded, topically applied wound treatment. Cells seeded on the chitosan film secreted factors that are helpful in wound healing although the scaffold lacked the capability to let cells migrate throughout, leaving a crowded film of cells at the seeding side which could be lost upon transplantation. The ability for the scaffold to provide (i) an ideal environment for the cells to migrate, (ii) porosity that facilitates cell migration and crosstalk, and (iii) a biocompatible material are necessary to achieve proper healing in vivo. Through our investigative efforts, the collagen-GAG and fibrin matrices proved to have the best potential under the applied conditions as a platform for AdMSCs to enhance wound healing in vitro. The in vivo CAM data correlates with the in vitro data to further show the collagen-GAG and fibrin matrices are superior in working with the AdMSCs to promote angiogenesis and thus speed healing. | 8,614.2 | 2015-10-04T00:00:00.000 | [
"Engineering",
"Medicine"
] |
Vision and Vibration Data Fusion-Based Structural Dynamic Displacement Measurement with Test Validation
The dynamic measurement and identification of structural deformation are essential for structural health monitoring. Traditional contact-type displacement monitoring inevitably requires the arrangement of measurement points on physical structures and the setting of stable reference systems, which limits the application of dynamic displacement measurement of structures in practice. Computer vision-based structural displacement monitoring has the characteristics of non-contact measurement, simple installation, and relatively low cost. However, the existing displacement identification methods are still influenced by lighting conditions, image resolution, and shooting-rate, which limits engineering applications. This paper presents a data fusion method for contact acceleration monitoring and non-contact displacement recognition, utilizing the high dynamic sampling rate of traditional contact acceleration sensors. It establishes and validates an accurate estimation method for dynamic deformation states. The structural displacement is obtained by combining an improved KLT algorithm and asynchronous multi-rate Kalman filtering. The results show that the presented method can help improve the displacement sampling rate and collect high-frequency vibration information compared with only the vision measurement technique. The normalized root mean square error is less than 2% for the proposed method.
Introduction
In structural health monitoring, it is necessary to deploy sensors to monitor the structure's response. These sensors collect important data on various aspects of the structure, such as vibrations and displacements [1,2]. These measurements can provide valuable insights into the structure's integrity and indicate any load anomalies or structural defects. Moreover, displacement monitoring can also be used to update the finite element model of the structure, which is essential for accurately assessing, monitoring, and controlling civil infrastructure [3][4][5][6][7]. For example, peak deformation demands, including peak inter-story drift ratio and peak roof displacement, are essential indicators in earthquake engineering for evaluating structural seismic performance [8][9][10][11]. Vehicle-induced displacement is also utilized to detect bridge damage and assess bridge conditions [12]. Additionally, the displacement of a high-rise building is an important indicator of safety [13,14]. Therefore, displacement is critical in ensuring civil infrastructure's health and integrity.
There are many means of directly measuring the displacement response of a structure in the field of structural engineering, which include pull-wire displacement gauges, linear variable differential transformers (LVDT) [15], laser Doppler vibrometers (LDV) [16], Real-Time Kinematic global satellite navigation systems (RTK-GNSS) [17], etc. LVDT usually need to be installed between the target point and a fixed reference point; hence, despite the high accuracy of LVDT measurements, they are not easy to be installed in practical engineering [18,19]. As LVDT is a contact measurement method, any severe structural deformation or breakage during a shaking table test can potentially damage the LVDT. On
A Brief Review of the Kanade-Lucas-Tomasi (KLT) Method
Optical flow refers to the pattern of apparent motion of objects in an image between two frames due to either the motion of the object or the camera. For instance, in Figure 1, three target points in two adjacent images can have their positions in the second image identified by detecting the pixels whose intensity values are consistent with those of the corresponding pixels in the first image. The optical flow represents the displacement as a 2D vector (d_x, d_y) when a feature point moves from the first frame I(x, y, t) to the second frame after a time interval d_t. The optical flow equation assumes that the object's brightness does not change:

I_1(x, y, t) = I_2(x + d_x, y + d_y, t + d_t)    (1)

where I_1(x, y, t) represents the image pixels of the reference image, and I_2(x + d_x, y + d_y, t + d_t) represents the image pixels of the following image. For simplicity, let d = [d_x, d_y]^T and X = [x, y]^T.
Over the pixel window, the error function is constructed as

ε(d) = ∬_W [I_1(X) − I_2(X + d)]^2 w(X) dX    (2)

where a window W, centered on the position of a target point, is established in the first image, and w(X) is a weighting function that assigns a weight to the surrounding pixels. In the simplest scenario, w(X) = 1; another commonly used choice is a Gaussian function, which emphasizes the center of the window. Setting the partial derivative of ε with respect to d to zero gives

∂ε/∂d = −2 ∬_W [I_1(X) − I_2(X + d)] ∇I_2(X + d) w(X) dX = 0    (3)

The following formula can be obtained from Taylor's expansion:

I_2(X + d) ≈ I_2(X) + p^T(X) d    (4)

The substitution of Equation (4) into Equation (3) leads to

∬_W [I_1(X) − I_2(X) − p^T(X) d] p(X) w(X) dX = 0    (5)

where:

p(X) = ∇I_2(X) = [∂I_2/∂x, ∂I_2/∂y]^T    (6)

The following equation can be obtained from Equation (5):

Z d = e    (7)

where Z = ∬_W p(X) p^T(X) w(X) dX and e = ∬_W [I_1(X) − I_2(X)] p(X) w(X) dX. Equation (7) is solved by an iterative method to obtain the value of d; when the value of e is less than the set threshold, the approximate solution of d is obtained. In summary, the KLT tracker uses points from the previous and current frames to create motion vectors. Selecting these feature points is an essential part of the KLT method. Normally, a region of interest (ROI) is used to focus on a specific part of an image and extract the relevant information. Common feature detectors include the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and oriented FAST and rotated BRIEF (ORB) [56]. The Harris corner suggested in [44] is an efficient detector for real-time optical flow calculation because Harris corners are simple, reliable, and efficient to detect. Traditionally, the KLT algorithm calculates velocity by computing the optical flow between consecutive frames. If the small-motion assumption is not satisfied, the traditional remedy is to use an image pyramid, as shown in Figure 2 and briefly described below.
The overall pyramidal tracking algorithm proceeds as follows: the original image is used as the initial layer 0, and the image is reduced by a factor of 2^L in length and width to serve as layer L. The Gaussian pyramid is generated from the obtained images by superimposing them from bottom to top; the coordinates of the corresponding points are likewise reduced by a factor of 2^L. The displacement value of the target point on the highest layer is calculated using the method described above. This value is used in the optical-flow calculation of the next layer as an initial guess to determine an accurate displacement value. Once the displacement value is calculated, it is passed to the following layer as an initial guess, and so on down to the lowest layer (level 0), which yields the actual displacement value. The work of Kim et al. [57] provides a detailed description of this propagation process. The limitations of the KLT method are discussed and demonstrated by Won et al. [41], who show that feature loss and drift can occur in the KLT method.
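To make the tracking pipeline concrete, a minimal OpenCV-based sketch of pyramidal KLT tracking is given below. It is an illustration only, not the authors' improved KLT implementation; the video file name, ROI coordinates, and tracker parameters (window size, three pyramid levels) are assumed values, and lost features are not re-detected.

```python
# Illustrative pyramidal KLT tracking sketch with OpenCV (assumed parameters).
import cv2
import numpy as np

cap = cv2.VideoCapture("structure_video.mp4")     # hypothetical video file
ok, first = cap.read()
prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

# Detect corner features inside a region of interest (x, y, w, h are assumed values).
x, y, w, h = 100, 200, 120, 80
mask = np.zeros_like(prev_gray)
mask[y:y + h, x:x + w] = 255
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
                             minDistance=5, mask=mask, useHarrisDetector=True)
initial = p0.reshape(-1, 2).copy()

lk_params = dict(winSize=(21, 21), maxLevel=3,     # 3 pyramid levels
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

pixel_displacement = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None, **lk_params)
    good = status.ravel() == 1
    # Mean motion of the successfully tracked points relative to the first frame
    pixel_displacement.append(np.mean(p1.reshape(-1, 2)[good] - initial[good], axis=0))
    prev_gray, p0 = gray, p1                       # track frame to frame
cap.release()

# pixel_displacement still has to be multiplied by the scale factor (e.g., mm/pixel)
# obtained in the calibration stage to yield physical displacement.
```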
The overall pyramidal tracking algorithm proceeds as follows: the original image is used as the initial layer 0, and the image is reduced by 2^L times in length and width to serve as layer L. The Gaussian pyramid is generated from the resulting images by superimposing them from bottom to top; the coordinates of the corresponding points are also reduced by 2^L times. The displacement value of the target point on the highest layer is calculated using the method described in the previous section. This value is used as an initial guess in the optical-flow calculation of the next layer to determine a more accurate displacement value. Once the displacement value is calculated, it is passed to the following layer as an initial guess, and so on down to the lowest layer (level 0), where the actual displacement value is obtained. The work of Kim et al. [57] provides a detailed description of this propagation process. The limitations of the KLT method are discussed and demonstrated by Won et al. [41], who show that feature loss and drift occur in the KLT method.
Figure 3 shows an overview of the proposed method. As presented in Figure 3a, one camera is fixed on the ground to trace natural targets on the structure, and an accelerometer is placed on the same floor as the natural targets. Figure 3b illustrates the two stages of the proposed technique for displacement estimation. In the first stage, referred to as the calibration stage, several tasks are accomplished, including the correction of lens parameters, time synchronization, and scale factor calculation. Following this, the second stage, called the displacement estimation stage, is initiated.
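Before turning to the individual processing steps, the following is a minimal sketch of Harris-corner selection and pyramidal KLT tracking as described above, using OpenCV's pyramidal Lucas-Kanade implementation. The file name, ROI, and parameter values are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("vibration_video.mp4")      # hypothetical input file
ok, first = cap.read()
prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)

# Harris-based corner selection inside a user-defined ROI (illustrative values)
x0, y0, w, h = 100, 50, 200, 200
mask = np.zeros_like(prev_gray)
mask[y0:y0 + h, x0:x0 + w] = 255
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
                             minDistance=5, mask=mask, useHarrisDetector=True)

lk_params = dict(winSize=(21, 21), maxLevel=3,      # 3 pyramid levels
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

displacements = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pyramidal Lucas-Kanade optical flow from the previous frame to the current one
    p1, st, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None, **lk_params)
    good = st.reshape(-1) == 1
    # Average horizontal motion of the successfully tracked points (pixels)
    displacements.append(float(np.mean(p1[good, 0, 0] - p0[good, 0, 0])))
    prev_gray, p0 = gray, p1
cap.release()
```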
Video Preprocessing and Measurement Conversion
In this step, video preprocessing is used to correct the distortion caused by the wide-angle lens typically found in consumer-grade cameras. A chessboard pattern is used to calibrate the camera and correct lens distortion [58]. The calibration process involves capturing multiple chessboard images from different angles and orientations, enabling the estimation of the parameters of the lens distortion model. Once the distortion parameters are determined, the images are rectified to remove the distortion.
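A minimal sketch of chessboard-based calibration and rectification with OpenCV is shown below; the inner-corner count of the board and the file names are assumptions for illustration.

```python
import cv2
import numpy as np
import glob

pattern = (9, 6)                                    # assumed inner-corner count of the chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in glob.glob("calib_*.jpg"):              # hypothetical calibration images
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern, None)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# Estimate the intrinsic matrix and lens-distortion coefficients
ret, mtx, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)

# Rectify a video frame with the estimated distortion model
frame = cv2.imread("frame_0001.png")                # hypothetical frame
undistorted = cv2.undistort(frame, mtx, dist)
```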
Time Synchronization between Vision and Acceleration
This study used two separate acquisition systems to collect data from the camera and the accelerometer. Because of their different sampling rates and data sources, the two data streams must be synchronized in time before they are fused. As shown in Figure 4, to avoid the low-frequency drift commonly observed when integrating acceleration signals, the integration results were filtered using a bandpass filter. The lower limit of the passband should be large enough to suppress drift, and the upper limit should be set at 1/10 of the camera sampling frequency [59]. Additionally, the computer vision measurements were resampled to match the sampling frequency of the acceleration measurements and filtered with a bandpass filter of the same range as the integration results, which reduces the impact of frequencies outside the filter band. A cross-correlation analysis was then used to finely align the data from the camera and the accelerometer [54], with the time lag determined at the point where the cross-correlation reaches its maximum. This process enabled accurate matching of the data from both systems and properly synchronized the recorded data.
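A minimal sketch of these synchronization steps is given below, assuming example sampling rates of 500 Hz (accelerometer) and 100 Hz (camera); the band edges, record lengths, and placeholder signals are illustrative assumptions.

```python
import numpy as np
from scipy import signal

fs_acc, fs_cam = 500.0, 100.0                 # assumed sampling rates (Hz)
dt_acc = 1.0 / fs_acc
acc = np.random.randn(5000)                   # placeholder accelerometer record (10 s)
disp_vis = np.random.randn(1000)              # placeholder vision displacement record (10 s)

# Double-integrate acceleration to displacement (drifts without filtering)
vel = np.cumsum(acc) * dt_acc
disp_acc = np.cumsum(vel) * dt_acc

# Band-pass filter: lower edge large enough to suppress drift,
# upper edge at 1/10 of the camera sampling frequency
b, a = signal.butter(4, [0.5, fs_cam / 10.0], btype="bandpass", fs=fs_acc)
disp_acc_f = signal.filtfilt(b, a, disp_acc)

# Resample the vision displacement to the accelerometer time base, filter identically
t_cam = np.arange(disp_vis.size) / fs_cam
t_acc = np.arange(disp_acc_f.size) / fs_acc
disp_vis_rs = np.interp(t_acc, t_cam, disp_vis)
disp_vis_f = signal.filtfilt(b, a, disp_vis_rs)

# Cross-correlation gives the lag (in samples) that best aligns the two signals
xc = signal.correlate(disp_vis_f, disp_acc_f, mode="full")
lag = np.argmax(xc) - (disp_acc_f.size - 1)
time_offset = lag / fs_acc                    # seconds by which the vision record leads
```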
Calculating the Scale Factor
The scale factor λ, determined by the distance between the camera and the target object, translates the image pixel values into real-world metric values, as shown below.
where D is the actual dimension of a known object, and d is the number of pixels in the image that cover the object. After time synchronization, the displacements obtained from both methods are truncated to the same length, and the scale factor is then estimated using the least squares method. By implementing these steps, potential discrepancies between the displacements are minimized, making the results more reliable.
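A minimal sketch of the two ways of obtaining the scale factor described above: from a known dimension, and by least squares against the synchronized, band-passed acceleration-derived displacement. The numeric values and placeholder signals are assumptions for illustration.

```python
import numpy as np

# (a) From a known physical dimension: lambda = D / d
D_mm = 300.0        # assumed actual dimension of a known object (mm)
d_px = 385.0        # assumed number of pixels covering that object
scale_known = D_mm / d_px

# (b) Least-squares fit between the synchronized, equally long displacement records:
#     vision displacement in pixels vs. acceleration-derived displacement in mm
disp_vis_px = np.random.randn(4000)                                # placeholder pixel displacement
disp_acc_mm = 0.78 * disp_vis_px + 0.02 * np.random.randn(4000)    # placeholder reference
scale_lsq = float(np.dot(disp_vis_px, disp_acc_mm) / np.dot(disp_vis_px, disp_vis_px))
```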
Drift-Free KLT Method
Figure 5 describes the detailed procedure for estimating the target displacement in the i-th frame. It is important to note that the proposed technique applies only to in-plane motion estimation, and only one direction is considered, though it can be extended to two directions. The method includes the following steps: first, feature points, such as Harris corner points, are selected in the reference frame. Using the a priori estimate, the current frame image is translated. Image translation adjusts the images so that the displacements fall within the range of small motion, enabling the application of the Taylor expansion of Equation (4). After translating the image, the KLT algorithm calculates the optical flow between the reference frame and the current frame to obtain the average velocity of the selected feature points, which is used to determine their average displacement. In Equation (9), d is the displacement between frames, and d_KLT is the displacement calculated from the drift-free KLT method; d_translate is the image translation in pixels, calculated as follows: where D_predicted is the predicted displacement of the target object. Using the a priori estimate in the proposed method improves the accuracy of displacement estimates by minimizing the impact of drift-type errors that can accumulate over time. Furthermore, by selecting feature points with strong texture in the reference frame and employing optical flow to calculate displacement, the method further improves the accuracy of displacement estimates. Consequently, this approach improves the accuracy and reliability of the displacement estimation, particularly in cases where the initial displacements do not meet the small motion assumption, and the incorporation of image translation makes the method adaptable to a wide range of scenarios.
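A minimal sketch of the image-translation idea follows: the current frame is shifted back by the rounded predicted displacement so that the residual motion relative to the reference frame stays within the small-motion range, and the total displacement is the sum of the translation and the residual KLT estimate. The OpenCV calls are standard; the function interface and parameter handling are assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def drift_free_displacement(ref_gray, cur_gray, p_ref, d_predicted_px, lk_params):
    """Estimate the horizontal displacement of tracked points relative to the
    reference frame, pre-translating the current frame by a predicted displacement."""
    # Translate the current frame back by the (integer) predicted displacement
    d_translate = float(np.round(d_predicted_px))
    h, w = cur_gray.shape
    M = np.float32([[1, 0, -d_translate], [0, 1, 0]])
    shifted = cv2.warpAffine(cur_gray, M, (w, h))

    # Residual motion is now small, so KLT from the *reference* frame remains valid
    p_cur, st, err = cv2.calcOpticalFlowPyrLK(ref_gray, shifted, p_ref, None, **lk_params)
    good = st.reshape(-1) == 1
    d_klt = float(np.mean(p_cur[good, 0, 0] - p_ref[good, 0, 0]))

    # Total displacement in the spirit of Equation (9): translation plus residual flow
    return d_translate + d_klt
```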
Asynchronous Kalman Filter
The Kalman filter is a widely used method for data processing that estimates a signal by continuously predicting and correcting in the time domain. In general, the sampling frequency of the accelerometer is higher than the frame rate of the video. Smyth and Wu [60] used a multi-rate Kalman filter to fuse acceleration and displacement at different sampling rates to improve the estimation of the displacement signal. Ma et al. [55] proposed an asynchronous Kalman filter to fuse acceleration and displacement with adaptive parameters.
For asynchronous situations, Ma et al. [55] categorized the time steps into three types; Figure 6 shows an overview of the proposed method. Type 1 involves only acceleration updates, type 2 involves visual updates, and type 3 involves acceleration updates immediately following visual updates. Among these three types, only in type 2 are the displacement value and its uncertainty fused when computing the computer vision displacement updates.
Suppose X_k = [x_k, ẋ_k]^T is a state variable, where x_k and ẋ_k represent the displacement and velocity, respectively, at the k-th time step. A discrete state-space model for the relationship between acceleration and displacement can then be described as: where w_k and v_k are the noises of the measured acceleration and displacement, respectively, Q and R are the corresponding variances of w_k and v_k, dt is the time interval of the time step, and A and B are the state transition matrix and control input matrix, respectively. In this case, they are functions of the time interval: Assume that during type 1, only acceleration is considered. The prior state X_k^- and its covariance P_k^- were obtained as follows: where dt_a and q denote the time interval and noise variance of the acceleration measurements, respectively. The q value can easily be estimated from laboratory testing. Since no other measurement is available in this time interval, the posterior state is simply the prior state. In type 2, the prior state Y_i^- and covariance G_i^- can be estimated according to the following state estimation: where dt_{k,i} denotes the time interval between the k-th acceleration measurement and the i-th vision measurement. With Y_i^-, the drift-free KLT method was applied to estimate the displacement d_i from the vision measurements.
The posterior state and its covariance were calculated as follows: Here, R is calculated as: where σ_D^2 is the observation noise of the displacement measurement. In type 3, the prior state and covariance are estimated according to the following state estimation:
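A minimal sketch of the constant-acceleration-input state-space model and the prediction/update steps described above is shown below. The matrices follow the standard discrete kinematic model; the process-noise form and numeric values are assumptions, and the adaptive details of Ma et al. [55] are not reproduced.

```python
import numpy as np

def predict(x, P, acc, dt, q):
    """Type 1 / type 3 step: propagate the state with a measured acceleration input."""
    A = np.array([[1.0, dt], [0.0, 1.0]])             # state transition matrix
    B = np.array([[0.5 * dt**2], [dt]])               # control input matrix (acceleration)
    Q = q * np.array([[0.25 * dt**4, 0.5 * dt**3],    # process noise from acceleration noise q
                      [0.5 * dt**3,  dt**2]])
    x = A @ x + B * acc
    P = A @ P @ A.T + Q
    return x, P

def update(x, P, d_meas, r):
    """Type 2 step: fuse a vision displacement measurement with variance r."""
    H = np.array([[1.0, 0.0]])                        # only displacement is observed
    S = H @ P @ H.T + r
    K = P @ H.T / S                                   # Kalman gain (2x1)
    x = x + K * (d_meas - (H @ x))
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Example usage: x = [displacement, velocity]^T
x = np.zeros((2, 1))
P = np.eye(2)
x, P = predict(x, P, acc=0.3, dt=1 / 500, q=1e4)      # accelerometer step
x, P = update(x, P, d_meas=0.05, r=0.01)              # vision step
```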
Parameter Estimation
As described in Equation (8), the actual displacement is the product of the scale factor and the pixel displacement. Therefore, according to the law of error transfer, the variance of the displacement measurement can be calculated by the following equation: where σ_u^2 is the variance of the displacement measurement, d and σ_d^2 are the mean and variance of the pixel displacement, respectively, and λ and σ_λ^2 are the mean and variance of the scale factor, respectively. For structural monitoring, the mean value of the displacement d can be assumed to be 0, and the variance of the displacement σ_d^2 is estimated by averaging the per-frame variances obtained from the matching results for each frame.
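For reference, the first-order law of error transfer applied to the product u = λd gives the variance expression used above; this is the standard approximation written out, not an equation copied from the paper.

```latex
u = \lambda d
\quad\Longrightarrow\quad
\sigma_u^{2} \;\approx\; \bar{\lambda}^{2}\,\sigma_d^{2} \;+\; \bar{d}^{2}\,\sigma_\lambda^{2},
\qquad \text{and with } \bar{d} = 0:\quad
\sigma_u^{2} \;\approx\; \bar{\lambda}^{2}\,\sigma_d^{2}.
```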
Comparison with Conventional Motion Estimation Approaches
This paper compares the proposed approach with two commonly used motion estimation methods: (a) a feature-matching-based method [32] and (b) the commonly used KLT tracker described in Section 2. The feature-matching-based method consists of the following steps: (1) video preprocessing: this step is the same as in the proposed method. (2) Feature detection and description: this step detects distinctive features or key points in the ROI; these features are usually corners, edges, or regions with rich texture. Here, the Harris corner detection algorithm is used, and a descriptor is then computed for each feature. (3) Feature matching: the descriptors from the two images are compared to find the best match for each feature. (4) Outlier removal: since not all matched features correspond to the same physical point in the scene, some matches may be incorrect; random sample consensus (RANSAC), commonly used in computer vision and image processing, is an effective technique for removing such outliers. (5) Motion estimation: the relative displacement between the two images is computed from the set of correctly matched features.
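A minimal sketch of this comparison pipeline (detection, description, matching, RANSAC outlier removal, translation estimate) is given below. ORB is used here as the detector/descriptor stand-in; the paper's exact descriptor around Harris corners may differ, and the threshold values are assumptions.

```python
import cv2
import numpy as np

def match_displacement(img1, img2):
    """Estimate the dominant horizontal translation between two grayscale images."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with cross-check
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC removes incorrect correspondences while fitting a similarity transform
    M, inliers = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    inliers = inliers.reshape(-1).astype(bool)

    # Horizontal displacement from the inlier correspondences (pixels)
    return float(np.mean(pts2[inliers, 0] - pts1[inliers, 0]))
```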
Experimental Setup
The proposed method for drift-free large-motion measurement is investigated in a laboratory experiment to determine its performance and its sensitivity to the video frame rate. Figure 7 illustrates the validation setup: a three-story steel building model excited by a uniaxial shaking table. Structural responses were measured simultaneously using the proposed system and a laser displacement sensor that served as the ground truth; details of these devices are given in Table 1.
Table 1. Type and description of the measurement devices.
Camera: A Sony ILCE-7RM4 camera, featuring a resolution of 1920 × 1080 p, is utilized to capture the video of the structural vibration at a frame rate of 100 fps.
Laser displacement sensor (LDS): A Panasonic HG-C 1200 micro laser distance sensor is employed to supply the ground-truth displacement data for the top floor, with a sampling rate of 500 Hz.
Accelerometer: A KT-1100 accelerometer is employed to deliver the acceleration data for the top floor, with a sampling rate of 500 Hz.
Experimental Result
To quantify the measurement accuracy of the results, an error analysis is conducted using the normalized root-mean-square error (NRMSE): where x̂ is the estimated displacement, x is the reference displacement, and N is the number of displacement measurements. Figure 8 shows the grayscale initial video frame; the selected target region is framed in a red box containing the salient corner features to be tracked. The figure shows that the Harris detector successfully detects the corner of the structure and other feature points. After the feature points are selected, the feature-matching-based method is employed in the calibration stage to estimate the movement of the target object for each frame of the video.
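A minimal sketch of the NRMSE computation follows; normalization by the peak-to-peak range of the reference is one common convention and is an assumption here, since the exact normalization is given only in the paper's equation.

```python
import numpy as np

def nrmse(x_hat, x_ref):
    """Normalized root-mean-square error between estimated and reference displacement."""
    x_hat, x_ref = np.asarray(x_hat, float), np.asarray(x_ref, float)
    rmse = np.sqrt(np.mean((x_hat - x_ref) ** 2))
    return rmse / (np.max(x_ref) - np.min(x_ref))   # normalization convention assumed

# Example: a perfect estimate gives 0.0
print(nrmse([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))
```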
Under case (1), the scale factor estimated with the proposed procedure is shown in Figure 9, while the scale factor obtained through direct measurement of the structural dimensions is 0.78 mm/pixel. The agreement between the two values shows that the proposed scale estimation is effective, so in scenarios where the structural dimensions are not available, the scale factor can be estimated in this way.
In the Kalman filter process, the noise parameter q is selected as 10^4 mm²/s² in this experiment, estimated based on prior experience. For case (1), with a video sampling rate of 100 Hz, the results are shown in Figure 10. As shown in the figure, the KLT method exhibits a significant drift phenomenon, while the drift-free method proposed in this paper does not have this issue. All comparisons are made by linearly interpolating the data to 500 Hz. Compared with the feature-based and KLT methods, the proposed method reduces the NRMSE value by 38% and 83%, respectively. Figure 11 shows that the target remains within the ROI when the proposed image translation is used; in this figure, the lower frame has moved by 16 pixels, and the target in the ROI remains roughly the same, verifying the effectiveness of image translation under significant displacement.
Figure 11. Comparison of the ROI at different steps of case (1). The red box represents the ROI, and the green dots represent the feature points.
The vision sampling frequency was then varied to investigate its influence further and to reduce computation time. In case (1), by resampling the video to reduce the sampling frequency to 50 Hz, 25 Hz, and 10 Hz, the NRMSE values of the proposed method increase to 0.91%, 1.52%, and 1.51%, respectively, as shown in Figure 12. In comparison, the feature-matching method changes to 1.57%, 1.66%, and 2.51%, respectively, and the error of the KLT method is higher than that of the feature-matching method. In case (1), since the excitation frequency is only 1 Hz, the forced vibration frequency can be accurately captured in all cases.
In case (2), the input frequency at the base of the structure was set to 4 Hz, with an amplitude of 30 mm, allowing for the evaluation of the effectiveness of the proposed method under large displacement and high-frequency vibration conditions. As in previous tests, the laser displacement sensor at the top of the structure was used as a reference for calculating the error values. This experimental setup aimed to demonstrate the accuracy and reliability of the proposed method under large-amplitude, high-frequency vibrations. Figure 13 shows the displacement at the 100 Hz and 10 Hz sample frequencies. As the figure shows, compared with 100 Hz, the pure 10 Hz vision sample frequency failed to capture several peak values. Under a frequency of 100 Hz, the NRMSE values for the proposed method and the feature-matching method were 1.3% and 1.58%, respectively. At a sampling frequency of 10 Hz, the KLT method could not detect displacements and was, thus, omitted from the comparison.
The NRMSE values for the proposed and feature-matching methods at 10 Hz were 5% and 12%, respectively. These findings indicate that for high-frequency vibrations, the accuracy of purely visual methods is limited due to the constraints imposed by the Nyquist sampling theorem, preventing real-time data acquisition. The computational time for each frame is presented in Table 2, which reveals that the proposed method's computation time is shorter than the feature-matching method but longer than the KLT algorithm. In principle, the computation time for the proposed method should be close to that of the KLT algorithm; the discrepancy may be attributed to the time required for image translation and algorithm initialization. Further investigation into optimizing the proposed method's computation time may help close the gap and make it more comparable to the KLT algorithm, enhancing its practical applicability in real scenarios. Additionally, the proposed method can provide real-time estimations of drift-free displacements due to the reduced computation time. This advantage makes the method more suitable for applications where rapid and accurate displacement measurements are critical. The proposed method can outperform alternative approaches, particularly in scenarios with high-frequency vibrations or large displacements, by offering a balance between accuracy and efficiency.
Case (3) presents the displacement of the frame under the excitation of the El Centro earthquake wave. Due to the frame's flexibility, unlike cases (1) and (2), the top-floor displacement is primarily governed by the frequency of the structure. In case (3), the time history curves and NRMSE values were calculated for different video sampling rates, as shown in Figure 14. As the sampling rate decreases, the NRMSE increases from 0.83% to 0.93%, 0.91%, and 1.13%. This trend demonstrates the influence of the sampling rate on the accuracy of displacement measurements. Under earthquake conditions, the NRMSE values decrease for the given conditions, indicating that the proposed method exhibits robustness. This performance demonstrates the method's ability to maintain accuracy and reliability even in challenging situations. The method's robustness is crucial in practical applications, where dynamic conditions and external disturbances can significantly impact the quality and reliability of displacement estimates.
Power spectral density (PSD) is a function used to describe the energy distribution of a signal in the frequency domain, and is frequently used in signal processing and communication systems to describe the spectral characteristics of noise and signals. Figure 15 shows that the proposed method's PSD is closer to the reference measurement results. Note that here the PSD is calculated without interpolation. The high-frequency information of the structure is more similar to the LDS results, which is beneficial for determining the structure's frequencies and mode shapes. For the low-frequency portion, after applying the Kalman filter, the power spectral density curve is closer to the pure visual results. The first frequency, 2.63 Hz, is successfully identified in both scenarios. Under the 10 Hz scenario, the vision method failed to identify the second mode at 6.84 Hz, and the third frequency, 18.55 Hz, is not apparent from those curves. These results demonstrate the effectiveness of incorporating the Kalman filter in improving the accuracy of the displacement estimates, particularly in capturing the structure's essential dynamic characteristics across various frequency ranges.
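A minimal sketch of computing a PSD with Welch's method is given below; the sampling rate, segment length, and placeholder signal are illustrative assumptions.

```python
import numpy as np
from scipy import signal

fs = 500.0                                     # assumed fused-displacement sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
disp_fused = np.sin(2 * np.pi * 2.63 * t) + 0.1 * np.random.randn(t.size)  # placeholder record

# Welch PSD estimate; peaks indicate the structure's natural frequencies
f, pxx = signal.welch(disp_fused, fs=fs, nperseg=4096)
print(f[np.argmax(pxx)])                       # dominant frequency (Hz)
```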
Conclusions
This paper uses an accelerometer and computer vision techniques to fuse contact and non-contact measurements of structural dynamics and thereby exploit the advantages of both. Computer vision techniques cannot capture high-frequency vibration information and require additional parameters to estimate the scaling factor, while accelerometers cannot monitor low-frequency displacements and suffer from zero drift. In response to these shortcomings, this paper proposes to fuse data from computer vision and accelerometers using Kalman filtering and to calculate the scaling factor using the least squares method.
The method's reliability is verified using a shaking-table test of a frame structure model. The results show that (1) the method can reliably estimate the scale factor; (2) in the time domain, the NRMSE value is effectively reduced and the overall displacement measurement accuracy is improved; and (3) in the frequency domain, the proposed data fusion compensates for the low sampling rate of pure computer vision and effectively improves the signal-to-noise ratio of the displacement data in the higher-order mode range.
The study also investigated the impact of lowering the sampling frequency on the video vision technique. The findings reveal that the accuracy of the displacements is only slightly affected when the sampling frequency is decreased from 100 to 10 Hz. The fused displacements' power spectral densities remain unchanged, even though the sampling frequency is reduced to a tenth of its original value. This demonstrates that the proposed fused method is a feasible and efficient alternative for measuring displacement in civil engineering structures.
"Engineering",
"Computer Science"
] |
Predator-mediated selection and the impact of developmental stage on viability in wood frog tadpoles (Rana sylvatica)
Background: Complex life histories require adaptation of a single organism for multiple ecological niches. Transitions between life stages, however, may expose individuals to an increased risk of mortality, as the process of metamorphosis typically includes developmental stages that function relatively poorly in both the pre- and post-metamorphic habitat. We studied predator-mediated selection on tadpoles of the wood frog, Rana sylvatica, to identify this hypothesized period of differential predation risk and estimate its ontogenetic onset. We reared tadpoles in replicated mesocosms in the presence of the larval odonate Anax junius, a known tadpole predator.
Results: The probability of tadpole survival increased with increasing age and size, but declined steeply at the point in development where hind limbs began to erupt from the body wall. Selection gradient analyses indicate that natural selection favored tadpoles with short, deep tail fins. Tadpoles resorb their tails as they progress toward metamorphosis, which may have led to the observed decrease in survivorship. Path models revealed that selection acted directly on tail morphology, rather than through its indirect influence on swimming performance.
Conclusions: This is consistent with the hypothesis that tail morphology influences predation rates by reducing the probability a predator strikes the head or body.
Background
Many organisms exploit different environments over the course of their life cycle. Perhaps the most extreme example of this shift in resource use is that which accompanies metamorphosis in animals with complex life cycles [1]. Complex life cycles - hereafter referring to organisms with at least two discrete post-embryonic life stages [2,3] - are ubiquitous in animals, being expressed in at least 80% of all species [4,5]. They may evolve for several reasons, such as trophic switching or specialized dispersal/breeding forms [6]. The tradeoffs that accompany shifts in niche occupancy will typically be accompanied by divergent selective regimes and alternative adaptations. In part, this accounts for the large differences in morphology, physiology, behavior, and other aspects of the phenotype observed among life stages. Although dramatically divergent morphologies among different life stages allow individuals to exploit multiple kinds of resources throughout ontogeny, complex life cycles also involve functional trade-offs and thereby create a new problem: how to optimize the transition between life stages [7,8].
The challenge of adapting to multiple adaptive peaks can be partially resolved by genetic and developmental decoupling among life stages [3]. Nonetheless, it is often the case that genetic, developmental, and functional correlations persist across life stages (e.g., [9][10][11][12][13][14]). Moreover, even if there is complete adaptive decoupling of divergent life stages, the transitional period between life stages is still likely to be a performance trough that exposes individuals to increased risks. Indeed, the more differentiated the life stages, the more intense the risks are likely to be. Metaphorically, the transition from juvenile to adult may be viewed as movement between alternative peaks on an individual's adaptive landscape [15,16], where peaks represent correspondence between an individual's phenotypic traits and the local maximum probability of survival.
Many amphibians exhibit a complex life cycle in which larval development (intervals of which are referred to as Gosner stages in frog tadpoles; [17]) is followed by metamorphosis into an adult form [5,18]. Tadpoles are highly specialized for feeding, and the tadpole body plan consists mostly of a globose body and a sheet-like, laterally compressed tail [19]. During metamorphosis, the tail is resorbed as hind and forelimbs emerge, thereby facilitating the transition from an aquatic swimming form (undulatory, axial locomotion) to a terrestrial hopping form (saltatory, appendicular locomotion). It is during the intermediate stages of metamorphosis that individuals are thought to experience increased predation risk [7]. The hypothesized period of increased predation risk separating larval and adult forms derives from the observation that metamorphs are optimized for neither larval nor adult niches [20]. For example, emergent hind limbs may impose drag and reduce swimming performance [19][20][21] and residual tail tissue may negatively impact saltatory locomotion [7]. For instance, metamorphosing chorus frogs, Pseudacris triseriata, are more likely to be captured by predatory garter snakes, Thamnophis sirtalis, than are either tadpoles or adult frogs [7]. Laboratory selection experiments on tadpoles likewise suggest the presence of a performance decline at metamorphosis [22]. Field experiments designed to measure both natural selection and variation in viability during ontogenetic stages near the developmental switch between life-stages, however, are still lacking.
Here, we test a set of related hypotheses about variation in survival probability in the wood frog, Rana sylvatica. We begin broadly, by first testing whether fitness (i.e., survival) correlates with morphology across tadpole development [22][23][24][25]. This first question is designed to test the hypothesis of increased predation risk during tadpole metamorphosis. We next use path analytic models to compare alternative hypotheses regarding the causal structure underlying selection on tadpole morphology. This includes a test of the hypothesis that tail morphology is subject to selection via its effect on swimming performance [24,26], which may be important for predator escape. We also consider the alternative hypothesis that tail shape may, as has been demonstrated previously, enhance survival by serving as a "lure" to attract predatory attacks towards the tail, thereby reducing the probability of mortal wounds to the head/body region [25,[27][28][29][30].
Methods
We collected tadpoles of the wood frog, Rana sylvatica, from a single pond near Randolph, Vermont, USA (43°54' N, 72°38' W) on June 11, 2010. The pool naturally contained larval odonates and other predatory invertebrates (Calsbeek and Kuchta, pers. obs.). Tadpoles were held overnight in 5-gallon food-grade plastic buckets with filtered pond water, and were fed an ad libitum diet of boiled lettuce. The morning after capture, we individually marked each tadpole with a unique color-coded combination of elastomer dyes (visible elastomer implants available from Northwest Marine Technologies, Shaw Island, WA, U.S.A.) that we injected into the dorsal half of the tail fin, posterior to the body wall. Tadpoles were immobilized (but not anesthetized) during the marking procedure by holding them in a plastic multi-channel pipette well. We scored each tadpole's developmental stage [17] with the aid of a dissecting microscope just prior to the initiation of the selection experiment (mean Gosner stage = 34 ± 4.62 SD).
Tadpoles were then individually transferred to a V-shaped glass tank (which imposed a consistent orientation on the tadpoles) with a size standard, and were digitally photographed. We used digital images of each tadpole to make the following linear measurements: head length: the distance from the anterior tip of the snout to the junction of the body with the tail; head height: the depth of the head at its tallest point; tail length: from the junction of the tail with the body wall to the distal tip of the tail; tail muscle height: muscle height at the tallest point of the tail muscle; and tail fin height: fin height at the tallest part of the tail.
We measured swimming performance for half of the individuals in our selection study (N = 200 tadpoles) using a small (36L × 26W × 5H cm) tank containing filtered pond water and a size standard. Rapid development among the tadpoles held in buckets prevented us from measuring swimming performance for the remaining 200 individuals. Swimming trials were videotaped at 250 frames/sec using a high-definition digital camcorder (JVC Evario GZ-HM550-bu). Each tadpole was introduced to the swimming chamber and then motivated to initiate a "C-start" by touching the junction point between the tail fin and the body wall using a small metal pointer. We recorded three C-starts for each tadpole and used the fastest of these trials to estimate swimming performance, recording average speed over the 50 fastest frames. We chose to use this measure in our selection analyses because fifty frames was the average time required to swim one body length, and we assume that this is a good metric for predator avoidance. Burst speed was measured along the path of the tadpole movement using MotionAnalysis software (available from M. Chappell, University of California, Riverside, CA, U.S.A., http://warthog.ucr.edu/). We used the tadpole eye as a landmark for tracking individuals. All capturing, marking, photography, and swimming performance trials were conducted within 36 hours and the tadpoles were immediately transferred to cattle tanks for the selection experiment.
Selection experiment
We conducted our selection experiment using eight 1136 L (300 gallon) cattle tanks that were randomly selected from an array of 49 tanks housed in an open field near the Dartmouth College campus. One month before introducing tadpoles, cattle tanks were cleaned and filled with ground water, 0.550 kg of dried oak leaf litter, 15.4 g of rabbit chow (a nutrient source), and a three-liter aliquot of mixed zooplankton and phytoplankton collected from a pond near Norwich, VT (43.73°N, 72.31°W). We added five larval dragonflies (Anax junius) to each tank to serve as predators on tadpoles. To provide developing frogs with retreat sites, we placed three white water lily (Nymphaea odorata) fronds on the water surface of each tank. To provide dragonfly larvae with perches, we used stones to anchor three to five tree branches (~100 cm) to the bottom of each tank. Finally, we randomly assigned 50 tadpoles to each of the eight tanks. Set up this way, the cattle tanks functioned as self-sustaining mesocosms that mimicked conditions experienced by tadpoles in nature [31]. We covered each cattle tank with 0.5 × 0.5 cm hardware cloth pulled taut and secured with elastic cords. This functioned to shade the tanks, prevent predation, ensure that no metamorphosing individuals escaped, and preclude large, predatory insects from laying eggs.
We recorded the identity of individual surviving tadpoles in our selection experiment five and fourteen days following introduction to the artificial ponds. Survival was scored by removing all the leaf litter and filtering each tank to recover tadpoles with hand-held dip nets. We also verified the presence of all five dragonfly larvae in each mesocosm (one dragonfly larva in each of three tanks was replaced to account for single dead individuals). After the first census, we replaced the leaf litter, dragonfly larvae, and tadpoles, and re-covered the tanks with the shade cloth. Following the second census, all tadpoles were brought back to the laboratory, sacrificed with an overdose of MS-222, and stored in 70% ethanol.
General linear models were used to calculate selection gradients [32,33] for linear (β) and non-linear (γ) forms of selection. First, competing models were compared using Akaike's information criterion (AIC [34]; Table 1). This metric, which does not require nested models, calculates the likelihood of the model given the data and the number of parameters. Consequently, two models with equal likelihoods that differ in the number of parameters will have different AIC values, and the model with the smaller number of parameters will be favored. Next, the difference between the preferred model (i.e., with the lowest AIC value) and each of the subsequent models was calculated (Δi) by stepwise inclusion of the remaining linear and quadratic terms. We then calculated the normalized relative likelihoods of the models, also known as the Akaike weights (w_i), which quantify the relative support for different models [35]. Finally, we calculated the evidence ratio, which compares each model to the best model and provides the relative odds of competing models.
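A minimal sketch of the AIC bookkeeping described above (Δi, Akaike weights, and evidence ratios) for a set of candidate models is shown below; the AIC scores used are placeholders, not values from this study.

```python
import numpy as np

aic = np.array([512.3, 514.1, 517.8, 520.0])   # placeholder AIC scores for candidate models

delta = aic - aic.min()                        # delta_i relative to the best model
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                       # Akaike weights w_i
evidence_ratio = weights.max() / weights       # odds of the best model vs. each candidate

for i, (d, w, er) in enumerate(zip(delta, weights, evidence_ratio)):
    print(f"model {i}: dAIC={d:.2f}  w={w:.3f}  evidence ratio={er:.2f}")
```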
Linear selection gradients were calculated from models that included only linear terms, whereas quadratic gradients (e.g., non-linear selection) and cross-product terms (i.e., correlational selection) were calculated from models that included both the linear and quadratic terms. Because the GLM underestimates quadratic terms by half [36,37], quadratic gradients and their standard errors were doubled. Though parametric statistics provide robust estimates of selection gradients and other parameters [32,38], these tests may be violated by survival data (live/die), which tend to have non-normally distributed errors [39,40]. We computed significance values for selection gradients using generalized linear models including a logit link function [41]. Prior to pooling data from individual tanks (i.e., replicates), we tested for any interaction between relevant terms and the factor for tank. None of these were significant, indicating that selection operated in the same way in all replicates. We dropped the interaction terms but retained a factor for "tank" in our models. The factor for tank explained a significant portion of the variance in all full models (0.02 < P < 0.03), but not in reduced models (0.06 < P < 0.08). All variables used in selection analyses were standardized to a mean of zero with unit standard deviation, except our fitness variable (survival), which was scaled by the mean [32,42]. The degree of multi-colinearity among traits was assessed by estimating variance inflation factors (VIF; [43]), all of which were less than five. We visualized fitness surfaces using cubic splines [44].
Table 1 legend: AIC score = Akaike Information Criterion; Δi = difference in AIC scores between the best model and each subsequent model; w_i = normalized relative likelihood of the model given the data; Evidence Ratio = the relative odds that a model is the best given the data. The first row of the table represents the preferred model, which includes linear terms for Head Length, Tail Length, and Tailfin Height, and quadratic terms for Head Length and Tail Length (see Table 4). Subsequent rows show the influence of adding each indicated trait in succession. Traits labeled with a superscript "2" represent quadratic terms.
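A minimal sketch of this workflow (standardized linear and quadratic selection gradients, with significance assessed by a logit-link GLM) is given below using statsmodels; the data frame and trait columns are placeholders, and the Gosner-stage and tank terms used in the study are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder data: two standardized traits and binary survival
rng = np.random.default_rng(1)
df = pd.DataFrame({"tail_fin_height": rng.normal(size=300),
                   "tail_length": rng.normal(size=300),
                   "survival": rng.integers(0, 2, size=300)})

traits = ["tail_fin_height", "tail_length"]
z = (df[traits] - df[traits].mean()) / df[traits].std()      # standardize traits
w_rel = df["survival"] / df["survival"].mean()               # relative fitness

# Linear gradients (beta) from a linear-terms-only regression on relative fitness
beta = sm.OLS(w_rel, sm.add_constant(z)).fit().params[traits]

# Quadratic gradients (gamma): add squared terms, then double the quadratic coefficients
zq = z.copy()
for t in traits:
    zq[t + "_sq"] = z[t] ** 2
gamma = 2 * sm.OLS(w_rel, sm.add_constant(zq)).fit().params[[t + "_sq" for t in traits]]

# Significance assessed with a binomial GLM (logit link) on raw survival
logit_fit = sm.GLM(df["survival"], sm.add_constant(zq),
                   family=sm.families.Binomial()).fit()
print(beta, gamma, logit_fit.pvalues, sep="\n")
```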
Path analysis was used to investigate the structure of causal relationships in our selection experiments [45,46]. First, we developed a set of a priori causal path models based on the competing hypotheses that variation in survival was most highly dependent on swimming performance versus predator evasion by caudal luring (Figure 1). The swimming performance model included causal paths from morphology → performance → fitness, while the caudal luring model was reduced to causal paths from morphology → fitness only. This latter case models a situation in which a trait other than burst swim speed mediates the relationship between morphology and fitness [47]. A third model combined the two models above, and allowed for the possibility that morphology impacts fitness through both measured and unmeasured performance variables. Significance tests for individual path models were based on comparisons in which the covariance structure of each model was tested against the covariance expected under the assumption the model was correct [48]. A significant difference in this comparison indicates that the model in question provides a poor fit to the data. Path analyses, including significance tests, were performed using the program AMOS v. 18 [49]. In a second analysis, our Model 3 (Figure 1) was iteratively reduced to its significant components by sequentially setting causal paths with the lowest partial regression coefficients and the highest P values to zero [50]. Alternative models were compared using Akaike's information criterion (AIC, [34]), including the difference between the preferred model and each subsequent model (Δi), normalized relative likelihoods (w_i), and evidence ratios [35,51].
In addition to path analysis using maximum likelihood, we also conducted Bayesian analyses of the data. We did this to account for the binomial distribution of our fitness variable (survival), which likely violates the assumption of normal errors and multivariate normality in least-square calculations [52,53]. Bayesian analysis in Amos 18 [49] employs a Markov Chain Monte Carlo (MCMC) algorithm for estimating posterior distributions, and properly accounts for the binomial status of our fitness variable. Parameter estimates were obtained from 150,000 generations following a burn-in of 500 generations. Convergence of the MCMC algorithm was assessed using the convergence statistic developed by Gelman et al. [54] and implemented in AMOS [49]. The significance of parameter estimates was assessed using 95% Bayesian credibility intervals. The results of the Bayesian analysis of the path coefficients were very similar to the maximum likelihood estimates (data not shown) and will not be presented.
Results
Despite the presence of floating refugia in each tank, our artificial ponds turned up one metamorphosed frog that clearly drowned after failing to find a terrestrial refuge. It is likely that some fraction of the mortality that we attributed to predation was from metamorphs that drowned. On the other hand, we also recovered six fully metamorphosed frogs that survived to the end of the experiment. As a conservative approach to analyses, we broke our data up into two different data sets. The first dataset is based on all individuals in the study, and is referred to as the "Full" dataset; it is described throughout the rest of this paper. The second dataset excluded all individuals whose Gosner stage was > 39 at the experiment's outset; this is referred to as the "reduced" dataset. Based on rates of development measured in our study, this reduced dataset increases the chance that most individuals were in an aquatic stage throughout much of the course of the experiment. Morphological measurements could not be made for a few individuals and the size of these data sets varies slightly (see table legends for details). Results from the two sets of analyses were qualitatively nearly identical (Tables 2, 3 and 4).
Figure 1. Path diagram of the relationships between morphology, swimming speed, and fitness. Model 1 is a classic path analysis diagram with links going from morphology to performance to fitness [62]. In Model 2, links go directly from morphology to fitness; performance is omitted. This would be the case if morphology impacted fitness through means other than swimming speed, for example by acting as a caudal lure. Model 3 is the "full model" and includes links between morphology and performance as well as between morphology and fitness, as would be the case if swimming speed as well as other factors mediated the relationship between morphology and fitness.
In the full data set, the mean proportion surviving (± SE) in each tank to the first census period (5 days) was 0.77 ± 0.02 (range 0.66-0.90). By the second census (14 days), mean survival had decreased to 0.57 ± 0.02 (range 0.48-0.66). Qualitatively, selection results during the two time periods were nearly identical (data not shown), but to maximize our power to detect selection, and to simplify the presentation of results, we use viability estimates from the second census as our measure of fitness. Frequent bite marks on the tails of surviving tadpoles suggest that dragonfly larvae were a key source of mortality in our study populations. We also recovered two complete tails during our census, with elastomer tags still intact, from tadpoles that did not survive. We conclude that mortality in the selection replicates was largely due to predation by dragonfly larvae.
Variation in survival was strongly linked to ontogeny (Gosner stage) and favored tadpoles at intermediate stages of development (quadratic effect of Gosner stage: ANOVA F1,385 = 63.83, P < 0.0001), with a decline in survival probability starting, on average, around Gosner stage 37 (Figure 2, center panel). We therefore included a term for Gosner stage in selection models. For completeness, we present models that include all measured traits (Table 2), models without swimming speed (which maximizes our sample size; Table 3), and a model using the set of independent variables corresponding to the smallest AIC score (Table 4). This last model, which we consider the preferred model (Table 1), included a linear term for tail fin height and Gosner stage, and linear and quadratic terms for tail length and head length. In this model, selection favored individuals with deep tail fins (β = 0.27 ± 0.05, P < 0.0001) and short tails (β = -0.17 ± 0.07, P = 0.02) (Figure 3). We also detected quadratic components to selection on tail length and head length that were both stabilizing (tail length: γ1,1 = -0.16 ± 0.06, P = 0.01; head length: γ2,2 = -0.20 ± 0.07, P = 0.01) (Figure 3). To verify that the results were not biased by the relationship between size and Gosner stage, we regressed Gosner stage against tail and head morphology, and saved the residuals.
Table 2. Linear (β) and quadratic (γ) selection on all morphological traits measured in this study, in a dataset that included all individuals (Full, N = 172) and in a second dataset from which all individuals whose Gosner stage was > 39 at the time of release were excluded (Reduced, N = 151). Only a fragment of the table survived extraction: "Swimming speed -0.05 ± 0.08 -0.10 ± 0.11". Asterisks indicate significant (* P < 0.05) selection, as determined by a generalized linear model with a logit link function and survival (0 or 1) as the response variable. A term for Gosner stage was included in these models but, because developmental stage is not a "trait" on which one can measure selection, is not shown here. A factor for tank was also included. See text for details.
Table 3. The results of two analyses are presented: a Full dataset that included all individuals (N = 381) and a second dataset (Reduced, N = 324) from which we excluded all individuals whose Gosner stage was > 39 at the time of release. Asterisks indicate significant (* P < 0.05, *** P < 0.005) selection, as determined by a generalized linear model with a logit link function and survival (0 or 1) as the response variable. A term for Gosner stage was included in these models but, because developmental stage is not a "trait" on which one can measure selection, is not shown here. A factor for tank was also included. See text for details. Only a fragment of the table survived extraction: "Reduced; Head Length -0.007 ± 0.10, -0.20 ± 0.08*; Tail Length -0.11 ± 0.08, -0.10 ± 0.08; Tail Fin Height 0.20 ± 0.07***".
Table 4. Two identical analyses are presented: the first included all individuals (Full, N = 381), the second is from a censored dataset (Reduced, N = 325) from which all individuals whose Gosner stage was > 39 at the time of release were excluded. Asterisks indicate significant (* P < 0.05, *** P < 0.005) selection, as determined by a generalized linear model with a logit link function and survival (0 or 1) as the response variable. A term for Gosner stage was included in these models but, because developmental stage is not a "trait" on which one can measure selection, is not shown here. A factor for tank was also included. See text for details.
Patterns of selection based on residual trait values were qualitatively similar to those using raw values (e.g., selection for deep residual tail fins [β = 0.22 ± 0.04, P < 0.0001] and short residual tail lengths [β = -0.10 ± 0.04, P = 0.03]).
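The selection models described above are, in effect, logistic regressions of survival on standardized traits with linear and quadratic terms, plus covariates for Gosner stage and tank. A minimal sketch of such a fit is given below; the file name, column names, and use of statsmodels are illustrative assumptions rather than the exact analysis code used here.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df is assumed to hold one row per tadpole with 0/1 survival, trait measurements,
# Gosner stage, and the tank (mesocosm) each individual was released into.
df = pd.read_csv("tadpole_selection.csv")      # hypothetical file name

for trait in ["tail_length", "tail_fin_height", "head_length"]:
    df[trait] = (df[trait] - df[trait].mean()) / df[trait].std()   # standardize traits

model = smf.glm(
    "survival ~ tail_length + I(tail_length**2) + tail_fin_height"
    " + head_length + I(head_length**2) + gosner_stage + C(tank)",
    data=df,
    family=sm.families.Binomial(),             # logit link is the Binomial default
).fit()
print(model.summary())
```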
Swimming speed was positively correlated with developmental stage (r 2 = 0.14, df = 186, P < 0.0001) and tail length (r 2 = 0.12; df = 186, P < 0.0001), and there was a weak quadratic relationship between swimming speed and tail fin height (individuals of intermediate tail fin height swam fastest: r 2 = 0.07, df = 186, P = 0.051). However, we did not detect any selection on swimming speed in our experiment. In a model that included linear terms for tail length and tail fin height, the selection gradient for swimming speed was weakly negative and non-significant (β = -0.03 ± 0.07, P = 0.65). Even when we considered selection on swimming speed alone (i.e., the selection differential for swimming speed) we detected no variation in survival that was related to swimming performance (s = -0.07 ± 0.06, P = 0.23).
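In the standard Lande–Arnold framework, the selection differential reported above is simply the covariance between relative fitness and the standardized trait value. The toy calculation below, using made-up numbers, illustrates that definition.

```python
import numpy as np

# Hypothetical standardized swimming speeds and 0/1 survival outcomes.
speed    = np.array([-1.2, -0.4, 0.1, 0.6, 1.3, -0.8, 0.9, -0.5])
survived = np.array([0, 1, 1, 0, 1, 0, 1, 1])

rel_fitness = survived / survived.mean()        # relative fitness w
s = np.cov(rel_fitness, speed, ddof=0)[0, 1]    # selection differential s = cov(w, z)
print(f"selection differential s = {s:.3f}")
```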
The results of the path analyses parallel the multiple regression analyses. The best-fit model was Model 2 (Morphology → Fitness; AIC = 52.41; DIC = 1056.30; Figure 1; Table 5). Over 99% of the relative likelihood was captured by this model, and the relative odds of the most strongly supported model being better than the second-best model were 3159:1 (Table 5). In contrast, Model 1 (Morphology → Performance → Fitness; Figure 1) was significantly different from the data (χ2 = 46.88; P < 0.001). We thus conclude that Model 2 is strongly supported relative to alternatives.
In our second path analytic approach, we iteratively reduced causal paths by removing the most poorly supported paths after each run until we were left with a model in which all causal paths were significant. We started with the Full Model (Model 3; Figure 1) because this model included all theoretically interesting causal paths. The most fully reduced model received the strongest support (AIC = 61.38; Table 6). In this best-fit model, the only significant causal paths were between tail fin height and fitness (β = 0.24; P < 0.001) and tail length and fitness (β = -0.32; P < 0.001). However, the best-fit model was not a robust improvement over related models. For example, the difference in AIC between the best-fit model and the 4th best model was only 1.00, and for the 5th best model, 2.19 (Table 6). In addition, the relative odds of models 2-4 ranged from 1.35-1.65:1, and the relative odds of model 5 were 3:1 (Table 6). We conclude that the first five models are not easily distinguished; we therefore show the results of Model 5 in our path analysis diagram because it is the fullest model receiving statistical support (Figure 4). Relative to the best-fit model, the 5th best model includes the causal paths head height → fitness, head length → fitness, tail muscle height → maximum swim speed, and maximum swim speed → fitness (Figure 4). None of these paths were significant, although there was a trend for head length → fitness (β = -0.16; P = 0.06). Note that the link between swimming performance and fitness is weak and not significant (β = 0.04; P > 0.05).
Discussion
One of the most common hypotheses regarding the evolution of complex life cycles is that alternative morphological strategies are employed to exploit different resources throughout ontogeny. The transition between life stages, however, can be a vulnerable period in which individuals suffer higher rates of mortality. We have presented empirical evidence that tadpoles of the wood frog, Rana sylvatica, when facing predation by dragonfly larvae, experience a higher probability of mortality as they approach metamorphic climax. That mortality probabilities increase during metamorphosis is not unexpected, as a tadpole with emergent hind and forelimbs is well adapted for neither swimming nor jumping [7,20,21].
Figure 3. In the preferred model (Table 3) chosen based on AIC scores (Table 1), natural selection acted primarily on tail length, tail height and head. Selection on tail length and head length both were stabilizing around shorter values. Selection on tail height was directional and positive. See the text and Table 3 for statistical details. The dark line represents the best fit cubic spline for each trait, and light lines indicate the 95% confidence limits.
Table 5. Chi-square values (χ2), degrees of freedom (df), and the associated P value report the significance of the model. AIC = Akaike Information Criterion; Δi = difference in AIC scores between the best model and subsequent model; wi = normalized relative likelihood of the model given the data; Evidence Ratio = the relative odds that a model is the best given the data.
Table 6. Chi-square values (χ2), degrees of freedom (df), and the associated P value report the significance of the model. Note that model 9 is saturated, and thus the fit of the model to the data could not be tested using the chi-square statistic. In models that are not significant, the data are a good fit to the model. AIC = Akaike Information Criterion; Δi = difference in AIC scores between the best model and subsequent model; wi = normalized relative likelihood of the model given the data; Evidence Ratio = the relative odds that a model is the best given the data.
Figure 4. Path diagram of the relationships between morphology, swimming speed, and fitness. The results of model 5 (Table 5) are illustrated here. Double-headed arrows represent covariances (range: 0.551-0.854), and all of them are significant. Values near single-headed arrows are maximum likelihood parameter estimates of partial regression coefficients (direct effects). Arrows lacking a number represent causal paths set to zero (Table 5). Arrow thickness is proportional to the strength of relationship. Black arrows represent significant parameter estimates, and grey arrows represent relationships that are not significant. The P-value of the dark grey arrow (head length → fitness) is 0.06. Note that the causal paths head length → fitness and head length → maximum swim speed are significant in model 9, which had the best AIC score (Table 6).
For instance, Arnold and Wassersug [8]
showed across a large geographic range (Mexico to Washington state) that garter snakes, Thamnophis spp., were more likely to have consumed anuran metamorphs (tree frogs and toads) than either tadpoles or adults. They concluded that transforming anurans were highly susceptible to snake predation as a consequence of "locomotor ineptitude." Our data further suggest that selection acts strongly on morphological traits, favoring tadpoles with short tails and deep tail fins, but that this selection acts largely independently of swimming performance. This latter result is surprising given that tail shape influences swimming performance [55]. Indeed, in our data swimming performance was correlated with both tail length and tail fin depth, and larger values of both tail elements produced greater swimming speeds, consistent with patterns demonstrated elsewhere [56,57]. Our analyses may have suffered from reduced power given that we could only measure swimming speed for half of our study animals. However, even when we removed all other terms from the model and measured selection differentials on swimming speed alone, the results were not significant. Moreover, path analyses revealed that the effects of morphology (tail length and tail fin height) were largely direct, acting to enhance survival probability per se, rather than serving as a functional link to swimming performance. We interpret this result as consistent with the hypothesis that short tails and deep tail fins are adaptive because they attract predatory strikes and increase the probability that a predator will strike tail tissue rather than sites on the head or body (i.e., "the caudal lure hypothesis"; [25,27,30]).
Tadpoles of many frog species exhibit developmental plasticity in response to chemical cues from potential predators, whereby they develop a relatively deep tail fin and a small body (e.g., [22,[58][59][60][61]). In particular, enlarged tail fins lead to enhanced survival in the presence of larval odonates (summarized in [58]). There is reason to believe, however, that differences in tail shape do not influence swimming performance strongly enough to have a large impact on survival in the presence of odonate larvae. This result is unexpected at first blush, given the high prevalence of causal relationships between morphology and performance in other animal systems [62][63][64]. Van Buskirk and McCollum [24] used experimental manipulation of tail fin morphology, trimming tissue to reduce both the total length and depth of the tail fin, to investigate the direct effects of changes in tail morphology on swimming performance. Their study revealed that changes in swimming performance were not apparent until one third of the tail was surgically removed, leading them to conclude that reduced susceptibility to predation must have been due to something other than enhanced swimming performance. Similarly, Wilbur and Semlitch [65] showed that damaged tails of R. utricularia incurred little survivorship cost in the presence of predatory newts (Notophthalmus viridescens). On the other hand, Van Buskirk et al. [28] showed that tadpoles with predator-induced morphologies suffered fewer lethal strikes to the body, suggesting that enlarged tail fins may enhance survival via a "caudal lure" effect.
The approach adopted in this study was to quantify relative survival and selection across ontogeny. One challenge faced by such an approach is that changes in size and shape are confounded throughout the development of the tadpole. This is the phenomenon summarized by Gosner stages. In addition, we were only able to quantify swimming performance and morphometric variables at the start of the study. Depredated tadpoles, unfortunately, cannot be measured. Our analyses thus assume that fundamental elements of size and shape were captured in our initial measures, and that the quantitative signal is maintained to some degree throughout ontogeny. If this were not the case, it is unlikely that we would have obtained sensible results.
Though the number of studies of natural selection has grown rapidly in recent decades [66,67], there are still fundamental gaps in our understanding of the selective process. This is, in part, owing to the fact that selection studies are rarely replicated either temporally or spatially [41] and when studies are replicated, selection estimates tend to be highly variable among replicates [68]. Our study provides a rare example of repeatable selection, as replicate estimates of selection were highly congruent among mesocosms, suggesting that the changes that characterize metamorphosis are subject to strong and consistent patterns of selection among individuals.
Conclusions
Our study demonstrates an increase in mortality risk as tadpoles began to metamorphose. Owing to the nature of our experimental design, which focused on tadpole mortality, our data did not examine the effects of the transition from tadpole to froglet on survivorship (see [8]). As metamorphosis proceeds and the tail fin is resorbed, we expect that froglets would become better at hopping and thus less susceptible to predation. We suggest, as have others [19], that selection should thus favor individuals that minimize the transition time during metamorphic climax. This does not necessarily mean that selection should favor the most rapid possible development. Indeed, faster overall development often results in small adult body sizes, a condition that can have serious fitness consequences for adult anurans [69,70]. Rather, the optimal strategy should be to metamorphose at a rate that maximizes the balance between the probability of surviving metamorphosis and later fitness costs. Future studies should aim to measure selection on the separate components of developmental timing to improve our understanding of the targets of selection, including the costs and benefits of pursuing alternative metamorphic strategies. | 7,869.4 | 2011-12-07T00:00:00.000 | [
"Biology"
] |
Large spin Hall angle in vanadium film
We report a large spin Hall angle observed in vanadium films sputter-grown at room temperature, which have small grain size and consist of a mixture of body centered tetragonal (bct) and body centered cubic (bcc) structures. The spin Hall angle is as large as θ V = −0.071 ± 0.003, comparable to that of platinum, θ Pt = 0.076 ± 0.007, and is much larger than that of bcc V film grown at 400 °C, θ V_bcc = −0.012 ± 0.002. Similar to β-tantalum and β-tungsten, the sputter-grown V films also have a high resistivity of more than 200 μΩ∙cm. Surprisingly, the spin diffusion length is still long at 16.3 nm. This finding not only indicates that specific crystalline structure can lead to a large spin Hall effect but also suggests 3d light metals should not be ruled out in the search for materials with large spin Hall angle.
In the spin Hall effect (SHE), a charge current J_C flowing in a nonmagnetic metal (NM) with spin-orbit coupling generates a transverse pure spin current J_S ∝ θ_SH^0 (σ × J_C), where the material-specific spin Hall angle θ_SH^0 characterizes the spin current conversion efficiency from the charge current J_C, and σ is the spin polarization vector of the pure spin current. One common method to quantify θ_SH^0 in NMs is to employ NM/ferromagnet (FM) bilayers and to measure the current-driven spin-orbit torques on the FM 5,6. In this letter, we use a phenomenological parameter θ_SH to represent the effective spin Hall angle which is extracted from the measured spin-orbit torques in NM/FM bilayers.
To date, most studies have focused on the 4d and 5d transition metals, since the spin-orbit coupling strength of individual atoms scales as Z^4 7,8, where Z is the atomic number. Large spin Hall angles have been observed in heavy metals such as Pt 9,10, β-Ta 11, β-W 12,13, Hf 14,15, etc. Considerable efforts have also been focused on enhancing the conversion efficiency by introducing external scattering mechanisms in the heavy metals, which has led to the observation of giant spin Hall angles in CuBi alloys 16, AuW 17, CuIr 18, CuPd 19, etc. Due to their relatively low Z, 3d light transition metals are often neglected in the search for efficient spin Hall materials. However, very recently, Du et al. observed significant spin pumping-driven inverse SHE (ISHE) voltages in YIG/Cr bilayers, and obtained a spin Hall angle as large as −0.051 ± 0.005 20. Qu et al. have also demonstrated a sizeable ISHE in Cr by using a thermal spin injection method 21. In this letter, the spin-orbit torques (SOTs) in V films have been characterized by using an optical spin torque magnetometer based on the polar magneto-optical Kerr effect (MOKE) 6,22. A large spin Hall angle of −0.071 ± 0.003 has been found in V/Co40Fe40B20 bilayers. For comparison, the spin Hall angles found in Ta/Co40Fe40B20 and Pt/Co40Fe40B20 by using the same MOKE setup are −0.139 ± 0.003 and 0.076 ± 0.007, respectively. The large spin Hall angle appears to correlate with the structure of the V layer, which consists of body centered tetragonal (bct) and body centered cubic (bcc) phases. Unlike β-Ta and β-W films, these room-temperature sputter-grown V films still have a long spin diffusion length of 16.3 nm. Vanadium films grown at high temperature exhibit dominant bcc structure and a much smaller spin Hall angle of θ_V_bcc = −0.012 ± 0.002, which is comparable to the reported value of θ_V = −0.010 ± 0.001 20.
Samples A–E were sputter-grown at room temperature with the structure || V(x)/CoFeB(2)/SiO2(5), with x = 2, 5, 10, 30, 50 nm ("||" denotes the substrate end, and the values in parentheses represent the thicknesses in nm). The deposition rates and sputtering power were 0.067 nm/s and 18 W for CoFeB and 0.070 nm/s and 24 W for V, respectively. The pressure was maintained at 3.0 mTorr. One control sample, F: || V(30)/CoFeB(2)/SiO2(5), was fabricated at 400 °C with a lower base pressure of 8 × 10^−8 Torr. Figure 1(a) shows X-ray diffraction (XRD) patterns of samples D and F, which have the same 30 nm V thickness, but were grown at room temperature and 400 °C, respectively. Sample D shows a broad and asymmetric diffraction peak with the center located at 40.3°, whereas the main diffraction peak of sample F is at 42.1°. Figure 1(b) shows the scanning transmission electron microscopy (STEM) cross-section view of sample D. Figure 1(c) and (d) show the transmission electron microscopy (TEM) and electron diffraction (ED) patterns of samples D and F, respectively. In sample D, the average grain size is about 5 nm and the interlayer spacing varies from 2.20 Å to 2.31 Å at different locations. Sample F has a larger grain size above 10 nm, and the interlayer spacing is dominantly 2.16 Å.
To better characterize the V structure in our samples, we performed fast Fourier transform (FFT) analysis based on high resolution transmission electron microscopy (HRTEM) images in Fig. 2(a) and (b). The structure of the grains surrounded by white solid curves in Fig. 2(a) can be best indexed by a [111] zone axis of bct V 23,24, whereas the grains surrounded by dashed curves can be best described by bcc V. These analyses suggest that the V films sputter-grown at room temperature are a mixture of bct and bcc structures, which may also explain the broad XRD peak in Fig. 1(a). This is similar to β-Ta films, which have a tetragonal nanocrystalline phase in an amorphous matrix 25, while α-Ta films have a bcc structure. In sharp contrast, as shown in Fig. 2(b), sample F grown at 400 °C shows a dominant bcc V structure in the FFT analyses. As shown in Fig. 2(c), the resistivities of samples A–E, all grown at room temperature, vary from 290 μΩ·cm to 220 μΩ·cm as the V thickness changes from 2 to 50 nm. On the other hand, sample F, grown at 400 °C, shows a much lower resistivity.
Polar MOKE measurements of current-driven spin-orbit torque in V(x)/CoFeB(2)/SiO 2 (5) samples were performed using the experiment setup shown in Fig. 3(a). The bilayer was patterned into a 50 μm × 50 μm strip. An AC current was sent through the sample. The current in the V layer generated an out-of-plane Oersted field and an effective field due to spin-orbit torque, which cause a change of the magnetization Δm z in the CoFeB layer. The change of the magnetization was detected by measuring the polarization change in a laser beam with 2 µm diameter. The MOKE voltage signal consists of SOT ( Fig. 3(b)) and out-of-plane Oersted field terms (Fig. 3(c)) which can be separately extracted based on the symmetry with respect to the external magnetic field 6,22 .
In order to extract the spin diffusion length λ_sf of the V layer, we analyzed the dependence of the Gilbert damping coefficient α of the CoFeB layer on the V layer thickness using a spin pumping experiment [26][27][28]. The inhomogeneous broadening (ΔH_0) and the effective magnetization (μ_0 M_eff) are shown in Fig. 4(a) and (b), respectively. ΔH_0 (defined as the zero-frequency intercept of the FMR linewidth) indicates the V film quality and inhomogeneity; the five V films exhibit some film-quality fluctuations. The effective magnetization is related to the perpendicular anisotropy field, which may vary for different interfacial conditions, through μ_0 M_eff = μ_0 M_S − 2K_⊥/(M_S d_CoFeB), where μ_0 is the permeability of vacuum, M_S is the saturation magnetization, and K_⊥ is the surface anisotropy. The V thickness dependence of the damping constant is shown in Fig. 4(c): the damping constant increases with the V layer thickness and saturates above 30 nm of V. The increase of the damping constant due to the V layer, Δα = α − α_0, can be described by a spin pumping model 29,30. The spin Hall angle θ_m can be extracted from the damping-like spin Hall torque measured by the MOKE magnetometer, where h_SOT is the out-of-plane effective field, I_C is the electrical current flowing through samples of width w = 50 μm, and d_CoFeB is 2 nm in the MOKE measurement. Due to the complexity of the current distribution in the bilayer structure and the large resistivity of the 2 nm CoFeB layer, here we make a simplifying assumption that all the charge current I_C flows through the V, which underestimates the spin Hall angle but specifies a lower bound. The saturation magnetization, μ_0 M_S = 1.60 T, was extracted from another 40 nm CoFeB sample through an FMR measurement. As shown in Fig. 4(d), the spin Hall angle increases with the V layer thickness and approaches saturation as the V thickness goes above the V spin diffusion length. In order to account for spin transparency and interface coupling, we use the modified spin transport model 30 to extract the spin Hall angle θ_SH(∞) from the V thickness dependence of the measured spin Hall angle θ_m(d_V) 31. The extracted spin Hall angle is θ_SH(∞) = −0.071 ± 0.003, with the fitting parameter R = −0.908 ± 0.017. On the other hand, the control sample F, with its V layer grown at 400 °C, has a measured spin Hall angle of θ_m(d_V = 30 nm) = −0.012 ± 0.002, which is comparable to the reported value for a V film 20. The non-zero R indicates a complex interfacial condition at the V/CoFeB interface, which could be caused by spin backflow (SBF) and/or enhanced spin scattering [32][33][34][35].
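Both λ_sf and θ_SH(∞) are obtained from fits to the V-thickness dependence of measured quantities. The sketch below illustrates such a fit for the spin Hall angle using a simple drift-diffusion form θ_m(d) = θ_∞[1 − sech(d/λ)], without the interfacial parameter R; the functional form and the data points are illustrative assumptions and not the exact model of Refs. 30 and 31.

```python
import numpy as np
from scipy.optimize import curve_fit

def theta_model(d, theta_inf, lam):
    """Drift-diffusion-like thickness dependence of the measured spin Hall angle."""
    return theta_inf * (1.0 - 1.0 / np.cosh(d / lam))

# Hypothetical measured spin Hall angles for V thicknesses of 2-50 nm.
d_v     = np.array([2.0, 5.0, 10.0, 30.0, 50.0])            # nm
theta_m = np.array([-0.012, -0.028, -0.047, -0.066, -0.069])

popt, pcov = curve_fit(theta_model, d_v, theta_m, p0=[-0.07, 10.0])
perr = np.sqrt(np.diag(pcov))
print(f"theta_SH(inf) = {popt[0]:.3f} +/- {perr[0]:.3f}")
print(f"lambda        = {popt[1]:.1f} +/- {perr[1]:.1f} nm")
```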
Discussion
It has been found that the spin transparency at the NM/FM interface can play a critical role in determining the spin torque efficiency [32][33][34][35]. The insertion of atomically thin magnetic layers at a Pt/Py interface 32, or of an ultra-thin Hf layer between Pt and CoFeB, can significantly modulate the interfacial transparency and enhance the spin injection efficiency from Pt to the FM layer 33. Due to the importance of the interfacial condition, we have analyzed the spin mixing conductance of the V/CoFeB interface. The effective spin mixing conductance G↑↓_eff is two orders of magnitude larger than the spin conductance G_V of the V layer, making the bare spin mixing conductance G↑↓ < 0. This unphysical negative value indicates that there may be other additional magnetic damping enhancement mechanisms at the V/CoFeB interface, which could lead to the overestimation of G↑↓_eff 34. Due to the complication at the V/CoFeB interface, it becomes difficult to extract the spin Hall angle of V. However, under the assumption of a completely transparent interface, it is still reasonable to quantify a lower bound of the effective spin Hall angle as θ_V = −0.069 ± 0.002. Because of the transparent-interface assumption, the fitted spin diffusion length λ = 5.2 ± 0.3 nm does not match λ_sf = 16.3 ± 0.7 nm, which was extracted from the spin pumping experiment by taking into account a non-transparent interface condition. Previous research has related large spin Hall angles to specific crystal structures 11,12,36. For example, a giant spin Hall angle θ_SH = −0.12 ~ −0.15 has been reported in β-Ta 11, which has a stretched tetragonal crystal structure with an enlarged lattice constant and a higher resistivity of 190 µΩ·cm compared with α-Ta. Similar behavior has also been observed in β-W 12. As a group 5 element, V has a similar Fermi surface to those of Nb and Ta 37. We therefore speculate that the mechanism for the large spin Hall angle in V films is also the presence of a tetragonal phase, similar to β-Ta 25. However, unlike β-Ta and β-W, these sputter-grown V films still have a long spin diffusion length.
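The sign problem noted above follows from a common spin-backflow correction, in which the bare and effective spin mixing conductances are related through the spin conductance of the normal metal, roughly 1/G↑↓ ≈ 1/G↑↓_eff − 1/G_NM with G_NM ≈ σ/λ_sf. The numbers below are hypothetical and merely show why G↑↓_eff being two orders of magnitude larger than G_V drives the extracted bare conductance negative; the exact correction used in the cited literature may differ by geometric factors.

```python
# Illustrative backflow correction (all numbers hypothetical).
sigma_v   = 1.0 / 250e-8        # conductivity (S/m) for ~250 uOhm*cm resistivity
lambda_sf = 16.3e-9             # spin diffusion length of V, in m
g_v       = sigma_v / lambda_sf # spin conductance of the V layer (S/m^2)

g_eff = 100 * g_v               # "two orders of magnitude larger", as in the text

g_bare = 1.0 / (1.0 / g_eff - 1.0 / g_v)
print(f"G_V    = {g_v:.3e} S/m^2")
print(f"G_eff  = {g_eff:.3e} S/m^2")
print(f"G_bare = {g_bare:.3e} S/m^2  (negative -> unphysical)")
```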
In summary, a large spin Hall angle is observed in the 3d light transition metal V, deposited at room temperature and characterized by small grain size and enlarged interlayer spacing with mixed bct and bcc states. The spin Hall angle is at least θ_V = −0.071 ± 0.003, comparable to that of Pt, and is much larger than that in bcc V films grown at 400 °C. Similar to β-Ta and β-W, the V films with mixed bct and bcc phases also show high resistivity. However, the spin diffusion length is still as long as 16.3 nm. The surprisingly large spin Hall angle in V will not only be useful for potential applications in spin-orbit-torque-based magnetization switching, but will also have ramifications for understanding the origin of the spin Hall angle. In particular, this research suggests that light metals should not be ruled out in the search for efficient spin Hall materials with large spin Hall angle. | 2,989 | 2016-03-16T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Alpha/Beta Interferon Receptor Signaling Amplifies Early Proinflammatory Cytokine Production in the Lung during Respiratory Syncytial Virus Infection
ABSTRACT Type I interferons (IFNs) are produced early upon virus infection and signal through the alpha/beta interferon (IFN-α/β) receptor (IFNAR) to induce genes that encode proteins important for limiting viral replication and directing immune responses. To investigate the extent to which type I IFNs play a role in the local regulation of inflammation in the airways, we examined their importance in early lung responses to infection with respiratory syncytial virus (RSV). IFNAR1-deficient (IFNAR1−/−) mice displayed increased lung viral load and weight loss during RSV infection. As expected, expression of IFN-inducible genes was markedly reduced in the lungs of IFNAR1−/− mice. Surprisingly, we found that the levels of proinflammatory cytokines and chemokines in the lungs of RSV-infected mice were also greatly reduced in the absence of IFNAR signaling. Furthermore, low levels of proinflammatory cytokines were also detected in the lungs of IFNAR1−/− mice challenged with noninfectious innate immune stimuli such as selected Toll-like receptor (TLR) agonists. Finally, recombinant IFN-α was sufficient to potentiate the production of inflammatory mediators in the lungs of wild-type mice challenged with innate immune stimuli. Thus, in addition to its well-known role in antiviral resistance, type I IFN receptor signaling acts as a central driver of early proinflammatory responses in the lung. Inhibiting the effects of type I IFNs may therefore be useful in dampening inflammation in lung diseases characterized by enhanced inflammatory cytokine production. IMPORTANCE The initial response to viral infection is characterized by the production of interferons (IFNs). One group of IFNs, the type I IFNs, are produced early upon virus infection and signal through the IFN-α/β receptor (IFNAR) to induce proteins important for limiting viral replication and directing immune responses. Here we examined the importance of type I IFNs in early responses to respiratory syncytial virus (RSV). Our data suggest that type I IFN production and IFNAR receptor signaling not only induce an antiviral state but also serve to amplify proinflammatory responses in the respiratory tract. We also confirm this conclusion in another model of acute inflammation induced by noninfectious stimuli. Our findings are of relevance to human disease, as RSV is a major cause of infant bronchiolitis and polymorphisms in the IFN system are known to impact disease severity.
Infections at mucosal surfaces need to be managed carefully by the host in order to avoid damage to barrier functions. The pathogen needs to be eradicated rapidly, but inflammation must be tightly regulated to prevent detrimental effects on organ function. Nowhere is this more evident than in the lung, where any excess cell infiltration or damage will markedly affect gas exchange. The lung is a major site of infection by viruses, and the adverse effects of dysregulated lung inflammation have a very significant impact on human health.
The initial response to viral infection in the lung and elsewhere is characterized by the production of interferons (IFNs). There are 3 types of IFNs: type I IFNs (including alpha IFN [IFN-α] and IFN-β), type II IFN (IFN-γ), and the recently discovered type III IFNs (IFN-λ). Irrespective of type, IFNs induce cell-intrinsic antiviral responses, activate natural killer (NK) cells, macrophages, and dendritic cells (DCs), and regulate innate and adaptive immune responses (1)(2)(3). IFN-γ is mainly produced by NK and T cells, while the synthesis of type I and type III IFNs, as well as other cytokines such as interleukin-6 (IL-6) and tumor necrosis factor alpha (TNF-α), is induced in immune and nonimmune cell types upon direct recognition of viral molecules by pattern recognition receptors (PRRs) such as Toll-like receptors (TLRs) and RIG-I-like receptors (RLRs) (1)(2)(3)(4). The effect of IFN-λ is restricted to epithelial cells at mucosal surfaces, which express the relevant receptor (1,2). In contrast, the receptor for type I IFNs, called IFNAR, is expressed ubiquitously by all cells. IFN-α/β is produced in the lung following infection with many viruses such as Newcastle disease virus, influenza virus, respiratory syncytial virus (RSV), and human metapneumovirus (2,(5)(6)(7)(8)(9).
RSV is the major cause of infant bronchiolitis (10). While RSV disease manifests as a simple common cold in the majority of cases, between 2 and 3% of children develop severe bronchiolitis. The variation in disease severity seems to be mostly due to host rather than viral factors and has recently been associated with polymorphisms in several innate immunity genes, in particular many that control the IFN system (11)(12)(13)(14). The IFN system may therefore be a key regulator of RSV-induced lung inflammation.
To test the impact of type I IFNs on viral infections, IFNAR1-deficient (IFNAR1−/−) mice, which lack all signaling in response to IFN-α/β (15), have been widely used. For infections such as with reovirus or Chikungunya virus, loss of IFNAR signaling is detrimental, leading to overwhelming infection and death (16,17). For some viruses (including influenza virus and RSV), the effect of IFNAR1 deficiency is not as severe and does not impact on survival from infection (18,19).
Whether type I IFNs are involved only in inducing an antiviral state in the lung or whether they have a more general effect in regulation of lung inflammation has not been fully elucidated. Here, we address this question by comparing inflammation in the lungs of wild-type and IFNAR1-deficient mice in response to challenge with selected TLR agonists or RSV infection. Surprisingly, in all cases IFNAR deficiency was associated with a marked decrease in the production of proinflammatory cytokines and chemokines in the lung. Furthermore, type I IFN administration to the lung potentiated inflammation in mice. We suggest that type I IFN production and IFNAR receptor signaling not only induce an antiviral state but also serve to amplify proinflammatory responses in the respiratory tract.
Mice, virus stocks, TLR agonists, cytokines, and infection.
Six- to 10-week-old C57BL/6 (Harlan or Charles River, United Kingdom) or IFNAR1−/− mice on a C57BL/6 background (obtained from C. Reis e Sousa, London Research Institute, London, United Kingdom) were maintained under pathogen-free conditions under UK Home Office guidelines.
Plaque-purified human RSV (originally strain A2 from the ATCC, United States) was grown in HEp-2 cells (20). Age- and sex-matched mice were lightly anesthetized and infected intranasally (i.n.) with 2 × 10^6 focus-forming units (FFU) of RSV in 100 μl. For lung challenge with innate stimuli, CpG (1.25 μg/g body weight), poly(I·C) HMW (high molecular weight; 3.5 μg/g), and lipopolysaccharide (LPS; 500 or 50 ng/g) in 100 μl were administered i.n. (all from Invivogen). Recombinant IFN-α11 (Miltenyi Biotech) was administered i.n. at 500 ng/mouse.
Cell collection and preparation. Bronchoalveolar lavage (BAL) was carried out by flushing 3 times with 1 ml phosphate-buffered saline (PBS) containing 0.5 mM EDTA. For determination of cellular composition in the BAL fluid, cells were transferred onto a microscope slide (Thermo Scientific) using a cytospin centrifuge and stained with hematoxylin and eosin (H&E; Reagena).
Chemokine and cytokine detection. Chemokines and cytokines were quantified by a 20-plex Luminex kit according to the manufacturer's instructions (Life Technologies), and data were acquired with a Bio-plex 200 system (Bio-Rad Laboratories, United Kingdom). The concentration of cytokines in each sample was determined according to the standard curve using the Bio-plex 6 software (Bio-Rad Laboratories). IFN-λ2/3 (R&D), CXCL1 (R&D), or IFN-α (21) levels in the BAL fluid were measured by enzyme-linked immunosorbent assay (ELISA). Data were acquired on a SpectraMax Plus plate reader (Molecular Devices) and analyzed using SoftMax software (version 5.2).
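Concentrations in multiplex and ELISA assays are read off a fitted standard curve, commonly a four-parameter logistic (4PL) function. The sketch below shows a generic 4PL fit and its inversion with made-up calibration points; it is not the Bio-plex or SoftMax routine itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic (4PL) standard curve: signal as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** (-hill))

# Hypothetical calibration standards (pg/ml) and their measured signals.
std_conc   = np.array([4, 16, 64, 256, 1024, 4096], dtype=float)
std_signal = np.array([55, 180, 640, 1900, 4200, 6100], dtype=float)

popt, _ = curve_fit(four_pl, std_conc, std_signal, p0=[50, 7000, 300, 1.0], maxfev=10000)

def signal_to_conc(signal, bottom, top, ec50, hill):
    """Invert the fitted 4PL curve to recover a sample concentration from its signal."""
    return ec50 * ((top - bottom) / (signal - bottom) - 1.0) ** (-1.0 / hill)

print(f"sample with signal 1500 -> {signal_to_conc(1500, *popt):.1f} pg/ml")
```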
Statistical analysis. Results are presented as means ± standard errors of the means (SEM). The significance of results between the groups was analyzed by a two-tailed, nonparametric, unpaired Mann-Whitney test (Prism software; GraphPad Software Inc.) and is indicated in the figures as follows: *, P < 0.05; **, P < 0.01; ***, P < 0.001. P values of <0.05 were considered significant.
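For reference, the group comparison described above corresponds to a two-sided Mann-Whitney test, which can be run as sketched below; the cytokine values are made up.

```python
from scipy.stats import mannwhitneyu

# Hypothetical BAL IL-6 levels (pg/ml) for wild-type and IFNAR1-/- mice.
wt_il6    = [850, 920, 1100, 780, 990, 870]
ifnar_il6 = [120, 95, 240, 180, 60, 150]

stat, p_value = mannwhitneyu(wt_il6, ifnar_il6, alternative="two-sided")
print(f"U = {stat:.1f}, P = {p_value:.4f}")   # P < 0.05 treated as significant
```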
RESULTS
Type I, II, and III interferon production is abrogated in IFNAR1−/− mice infected with RSV. In order to investigate the role of IFNAR signaling in lung inflammation induced by infectious challenge, wild-type (wt; C57BL/6) and IFNAR1-deficient (IFNAR1−/−) mice were infected i.n. with RSV. We first assessed the effect of IFNAR deficiency on viral control. IFNAR1−/− mice showed a significantly higher viral load in the lung, as measured by the copy number of viral L gene RNA, compared to wt mice from 8 h postinfection (p.i.) until day 14 p.i. (the latest time point studied; Fig. 1A). A delay in viral clearance was also apparent, as 11 of 12 IFNAR1−/− mice had detectable L gene in the lungs at day 14 p.i. compared to only 1 of 12 mice in the wt group (Fig. 1A). The increase in lung viral load was accompanied by greater weight loss, a measure of infection severity (25): IFNAR1−/− mice started to lose weight at day 5, lost significantly more weight than infected wt mice, and had a slower recovery (Fig. 1B).
We then assessed the levels of IFNs in the lung early after infection by mRNA analysis and protein detection. IFN-α in the BAL fluid of wt mice was detected from 8 h p.i., with peak production at 12 to 18 h p.i. In contrast, no IFN-α was detectable in IFNAR1−/− mice after RSV infection or in wt mice inoculated with UV-inactivated RSV (Fig. 2A and data not shown). Both wt and IFNAR1−/− mice displayed similar levels of IFN-β mRNA at 4 h p.i., but this increased 100-fold at later time points in wt but not IFNAR1-deficient mice (Fig. 2B). This is consistent with the known IFNAR-dependent positive-feedback loop for type I IFN production (26). IFN-γ (mRNA or protein) was not detected in IFNAR1−/− mice at any time point (Fig. 2C), in contrast to wt mice, which showed IFN-γ levels peaking at 12 and 18 h p.i. for mRNA and protein, respectively. Furthermore, although expression of IFN-λ mRNA was induced in IFNAR1−/− mice, it was manifestly lower than in wt controls and did not result in detectable protein in BAL fluid (Fig. 2D). These data confirm and extend a previous study showing that expression of type I IFNs is reduced in RSV-infected IFNAR1−/− mice (9).
We assessed whether the reduced levels of all IFNs in IFNAR1−/− mice impacted the expression of selected interferon-stimulated genes (ISGs). CXCL10 could not be detected at either the mRNA or the protein level in lungs of IFNAR1−/− mice but was induced in wt mice. Expression of a further ISG increased above basal levels at 4 h p.i. in both IFNAR1−/− and wt mice but continued to rise only in the wt mice until 12 h p.i. (Fig. 2F). Similar results were obtained with Mx-1 (Mx-1; Fig. 2G), Rsad2 (Viperin), Oas1a (OAS1), and Eif2ak2 (PKR) gene expression (data not shown). In sum, our data suggest that the loss of type I IFN receptor signaling results in decreased expression of all IFN types early after RSV infection and prevents appropriate induction of ISGs. This is associated with a failure to control RSV replication and clear the virus rapidly, leading to increased pathology (weight loss).
Proinflammatory cytokine responses are diminished in IFNAR1−/− mice infected with RSV. To further evaluate the inflammatory response early after RSV infection, we measured additional cytokines, including IL-6, IL-1β, and TNF-α. Very little, if any, mRNA encoding these cytokines could be detected in the lungs of RSV-infected IFNAR1−/− mice (Fig. 3A). In contrast, such mRNAs were easily detectable in wt mice as early as 4 h p.i., with a peak of expression at 8 h p.i. (Fig. 3A). Early infiltration of neutrophils did not differ between wt and IFNAR1−/− mice (Fig. 3B), so we also analyzed the expression of CXCL1 (KC), a known neutrophil attractant. CXCL1 mRNA and protein were detected at similar levels in both wt and IFNAR1−/− mice at early time points (Fig. 3C). However, later during the infection CXCL1 levels were significantly higher in wt mice (Fig. 3C).
Levels of proinflammatory cytokines and chemokines were additionally measured in BAL fluid using a multiplex approach. IL-2, IL-4, IL-10, IL-13, and IL-17A were not detected in the airways of either wt or IFNAR1−/− mice at any time point after infection (data not shown). For some cytokines (IL-6, IL-1β, TNF-α, and IL-12p40), there were measurable levels in IFNAR1−/− mice at 4 to 8 h p.i., but these were not comparable to the levels in wt mice, which were 5 to 10 times greater (Fig. 4). For other cytokines and chemokines such as IL-1α, granulocyte-macrophage colony-stimulating factor (GM-CSF), IL-5, IL-12p40, CXCL9, and CCL3, there was no or very little induction in the airways of IFNAR1−/− mice at any time point, while wt mice showed substantial levels peaking at 8 to 12 h p.i. (Fig. 4B). Overall, these results indicate that the lack of type I IFN receptor signaling results in a marked reduction in the induction of proinflammatory mediators in the lung and airways upon pulmonary viral infection.
IFNAR1-deficient mice display reduced proinflammatory cytokine responses to lung challenge with TLR agonists. To investigate whether similar results applied to noninfectious stimulation of the airways, we administered different innate immune stimuli intranasally (i.n.) to wt and IFNAR1−/− mice. Mice were sacrificed 24 h following administration of CpG, poly(I·C), or LPS, and lung cytokine expression was measured using qPCR and ELISA. In IFNAR1−/− mice, we observed significantly reduced mRNA levels of IFN-β in response to the TLR9 agonist CpG and the TLR4 agonist LPS, and of CXCL10 in response to all TLR agonists tested (Fig. 5A). The expression of proinflammatory cytokines was also quantified. Decreased induction of IL-6, IL-1β, and TNF-α mRNA was observed with the TLR3, RIG-I, and MDA-5 agonist poly(I·C) in IFNAR1−/− mice. Moreover, a significant reduction in IL-6 mRNA was seen in IFNAR1−/− mice treated with CpG compared to wt mice (Fig. 5A). Similarly, LPS-dependent induction of IL-1β mRNA was reduced in lungs of IFNAR1−/− mice (Fig. 5A). A similar pattern was detected at the protein level: IFN-α was not induced after poly(I·C) treatment, and IL-6 was not induced by any of the TLR agonists in IFNAR1−/− mice (Fig. 5B). Thus, the lack of the type I IFN receptor decreases the lung inflammatory response provoked by innate stimulation of the airways.
IFN-α potentiates proinflammatory cytokine induction. The above results suggested that type I IFNs might potentiate proinflammatory responses in the lung. To explicitly test this hypothesis, one representative type I IFN, IFN-α11, was administered intranasally to mice concomitantly with innate immune challenge. The dose of IFN-α was similar to a dose previously used to inhibit RSV infection (8), and LPS was chosen as the stimulus because it does not induce high expression of type I IFNs in the lung (data not shown). Mice that received both IFN-α and a suboptimal dose of LPS showed increased lung expression of mRNAs encoding IL-6 and TNF-α (Fig. 6A) or IFN-γ (data not shown) 24 h postchallenge compared to mice that received only LPS or IFN-α. Interestingly, intranasal administration of IFN-α alone was sufficient to induce an increase in IL-6, TNF-α, IFN-γ, and IL-1β mRNA at 12 h (Fig. 6B) and IL-6 and TNF-α mRNA in the lung at 24 h (Fig. 6A). This was not observed in IFNAR1−/− mice, indicating that the effect is dependent on signaling via IFNAR (Fig. 6) and not due, for example, to a contaminant. We conclude that type I IFNs markedly potentiate acute proinflammatory responses induced by innate immune stimulation of the airways.
DISCUSSION
Type I IFNs are produced early after viral infection as a first line of host defense. They act on all cell types via the ubiquitously expressed IFNAR to induce increased expression of more than 300 different genes whose products eventually interfere with viral replication and viral spread, as well as lead to the initiation of immune responses (3,10). Previous studies addressing the role of type I IFNs in the lung have focused mainly on adaptive immunity (8,9,19,27,28). Thus, the importance of type I IFN signaling for the early innate immune response in the lung remains elusive. In this study, we uncover a general role for IFNAR signaling in amplifying acute lung inflammatory responses to innate stimuli. Further, we provide in vivo evidence for an important role of IFNAR in innate resistance to RSV lung infection and show that signaling through IFNAR is necessary for coordinating the inflammatory response to the virus. Our data suggest that type I IFNs are pivotal contributors to lung inflammation.
We anticipated that other IFNs might compensate for the lack of type I IFN signaling during RSV infection, as previously shown for influenza virus (9,(29)(30)(31), but were surprised to find that our data did not support this hypothesis; neither IFN-λ nor IFN-γ was upregulated in the lungs of IFNAR1−/− mice during the early stages of RSV infection. Instead, expression of all IFNs and ISGs was decreased in RSV-infected IFNAR1−/− mice compared to wt controls. This suggests that type I IFNs are involved in controlling the expression of IFN-α/β, IFN-γ, IFN-λ, and ISGs during RSV infection. For IFN-α/β, this is expected because type I IFN production relies on a positive-feedback loop through the type I IFN receptor (26). In addition, previous studies have shown that type I IFNs play a critical role in the induction of IFN-γ gene expression through the activation of STAT4 (32,33) or increased signaling through other cytokine receptors such as the IFN-γ receptor by increased levels of STAT1 (19,34). Furthermore, since the IFN responses were reduced in IFNAR1−/− mice, this resulted in a diminished induction of ISGs, as has previously been shown for TLR stimulation (35,36) and for bone marrow-derived DCs (BMDCs) stimulated with RSV (37). Therefore, type I IFN production with subsequent IFNAR signaling is a key component of the entire IFN response early after RSV infection. Surprisingly, our data indicate that type I IFN production and subsequent IFNAR signaling are also a key component of the entire inflammatory response. Indeed, we found that the induction of proinflammatory cytokines (e.g., IL-6, IL-1β, and TNF-α) was abrogated in the lungs of IFNAR1−/− mice after RSV infection. A similar pattern was seen in BAL fluid for a broader array of proinflammatory cytokines and chemokines. Furthermore, IFNAR deficiency decreased the induction of proinflammatory cytokines in response to airway challenge with different innate immune stimuli, and IFN-α augmented the proinflammatory response to LPS stimulation. Also, IFN-α alone given intranasally drove a rapid and transient induction of proinflammatory cytokines in the lung. It has been shown that bone marrow-derived macrophages can produce CCL2 after stimulation with IFN-β (38) and that recombinant IFN-α (rIFN-α) potentiates the serum TNF-α response to LPS administration (39). Furthermore, the dependence on IFNAR signaling for cytokine production has previously been suggested in studies where BMDCs from IFNAR1−/− mice were found to produce less IL-12p70 after RSV exposure (37) or after treatment with select combinations of TLR agonists (36). However, it is possible that the effect of IFN-α in the lung is not to initiate de novo cytokine synthesis but to amplify that which has been initiated by other stimuli such as environmental endotoxins or airway commensals. Whichever the case, our data point to a hitherto unappreciated key role for IFNAR signaling in amplifying lung inflammation. This has been noted in systemic models of inflammation, where IFNAR1−/− mice have been shown to be more resistant to LPS-induced septic shock and to lethal immunopathology induced by systemic Candida albicans infection (3,40,41).
There are several possible mechanisms that could explain our findings. First, proinflammatory cytokines are regulated mainly by the transcription factor NF-κB. Synergy between IFNAR signaling and NF-κB pathways has been suggested (36), and a recent report revealed multiple IFN-stimulated pathways that can activate NF-κB (42). However, NF-κB is activated at 0.5 to 1.5 h after RSV inoculation independently of viral replication (43). This could explain the early induction of some cytokines such as IFN-β, TNF-α, and CXCL1, which were detected at 4 h p.i. in IFNAR1−/− mice. Another possibility is that the expression of molecules involved in virus recognition (e.g., RIG-I), signaling (e.g., MyD88), and cytokine production (e.g., MTOR) (44) is dependent on type I IFN receptor signaling. A third possibility is that the cellular source of the proinflammatory cytokines needs to be recruited into the lung and that this recruitment is dependent on IFNAR signaling. That source is unlikely to be neutrophils, as we observed comparable induction of CXCL1 and early infiltration of neutrophils into the airways of RSV-infected IFNAR1−/− and wt mice.
Proinflammatory cytokines are known to cause discomfort and reduce appetite, leading to weight loss (10,45). Oddly, we observed increased weight loss during RSV infection in IFNAR1−/− mice, even though proinflammatory cytokines were not detected. The weight loss during RSV infection coincides with the infiltration of T cells (days 5 to 7 p.i.) and is reduced if T cells are depleted (46). However, during both RSV and Sendai virus infections, virus-specific T cells in IFNAR1−/− mice are generated in numbers comparable to those seen in wt mice (references 19 and 47 and data not shown). Therefore, we suppose that the increased weight loss in the IFNAR1−/− mice is not a manifestation of the immune response but, rather, a direct consequence of increased viral load and possible associated cytopathic damage to the lung structure and epithelial barrier.
The notion that weight loss can be a direct manifestation of RSV load has been previously suggested (25). Consistent with that notion, we found that IFNAR1−/− mice were more permissive to RSV infection, both as measured by weight loss and as measured by quantitating viral RNA in the lungs. This is in contrast to results from other groups who have found no effect of IFNAR on resistance to RSV (19,31) and might be explained by the use of distinct mouse strains (19), the volumes and virus titers used for infections, virus purity (31), and/or the methods used for viral detection (qPCR versus plaque assay). In addition, differences in microbiota among mice from different animal facilities might be a contributing factor, as is increasingly appreciated (48).
In summary, our study shows that lack of signaling through the IFNAR has a negative impact on the production of proinflammatory cytokines in the lung both after exposure to different innate stimuli and during RSV infection. This reveals a dual role for type I IFNs during respiratory infections, limiting viral replication while at the same time regulating the cytokine milieu. Furthermore, it suggests that an excessive induction of type I IFNs could potentially be detrimental for the host by driving excessive lung inflammation. The demonstration that type I IFNs can amplify lung inflammation when given exogenously may have applications in the design of vaccines or therapeutics for use at mucosal surfaces; conversely, local IFN blockade might be deployed as a strategy by which to limit lung inflammation.
Figure 6. (A) C57BL/6 wt mice were intranasally challenged with a suboptimal dose of LPS (50 ng/g body weight) with or without recombinant IFN-α11 (rIFN-α11) (500 ng/mouse), and lungs were collected after 24 h. (B) C57BL/6 wt and IFNAR1−/− mice were intranasally challenged with rIFN-α (500 ng/mouse), and lungs were collected after 12 h. RNA was isolated from lungs, and gene expression levels of IL-6 and TNF-α (A) and IL-6, TNF-α, IFN-γ, and IL-1β (B) were determined by qPCR. Gene expression relative to GAPDH was calculated for IL-6 and IL-1β, and for TNF-α and IFN-γ copy numbers were determined using a plasmid standard. Data shown are pooled data from 2 individual experiments with 4 or 5 mice per group in each experiment. Error bars indicate the SEM. Significance when comparing RSV-infected wt with RSV-infected IFNAR1−/− mice: ***, P ≤ 0.001; **, P ≤ 0.01; *, P ≤ 0.05. | 5,509 | 2014-03-19T00:00:00.000 | [
"Biology",
"Medicine"
] |
Topological optimization of quantum key distribution networks
A Quantum Key Distribution (QKD) network is an infrastructure that allows the realization of the key distribution cryptographic primitive over long distances and at high rates with information-theoretic security. In this work, we consider QKD networks based on trusted repeaters from a topology viewpoint, and present a set of analytical models that can be used to optimize the spatial distribution of QKD devices and nodes in specific network configurations in order to guarantee a certain level of service to network users, at a minimum cost. We give details on new methods and original results regarding such cost minimization arguments applied to QKD networks. These results are likely to become of high importance when the deployment of QKD networks will be addressed by future quantum telecommunication operators. They will therefore have a strong impact on the design and requirements of the next generation of QKD devices.
Introduction
Quantum Key Distribution (QKD) is a technology that uses the properties of quantum mechanics to realize an important cryptographic primitive: key distribution ‡. Unlike the techniques used in traditional "classical" cryptography, for which the security relies on the conjectured computational hardness of certain mathematical problems, QKD security can be formally proven. Secret keys established via QKD are information-theoretically secure, which implies that any adversary trying to eavesdrop cannot obtain any information on the transmitted keys at any point in the future, even if she possesses extremely large computational resources.
The communication channels needed to perform QKD consist in an optical channel, on which well-controlled quantum states of light are exchanged, and a classical channel that is used for signaling during the quantum exchanges and for the classical postprocessing phase, namely key reconciliation. Their combination forms a communication link, over which quantum key distribution allows two distant users to exchange a specific type of data, in particular secret keys. In this sense, QKD is by nature a telecommunication technology, and so QKD links can be combined with appropriately designed nodes to form QKD networks.
The performance of QKD links has rapidly improved in the last years. Starting from pioneering experiments in the 90s [1], important steps have been taken to bring QKD from the laboratory to the open field. Thanks to the continuous efforts invested in developing better QKD protocols and hardware, in parallel to the advancement of security proofs (see [2,3,4] for reviews), the performance that can now be achieved, in terms of attainable communication distance, secret key generation rate and reliability, positions QKD as the first quantum information processing technology reaching a level of maturity sufficient to target deployment over real-world networks. Indeed, off-the-shelf QKD systems are now commercially available [5], and the first QKD networks have recently been implemented [6,7,8].
Up till now, research in QKD has focused on building and optimizing individual systems to reach the longest possible distance and/or the highest possible secret bit rate, without taking into account the cost of such systems. However, as the perspective of deploying QKD networks becomes a reality, the question of optimal resource allocation, intrinsically linked to cost considerations, becomes relevant and important, as is the case for any telecommunication network infrastructure. It becomes therefore necessary to consider QKD from a cost perspective, and in particular study the potential trade-offs of cost and performance that can occur in this context.
Following the above arguments, we consider in this work the design of QKD networks from a topology viewpoint, and present techniques and analytical models that can be used to optimize the spatial distribution of QKD devices and QKD nodes within specific network architectures in order to guarantee a given level of service to the network users, at a minimum cost. We also study how cost minimization arguments influence the optimal working points of QKD links. We show in particular that, in the perspective of QKD networks, individual QKD links should be operated at an optimal working distance that can be significantly shorter than their maximum attainable distance.
‡ More accurately, the primitive is that of secret key agreement using a public quantum channel and a public authenticated classical channel.
The paper is structured as follows. In section 2, we define a QKD network and discuss the topology and characteristics of the network architecture that we consider in this work. We also introduce the concept of a backbone network structure. In section 3, we present our calculations and results on network topological optimization based on cost arguments. In particular, we provide a comprehensive set of modeling tools and cost function calculations in specific network configurations, and discuss the effect of our results on the design of practical QKD networks. Finally, in section 4, we discuss open questions and future perspectives for QKD networks.
Definition and types of QKD networks
Extending the range of quantum key distribution systems to very long distances, and allowing the exchange of secret keys between multiple users, necessitates the development of a network infrastructure connecting multiple individual QKD links. Indeed, QKD links are inherently adapted only to point-to-point key exchange between the two endpoints of a quantum channel, while the decrease in signal-to-noise ratio caused by propagation loss ultimately limits their attainable range. It is then natural to consider QKD networks as a means to overcome these limitations.
A QKD network is an infrastructure composed of QKD links, i.e. pairs of QKD devices linked by a quantum and a classical communication channel connecting two separate locations, or nodes. These links are then used to connect multiple distant nodes. Based on these resources and using appropriate protocols, this infrastructure can enable the unconditionally secure distribution of symmetric secret keys between any pair of legitimate users accessing the network.
QKD networks can be categorized in two general groups [9]: networks that create an end-to-end quantum channel between the two users, and networks that require a transport of the key over many intermediate trusted nodes. In the first group, we find networks in which a classical optical function such as switching or multiplexing is applied at the node level on the quantum signals sent over the quantum channel. This approach allows multi-user QKD but cannot be used to extend the key distribution distance. Much more advanced members of this group are the quantum repeater based QKD networks. Quantum repeaters [10] can create a perfect end-to-end quantum channel by distributing entanglement between any two network users. The implementation of quantum repeaters, however, requires complex quantum operations and quantum memories, whose realization remains an experimental challenge. The same is true for the simpler version of quantum repeaters, namely quantum relays [11], which on the one hand do not require a quantum memory but on the other cannot arbitrarily extend the QKD communication distance.
Trusted repeater QKD networks: characteristics and assumptions
In this work, we are interested in the second group of networks, which we call trusted repeater QKD networks. In these networks, the nodes act as trusted relays that store locally QKD-generated keys in classical memories, and then use these keys to perform long-distance key distribution between any two nodes of the network. Therefore, trusted repeater QKD networks do not require nodes equipped with quantum memories; they only require QKD devices and classical memories as well as processing units placed within secure locations, and can thus be deployed with currently available technologies. Indeed, the implementation of such networks has been the subject of several international projects [7,8,12,13].
As we will see in detail in the following section, the analysis of trusted repeater QKD networks from a topology viewpoint and with the goal of achieving optimization based on cost considerations involves modeling several characteristics of such a network, namely the user distribution, the node distribution, the call traffic, and the traffic routing. The user and node distributions, denoted by Π and M respectively, will be considered as Poisson stochastic point processes, and will be thus modeled using convenient stochastic geometry tools. Modeling the traffic demand is particularly subtle because of the variation with respect to time and distance that this traffic may feature in a real network. Calculations here will neglect these variations and will be performed under the assumption of a uniform call volume between any pair of users, denoted as V .
Finally, routing in trusted repeater QKD networks is performed according to the following general principle: First, local keys are generated over QKD links and are stored in nodes that are placed on both ends of each link. Global key distribution is then performed over a QKD path, i.e. a one-dimensional chain of trusted relays connected by QKD links, establishing a connection between two end nodes. Secret keys are forwarded, in a hop-by-hop fashion, along these QKD paths. To ensure their secrecy, one-time pad encryption and information-theoretically secure authentication, both realized with a local QKD key, are performed. End-to-end information-theoretic security is thus obtained between the end nodes, provided that the intermediate nodes can be trusted.
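The hop-by-hop key transport described above can be illustrated with a short sketch. The following Python fragment is only a toy illustration of the forwarding principle, not the protocol stack of any deployed QKD network: authentication and key management are omitted, and all function names are ours. A global key is one-time-pad encrypted with the local QKD key of each successive link and re-encrypted at every trusted node.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings (one-time pad)."""
    return bytes(x ^ y for x, y in zip(a, b))

def relay_key(global_key: bytes, link_keys: list[bytes]) -> bytes:
    """Forward `global_key` hop by hop along a chain of trusted nodes.

    link_keys[i] is the local QKD key shared by nodes i and i+1.
    Each node decrypts the incoming ciphertext with the key of the
    previous link and re-encrypts it with the key of the next link.
    """
    ciphertext = xor(global_key, link_keys[0])          # first node encrypts
    for i in range(1, len(link_keys)):
        plaintext = xor(ciphertext, link_keys[i - 1])   # decrypt previous hop
        ciphertext = xor(plaintext, link_keys[i])       # re-encrypt for next hop
    return xor(ciphertext, link_keys[-1])               # last node decrypts

if __name__ == "__main__":
    k_global = secrets.token_bytes(32)                   # key to be distributed A -> B
    links = [secrets.token_bytes(32) for _ in range(4)]  # local QKD keys on 4 links
    assert relay_key(k_global, links) == k_global
    print("global key recovered at the far end")
```

Because every re-encryption is a one-time pad with a fresh local key, the end-to-end secrecy of this sketch rests entirely on the intermediate nodes being trusted, exactly as stated above.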
Quantum backbone network architecture
Introducing hierarchy into network design can be an extremely convenient architectural tool because it allows to break complex structures into smaller and more flexible ensembles. Indeed, such hierarchical levels offer an efficient way to help solve resource allocation problems arising in networks, ranging from network routing to network deployment planning. In this work, we will associate the notion of hierarchy in QKD networks with the existence of what we will call a quantum backbone network.
In classical networks and especially the Internet, a backbone line is a larger transmission line that carries data gathered from smaller lines that interconnect with it. By analogy with this definition, the backbone QKD network is an infrastructure for key transport that gathers the secret-key traffic of many individual QKD links. QKD backbone links and nodes clearly appear as mutualized resources shared to provide service to many pairs of users. Keeping the fruitful analogy with classical networks, we will call access QKD links the point-to-point links used to connect QKD end users to their nearest QKD backbone node. The principle of traffic routing that we described above can be conveniently transposed to the context of backbone networks. In this case, traffic from individual users is gathered locally to backbone QKD nodes. This mutualized traffic is then routed hop-by-hop over the backbone structure. Furthermore, it is important to note that the node and user point process distributions are distinct when a backbone network is considered, which might not be the case in a network without backbone.
In the following, we will derive cost functions for different QKD network configurations, under the above assumptions regarding the topology and the way traffic is routed in these networks, and as a function of the characteristics of individual QKD links. We will then use the results to discuss how QKD networks should be dimensioned, the optimal working points of QKD links, as well as the interest of adopting a hierarchical architecture, materialized by the existence of a backbone, in QKD networks.
QKD links: characterizing the rate versus distance
The main element underlying the cost optimization related to the deployment of quantum networks is the intrinsic performance of QKD links. This performance can essentially be summarized by the function R(ℓ), which gives the rate, in bit/s, of secret key that can be established over a QKD link of length ℓ.
Clearly, this secret key bit rate varies from system to system and comparisons between systems are thus difficult to establish. Moreover, comparisons have to be related to the security proofs for which the secret key bit rates have been derived. Security proofs are not yet fully categorized, although important steps in this direction have been taken [4].
As shown in figure 1, the typical curve describing the variation with distance of the logarithm of the mean rate of secret bit establishment R(ℓ) can be essentially separated into two parts:
• A linear part, in the region where the rate of secret key establishment varies as a given power of the propagation attenuation. Since the attenuation η(ℓ) depends exponentially on the distance, log R(ℓ) is linear in ℓ.
• An exponential drop-off at longer distances, where the error rate rapidly increases due to the growing contribution of detector dark counts. In this region, the decrease of the secret key rate is multi-exponential with distance: the slope of the curve representing log R(ℓ) becomes increasingly steep until a maximum distance is reached.
For completeness, it is also important to mention the possibility that, for short distances, the secret bit rate could be limited by a saturation of the detection setup. This will be the case if the repetition rate at which the quantum signals are sent in the quantum channel exceeds the bandwidth of the detector. We will however not investigate this possibility any further in the remainder of this work.
The behavior of the secret bit rate function R(ℓ) can be described using essentially three parameters, schematically shown in figure 1:
(i) the secret bit rate at zero distance, R_0;
(ii) the scaling parameter λ_QKD of the linear region, such that R(ℓ) = R_0 e^{−ℓ/λ_QKD};
(iii) the distance at which the scaling of the rate becomes exponential, which is comparable to the maximum attainable distance, D_drop ∼ D_max.
R_0 is determined by the maximum clock rate of the QKD system. In QKD relying on photon-counting detection setups, R_0 is limited by the performance of the detectors, and is usually in the Mbit/s range. Clearly, solutions that improve the performance of the detectors have a direct impact on R_0 [15,14,16,17]. For QKD systems relying on continuous variables [18], based on homodyne detection performed with fast photodiodes, the experimental bound on R_0 can be significantly higher, potentially in the Gbit/s range. The computational complexity of the reconciliation, however, currently limits R_0 to the Mbit/s range in the practical demonstrations performed so far [19].
The scaling parameter λ_QKD is essentially determined by the attenuation η(ℓ) over a quantum channel of length ℓ, and by a coefficient r that is mainly related to the security proof that can be applied to the experimental system. In the case of a typical network based on optical fibers, the attenuation η(ℓ) can be parametrized by an attenuation coefficient α (in dB/km) as η(ℓ) = 10^{−αℓ/10} (for the scaling of the attenuation in free space, see [4]). In the linear part of the curve shown in figure 1, the rate R(ℓ) varies as a given power r of the attenuation, R(ℓ) = R_0 η(ℓ)^r. We can thus define the scaling parameter as λ_QKD = 10/(α r ln 10). For QKD performed at telecom wavelengths, with protocols optimized for long-distance operation, we can take α = 0.22 dB/km and r = 1, which leads to λ_QKD = 19.7 km as the typical scaling distance for such QKD systems. This parameter is important since, as we shall see in the following, the optimal working distance of QKD links essentially scales as λ_QKD.
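As an illustration, the following sketch encodes the behaviour of R(ℓ) described above, using the values quoted in the text (α = 0.22 dB/km, r = 1, R_0 in the Mbit/s range, D_drop ∼ 100 km). The functional form used for the drop-off region is only a placeholder of ours, not the detector model of the authors.

```python
import math

ALPHA = 0.22      # fibre attenuation coefficient [dB/km]
R_EXP = 1.0       # power r at which the rate scales with the attenuation
R0 = 1.0e6        # secret key rate at zero distance [bit/s] (Mbit/s range)
D_DROP = 100.0    # distance of the rapid drop-off [km], ~ D_max

# Scaling length of the linear region: lambda_QKD = 10 / (alpha * r * ln 10)
LAMBDA_QKD = 10.0 / (ALPHA * R_EXP * math.log(10.0))

def secret_key_rate(ell_km: float) -> float:
    """Toy model of R(l): exponential decay with a cut-off near D_DROP.

    The linear region follows R(l) = R0 * exp(-l / LAMBDA_QKD); the behaviour
    beyond D_DROP is a crude placeholder for the multi-exponential drop
    caused by detector dark counts.
    """
    if ell_km >= D_DROP:
        return 0.0
    rate = R0 * math.exp(-ell_km / LAMBDA_QKD)
    return rate * (1.0 - math.exp(-(D_DROP - ell_km) / 5.0))

if __name__ == "__main__":
    print(f"lambda_QKD = {LAMBDA_QKD:.1f} km")   # ~19.7 km for alpha = 0.22, r = 1
    for ell in (0, 20, 50, 80, 99):
        print(f"R({ell:3d} km) = {secret_key_rate(ell):.3e} bit/s")
```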
Finally, the rapid drop-off of the secret key rate at distances around D_drop arises when the probability to detect a signal sent in the quantum channel, p_s, becomes comparable to the probability to detect a dark count per detection time slot, p_d; D_drop is thus the distance at which p_s ≈ p_d. In practice, when working with InGaAs single-photon avalanche photodiodes (SPADs) operating at 1550 nm, the ratio η_d/p_d is optimized by varying the different external parameters of the detector such as the temperature, gate voltage or time slot duration. The best published performances for InGaAs SPADs [20,21] report dark count probabilities p_d ≃ 10^{−7} to 10^{−6} for a detection efficiency η_d around 10%, which leads to D_drop ∼ D_max ∼ 100–120 km for QKD systems employing such detectors. For a similar detection efficiency, the best available superconducting single-photon detectors (SSPDs) present dark counts p_d ≃ 10^{−8} to 10^{−6} [22], leading to a maximum distance that can reach 140 km.
Toy model for QKD network cost derivation: a linear chain between two users
The linear chain as a simple asymptotic model of a quantum backbone network
As a first example of QKD network cost derivation and optimization, we will consider what we will call the linear chain scenario. In particular, we consider two users, A and B, that want to rely on QKD to exchange secret keys in a scenario that imposes the use of several QKD links:
• The two QKD users are very far away: their distance is L = ||AB|| with L ≫ D_max.
• The two QKD users are exchanging secret bits at a very high rate. We will call V the volume of calls between the two users A and B (expressed as a rate of secret key bits per second), and will assume V ≫ R_0.
Because of the first condition, many intermediate nodes have to be used as trusted key relays to ensure key transport over QKD links from A to B. Because of the second condition, many QKD links have to be deployed in parallel to reach a secret key distribution rate capacity at least equal to the traffic volume.
The linear chain QKD network scenario is in a sense the simplest situation in which an infrastructure such as a quantum backbone network, described in section 2, is required. It therefore provides an interesting toy model for cost optimization and topological considerations.
Cost model: assumptions and definitions
The generic purpose of cost optimization is to ensure a given objective in terms of service, at the minimum cost. In the case of the linear chain scenario, this objective is to be able to offer a secret bit rate of V bit/s between two users A and B separated by a distance L, while minimizing the cost of the network infrastructure to be deployed.
In this and all subsequent models, we will consider as the total cost C of a QKD network the cost of the equipment to be deployed to build the network. This is a simplifying assumption, since it is common, in network planning, to differentiate between capital and operating expenditures. We have chosen here to restrict our models to the capital expenditures of QKD networks and will consider that their cost arises from two sources:
• The cost of QKD link equipment to be deployed. We will denote as C_QKD the unit cost per QKD link. C_QKD essentially corresponds to the cost of a pair of QKD devices. Note that here we implicitly assume that the deployment of optical fibers is free, or more precisely that it is done independently and prior to the deployment of a QKD network.
• The cost of node equipment, which we denote as C_node. C_node typically corresponds to the hardware cost (for example, specific routers need to be deployed inside QKD nodes), as well as the cost of the security infrastructure that is needed to make a QKD node a trusted and secure location.
As explained before and shown on figure 2, a linear chain QKD network is composed of a one-dimensional chain where adjacent QKD nodes are connected by QKD chain segments, each segment being potentially composed of multiple QKD links to ensure that a capacity equal to the traffic volume is reached.
Figure 2. The one-dimensional QKD chain linking two QKD users, Alice and Bob, over a distance L. Since L is considered much longer than the maximum span of a QKD link, D_max, intermediate QKD nodes are needed to serve as trusted relays.
Total cost of the linear chain QKD network
For convexity reasons, discussed in more detail at the end of this section, the topology ensuring the minimum cost corresponds to placing QKD nodes at regular intervals between A and B. We denote by ℓ the distance between two intermediate nodes, which then corresponds to the distance over which QKD links are operated within the linear chain QKD network. As we shall see, the question of cost minimization will reduce to finding the optimum value of the QKD link operational distance, ℓ_opt, for the linear chain QKD network.
There are clearly two antagonistic effects in the dependence of the total cost of the considered network on ℓ:
• On the one hand, if QKD links are operated over long distances, their secret bit capacity R(ℓ) decreases. This imposes the deployment of more QKD links in parallel on each chain segment linking two adjacent QKD nodes, and thus tends to increase the total cost.
• On the other hand, it is clear that increasing the operating distance ℓ decreases the required number of intermediate trusted relay nodes, which leads to a decreased cost.
The optimum operating distance ℓ_opt corresponds to the value of ℓ that minimizes the total cost function C. With L/ℓ chain segments, each carrying V/R(ℓ) parallel QKD links, and L/ℓ intermediate nodes, the total cost reads

C = (L/ℓ) [ V C(ℓ) + C_node ],    (1)

where C(ℓ) = C_QKD/R(ℓ) is the per-bit cost of a QKD link of length ℓ. It is important to note that, in the above equation, we have made the assumption that we can neglect the effects of discretisation. This means that the length of the chain, L, can be considered much longer than the length of individual QKD links, ℓ, and that the traffic volume V can be considered as a continuous quantity, neglecting the discrete jumps associated with variations in the number of calls.
Cost minimization and optimum working distance of QKD links
In the asymptotic limit of very high traffic volume V, the cost of nodes can be neglected in comparison with the cost of QKD devices. The expression of the total cost in equation (1) then reduces to the first term, and we have the following interesting properties:
• The total cost is directly proportional to the product of the traffic volume V and the total distance L.
• Optimizing the total cost C is equivalent to minimizing C(ℓ)/ℓ where C(ℓ) = C QKD /R(ℓ) is the per-bit cost of one unit of secret key rate.
Furthermore, assuming that QKD links are operated in the linear part of their characteristic (see figure 1), we can write C(ℓ) = (C_QKD/R_0) e^{ℓ/λ_QKD}. Then, the value of ℓ_opt that minimizes the quantity C(ℓ)/ℓ can be explicitly derived as ℓ_opt = λ_QKD, where λ_QKD was defined in section 3.1 as the natural scaling parameter of the function R(ℓ).
In the general case, the second term of the cost function in equation (1), corresponding to the cost of nodes, cannot be neglected. This second term does not depend on the volume of traffic V , and is always decreasing with ℓ. As a consequence, the optimum operating distance that minimizes C will always be greater than λ QKD , the value minimizing the first term in equation (1).
Under the assumption that the optimum distance remains in the linear part of the function log R(ℓ), we can derive the following implicit relation for ℓ_opt:

(ℓ_opt/λ_QKD − 1) e^{ℓ_opt/λ_QKD} = R_0 C_node / (V C_QKD).

The above equation allows for a quantitative discussion of the "weight" of the nodes in the behavior of the cost function. Indeed, we can see that the influence of the node cost is potentially important and can lead to an optimum working distance that is significantly greater than λ_QKD when C_node is not small compared to V C_QKD/R_0.
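A minimal numerical sketch of this optimization, assuming the cost expression of equation (1) as reconstructed above and purely illustrative values for C_QKD, C_node, V and L (none of these numbers are taken from the text), is given below. It recovers an optimum link length larger than λ_QKD and checks it against the implicit relation.

```python
import math

LAMBDA_QKD = 19.7          # km, scaling length of R(l)
R0 = 1.0e6                 # bit/s, rate at zero distance
C_QKD = 100_000.0          # illustrative unit cost of a pair of QKD devices
C_NODE = 50_000.0          # illustrative unit cost of a trusted node
V = 1.0e5                  # bit/s, traffic volume between A and B
L = 1000.0                 # km, total chain length

def chain_cost(ell: float) -> float:
    """Total chain cost C = (L/ell) * (V * C_QKD / R(ell) + C_NODE)."""
    rate = R0 * math.exp(-ell / LAMBDA_QKD)
    return (L / ell) * (V * C_QKD / rate + C_NODE)

def optimal_distance() -> float:
    """Coarse grid search for the link length minimizing the chain cost."""
    grid = [0.1 * k for k in range(1, 1500)]          # 0.1 ... 149.9 km
    return min(grid, key=chain_cost)

if __name__ == "__main__":
    ell_opt = optimal_distance()
    print(f"optimal link length: {ell_opt:.1f} km  (lambda_QKD = {LAMBDA_QKD} km)")
    # check the implicit relation (l/lambda - 1) e^(l/lambda) = R0*C_NODE/(V*C_QKD)
    lhs = (ell_opt / LAMBDA_QKD - 1.0) * math.exp(ell_opt / LAMBDA_QKD)
    rhs = R0 * C_NODE / (V * C_QKD)
    print(f"implicit relation: lhs = {lhs:.3f}, rhs = {rhs:.3f}")
```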
Existence of an optimum working distance and convexity of C(ℓ)
In most of the explicit derivations performed in this work, we assume a purely linear dependence of log R(ℓ) on ℓ. This assumption is convenient but remains an approximation, since it does not take into account the drop-off of R(ℓ) occurring around D_drop.
It is however possible to demonstrate the existence of an optimum working distance for QKD links in a more general case, by relying solely on the assumption that the function R(ℓ) is log-concave, i.e. that log R(ℓ) is concave. The log-concavity of R(ℓ) can be checked on a simple model inspired by the secret key rate formula for the BB84 QKD protocol with perfect single photons [4]. In particular, in this case we have R(p) = 1 − 2h(p), where h(p) is the binary entropy associated to a quantum bit error rate p, and we assume that the dependence of the error rate p on the distance is of the form p = a + b/η(ℓ) = a + b e^{ℓ/λ_QKD}, where a and b are parameters linked to the detection system. In this setup, it is straightforward to verify numerically that log R(ℓ) is concave for all reasonable values of a and b.
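This numerical check is easy to reproduce. The sketch below uses illustrative values a = 0.01 and b = 10^{-4} (our choice, not values from the text) and verifies the discrete concavity of log R(ℓ) on a grid of distances.

```python
import math

LAMBDA = 19.7                # km, scaling length
A, B = 0.01, 1.0e-4          # illustrative error-rate parameters

def h(p: float) -> float:
    """Binary entropy."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate(ell: float) -> float:
    """Toy BB84 rate R = 1 - 2 h(p), with p = a + b * exp(ell / lambda)."""
    p = A + B * math.exp(ell / LAMBDA)
    return 1.0 - 2.0 * h(p) if 0.0 < p < 0.5 else 0.0

def is_log_concave(xs) -> bool:
    """Check discrete concavity of log R on the grid xs."""
    logs = [math.log(rate(x)) for x in xs if rate(x) > 0.0]
    second_diffs = [logs[i + 1] - 2 * logs[i] + logs[i - 1]
                    for i in range(1, len(logs) - 1)]
    return all(d <= 1e-12 for d in second_diffs)

if __name__ == "__main__":
    grid = [0.5 * k for k in range(1, 200)]   # 0.5 ... 99.5 km
    print("log R(l) concave on grid:", is_log_concave(grid))
```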
Since C(ℓ), the per-unit cost of secret bit rate on a QKD link, is proportional to 1/R(ℓ), the log-concavity of R(ℓ) implies the log-convexity of C(ℓ), which itself implies the convexity of C(ℓ). Finally, we can write the total cost of the linear chain QKD network as the sum of the cost of each chain segment and the cost of the node equipment, namely C(ℓ_0, …, ℓ_n) = V Σ_{i=0}^{n} C(ℓ_i) + n C_node.
In the above equation, ℓ_0 denotes the distance between A and the first node, ℓ_k, k = 1, …, n−1, the distance between the kth node and the (k+1)th node, and ℓ_n the distance between the last node and B. For a convex function C, the minimization of Σ_{i=0}^{n} C(ℓ_i) under the constraint Σ_{i=0}^{n} ℓ_i = L, where L is the distance between A and B, is obtained with ℓ_i = L/(n+1) for all i. Once we set ℓ_i = L/(n+1), the cost expression in the above equation only depends on n, or equivalently on ℓ = L/(n+1). For large L, we can disregard the constraint that L/ℓ must be an integer and approximate (n+1)/n by 1, which then leads to equation (1).
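The convexity argument can be illustrated numerically: for a convex per-link cost, no non-uniform placement of the intermediate nodes beats equal spacing. A minimal sketch, with an exponential per-link cost and arbitrary chain parameters of our choosing, is shown below.

```python
import math
import random

LAMBDA = 19.7   # km

def per_link_cost(ell: float) -> float:
    """Convex per-bit cost C(l) ~ exp(l / lambda), up to a constant factor."""
    return math.exp(ell / LAMBDA)

def chain_segment_cost(lengths) -> float:
    return sum(per_link_cost(l) for l in lengths)

if __name__ == "__main__":
    random.seed(1)
    L, n_links = 200.0, 10                  # total length, number of chain segments
    equal = [L / n_links] * n_links
    best_random = float("inf")
    for _ in range(10_000):
        cuts = sorted(random.uniform(0, L) for _ in range(n_links - 1))
        lengths = [b - a for a, b in zip([0.0] + cuts, cuts + [L])]
        best_random = min(best_random, chain_segment_cost(lengths))
    print(f"equal spacing                 : {chain_segment_cost(equal):.3f}")
    print(f"best of 10^4 random placements: {best_random:.3f}  (never lower)")
```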
Cost of QKD networks: towards more general models
The linear chain toy model developed in section 3.2 provides an interesting intuition into the behavior of the cost function. The most important result is that, in the limit of large traffic rates and/or low cost of QKD nodes, the QKD network cost optimization reduces to the minimization of C(ℓ)/ℓ ∼ 1/(R(ℓ)ℓ). This leads to the existence of an optimum working distance, ℓ opt , at which QKD links need to be operated in order to minimize the global cost of the network deployment.
The linear chain QKD network model is however too restrictive in many aspects: it is one-dimensional and limited to the description of a network providing service to two users. We will now consider more general models, which allow us to study the more realistic case of QKD networks spanning a two-dimensional area, and providing service to a large number of users.
Modeling network spatial processes with stochastic geometry
Stochastic geometry is a very useful mathematical tool for modeling telecommunication networks. It has the advantage of being able to describe the essential spatial characteristics of a network using a small number of parameters [23]. It thus allows one to study some general characteristics of a given network, like the behavior of its cost function, under a restricted set of assumptions. This approach fits well with the objectives of this work, and so we have employed stochastic geometry tools to model a QKD backbone network.
As we shall see, instead of calculating the cost of a QKD network for fixed topologies and traffic usage, we will try to understand the general behavior of the cost function by calculating the average cost function, where the average will be taken over some probability distributions of spatial processes modeling QKD users and QKD node locations.
The collection of spatial locations of the QKD nodes over the plane will be represented by a spatial point process M = {X_i}. Then, as illustrated in figure 3, we define a corresponding partition of the plane § as the ensemble of the convex polygons {D_i}, known as the Voronoï cells of nuclei {X_i}. Each Voronoï cell D_i is constructed by taking the intersection of the half-planes bounded by the bisectors of the segments [X_i, X_j] and containing X_i. The system of all the cells creates the so-called Voronoï partition. Finally, we define the Delaunay graph as the graph whose vertices are the {X_i} and whose edges are formed by connecting each Voronoï cell nucleus X_i with the nuclei of the adjacent Voronoï cells.
§ More accurately, the geometrical object we consider here is a tessellation, the boundaries of which are neglected.
Figure 3. Thick black lines: Voronoï partition associated to a given distribution of nodes. Thin black lines: the Delaunay graph, connecting the centers of neighboring Voronoï cells. In the backbone QKD network model, backbone QKD links correspond to the Delaunay graph, while backbone nodes correspond to the nuclei of the Voronoï cells. Also represented is a typical end-to-end path, between two QKD users u and v, under the Markov-path routing policy (see section 3.6.2 for details).
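The Voronoï partition and the Delaunay graph introduced here can be generated with standard computational-geometry tools. The following sketch, with randomly drawn and purely illustrative node positions, builds both objects with scipy.spatial and lists the resulting candidate backbone links.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

rng = np.random.default_rng(0)
L = 100.0                                    # side of the square domain [km]
nodes = rng.uniform(0.0, L, size=(30, 2))    # illustrative backbone node positions X_i

vor = Voronoi(nodes)      # Voronoi cells D_i (the backbone cells)
tri = Delaunay(nodes)     # Delaunay triangulation of the nuclei X_i

# Edges of the Delaunay graph = candidate backbone QKD links
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
        edges.add((a, b))

lengths = [np.linalg.norm(nodes[a] - nodes[b]) for a, b in edges]
print(f"{len(vor.point_region)} Voronoi cells, {len(edges)} backbone links, "
      f"mean link length {np.mean(lengths):.1f} km")
```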
User distribution and traffic
In the remainder of this paper, and in contrast to the linear chain toy model developed in section 3.2, we will consider QKD networks providing secret key distribution service to a large number of users, distributed over a two-dimensional area.
The user distribution will be modeled by a Poisson stochastic point process, Π = {U_i}, defined over the support D of size L × L, while the average number of QKD users will be denoted by µ. The point process Π will also be assumed to have an intensity density f satisfying µ = ∫_D f < ∞, which means that for every set E the number of users within E is a Poisson random variable with mean ∫_E f.
Finally, whenever this additional assumption proves useful for performing the desired calculations, we will consider that the distribution of users is homogeneous over D, i.e. that the intensity function f is constant over D. We will denote this constant user density by 1/α_u^2, so that α_u corresponds to a distance (it can be shown that for large L, α_u/2 is the average distance between the origin and the point U_i closest to the origin). In this case we will have µ = (L/α_u)^2 (equation (4)). For the traffic model, we will generalize the assumption made for the linear chain QKD network model: the traffic between any pair of QKD users will be seen as an aggregate volume of calls (expressed in units of secret key exchange rate). The volume of traffic will be assumed to be the same between any pair of users, and will be denoted by V.
QKD networks with or without a hierarchical architecture
As was discussed in section 2, it is interesting to study to which extent deploying a structure such as a backbone, which is synonymous with the existence of hierarchy in a network, would be advantageous in the case of QKD networks. To this end, continuing to place ourselves in the perspective of cost optimization, we will derive cost functions for QKD network models with or without a quantum backbone. The obtained results will then allow us to establish comparisons and thus discuss the interest of hierarchy in quantum networks.
Cost function for a two-dimensional network without backbone: the generalized QKD chain model
A direct way to generalize the two-user one-dimensional chain model presented in section 3.2 is simply to assume that a chain of QKD links and intermediate nodes will be deployed between each pair of users u and v within the QKD network. Each chain will therefore be dimensioned in order to accommodate a volume V of calls. The routing of calls is trivial on such a network. The distance between the intermediate nodes on a chain will be denoted by ℓ, as in section 3.2.
Here as well, we neglect the effects of discretisation, i.e. the length of the chains, ||u − v||, will be considered much longer than the length of individual QKD links, ℓ, and the traffic volume V will be considered a continuous quantity. Under these assumptions, we know that the cost associated with a pair of users located respectively at positions u and v and exchanging a volume V of calls is (see equation (1))

C_pair(u, v) = (||u − v||/ℓ) [ V C(ℓ) + C_node ].    (5)

Recall that the distribution of users is described by a Poisson point process Π = {U_i}. Then, we can calculate the average total cost of the QKD network, C, by summing up the costs C_pair(U_k, U_l) associated with the QKD chains deployed between each pair of users over k ≠ l, and then averaging this sum over the stochastic user point process Π:

C = (1/ℓ) [ V C(ℓ) + C_node ] δ,    (6)

where δ is the average sum of distances over all pairs of two different users, namely

δ := E[ Σ_{k ≠ l} ||U_k − U_l|| ].    (7)

For a homogeneous Poisson point process Π with spatial density of users σ = α_u^{−2} over a square domain D of size L × L, it is possible to perform the exact integral calculation of δ, yielding

δ = γ σ^2 L^5,    (8)

where γ is a numerical constant.
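A quick Monte Carlo estimate of δ illustrates this scaling. The sketch below draws Poisson-distributed users on an L × L square and checks that δ grows as σ²L⁵; the numerical proportionality constant it prints is only an empirical estimate and is not asserted to be the γ derived in the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def delta_estimate(L: float, sigma: float, samples: int = 200) -> float:
    """Monte Carlo estimate of delta = E[ sum_{k != l} ||U_k - U_l|| ] for a
    Poisson field of users with density sigma on an L x L square."""
    total = 0.0
    for _ in range(samples):
        n = rng.poisson(sigma * L * L)
        pts = rng.uniform(0.0, L, size=(n, 2))
        diff = pts[:, None, :] - pts[None, :, :]
        total += np.sqrt((diff ** 2).sum(axis=-1)).sum()   # ordered pairs (k, l)
    return total / samples

if __name__ == "__main__":
    sigma = 0.01                      # users per km^2 (illustrative)
    for L in (50.0, 100.0, 200.0):
        d = delta_estimate(L, sigma)
        print(f"L = {L:5.0f} km: delta = {d:.3e},  delta / (sigma^2 L^5) = "
              f"{d / (sigma ** 2 * L ** 5):.3f}")
```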
Cost function for a two-dimensional QKD network with backbone
The backbone architectures we will consider in this work are topological: for a given distribution of QKD nodes, which will be either deterministic (section 3.6.1) or stochastic (section 3.6.2), the backbone cells and backbone links will strictly coincide with the Voronoï cells and the edges of the corresponding Delaunay graph defined above, respectively.
Routing traffic over a QKD backbone network
The backbone hierarchical structure provides a convenient way to solve the routing problem that we have adopted in our cost calculations. For a given origin-destination pair of users (A,B) wishing to exchange a volume of calls V AB , the traffic is routed in the following way: • The traffic goes from A to its nearest QKD backbone node X A (center of the backbone cell containing A), through a single QKD link (an access link).
• The traffic is routed through the optimal (least costly) path over the backbone QKD network from X_A to X_B (the QKD node closest to B).
• The traffic goes from X B to B.
The routing rule defined above can be characterized as geographical, in the sense that it is driven by distance considerations. However, determining the optimal path in a given backbone network of arbitrary topology may not be a tractable problem. Even in standard networks, where the optimal path is the shortest one, an analytic computation of the average length/cost is not always possible. In the context of backbone nodes distributed as a Poisson point process, an alternative suboptimal routing policy, the so-called Markov path, has been proposed, and leads to an analytic computation of the average path length. In QKD networks, the cost is a non-linear function of the length and some adjustments are required. We consider two different geometries for the backbone:
(i) A square backbone QKD network (section 3.6.1), i.e. a regular structure where nodes and links form a regular graph of degree 4. In this case, finding the length of the shortest path between two nodes is trivial: backbone nodes X_A, X_B can be designated by cartesian coordinates (x_A, y_A), (x_B, y_B) and the shortest path length is simply |x_A − x_B| + |y_A − y_B|. Moreover, cost calculations are simplified by the fact that the links between two neighboring nodes of the backbone all have the same length.
(ii) A stochastic backbone network (section 3.6.2), where backbone nodes are distributed following a random point process and backbone cells are the corresponding Voronoï partition. For this stochastic backbone, we have used a routing technique called Markov-path routing for which, as previously established by Tchoumatchenko et al. [24,25], the average length of routes can be calculated. In the following, we will adapt these calculations to our cost function C(ℓ).
Generic derivation of the cost function for QKD backbone networks
For a QKD network with a backbone structure, we define M = {X_i} as the point process of the network node distribution, and Π = {U_i} as the point process of the network user distribution, with intensity density f. Each node X_i is connected to some nodes in its neighborhood and to the clients belonging to the associated cell D_i. In the following, we will assume that M is statistically independent of Π, and that the cells D_i are the Voronoï cells associated to M, that is D_i = {x : ||x − X_i|| ≤ ||x − X_j|| for all j}. In the case of the QKD backbone network, our routing policy allows us to calculate C_pair(u, v; M), the QKD equipment cost associated with sending one unit of call between users u and v over a network whose backbone nodes are described by the point process M:

C_pair(u, v; M) = C(||u − X_i||) + C_hop(i, j; M) + C(||v − X_j||),

where X_i and X_j are the backbone nodes closest to u and v respectively, C(ℓ) is the cost spent to send a secret bit on a QKD link over a distance ℓ, and C_hop(i, j; M) is the cost to send a secret bit between the nodes X_i and X_j of the backbone for the given routing policy. Given that the volume of traffic between each pair of users is V, the average total cost C of the QKD network then reads

C = V E[ Σ_{k ≠ l} C_pair(U_k, U_l; M) ] + N^2 C_node,

where N^2 is the average number of nodes of the backbone deployed in the domain D of size L × L. Here E denotes the average over the spatial distributions of users and backbone nodes, that is over the realizations of Π and M. Since M and Π are supposed independently distributed, we may compute this average successively with respect to M and Π. The total cost can thus be separated into three terms,

C = C_loc + C_bb + N^2 C_node,    (10)

where C_loc takes into account all connections from one client to the closest backbone node, C_bb all connections from one backbone node to another, and C_node is the cost of node equipment. The explicit models that we will study will allow us to compare the behavior of these different terms and thus to understand how QKD network backbone topologies can be optimized.
Cost of the square backbone QKD network
Network model: We consider, as a first simple example, the case of a QKD backbone network that has a perfectly regular topology, and for which the shortest path length between two backbone nodes is easily determined. The architecture we consider is the following: users are distributed as previously over a large area D of size L × L, and the backbone QKD network is a regular graph of degree 4, i.e. the backbone QKD nodes and links constitute a square lattice. The structure of the square backbone QKD network and the way a call is routed are summarized in figure 4. The free parameter with respect to which we will perform the cost optimization is the size of the backbone cells, α_bb. We will also make the assumption that the user density function f is uniform over D.
Computation of C_bb for the square network: We set X_k = k α_bb and D_k = X_k + α_bb [−1/2, 1/2]^2 with k ∈ Z^2 and, for all k ≠ l,

C_hop(k, l) = ||k − l||_1 C(α_bb).

Here, ||k − l||_1 corresponds to the number of hops between X_k and X_l and C(α_bb) to the per-bit cost of one hop. Calling µ_k the average number of QKD users in the backbone cell k, we have

C_bb = V Σ_{k ≠ l} µ_k µ_l C_hop(k, l) = V C(α_bb) µ^T Γ µ,

where µ is the column vector with entries µ_k, k ∈ Z^2, and Γ is the Toeplitz array indexed on Z^2 with entries Γ_{k,l} = ||k − l||_1.
Since the density of users f is constant and equal to σ on its support D, where D := ∪_{k∈{0,…,N−1}^2} D_k, µ_k is the same for all cells D_k: µ_k = µ/N^2, with N^2 denoting the total number of backbone cells and µ = (L/α_u)^2 the mean number of users over D (see equation (4)). Hence, we find

C_bb = V C(α_bb) (µ/N^2)^2 Σ_{k,l} ||k − l||_1 ≃ (2/3) V C(α_bb) µ^2 N,

where the asymptotic equivalence holds as N → ∞. Using N ∼ L/α_bb and equation (4), we obtain, as N → ∞,

C_bb ≃ (2/3) [C(α_bb)/α_bb] µ^2 V L.    (12)

In the latter expression, we have four multiplicative terms: (i) 2/3, a constant depending only on the dimension and the geometry of the backbone network (for a cube of dimension d, we could generalize our calculation and would find d/3); (ii) C(α_bb)/α_bb, a cost function depending only on the distance α_bb between the nodes of the backbone; (iii) µ^2 V, the square of the mean number of users times the volume of calls per pair of users, i.e. in our communication model, the total volume of the communications over which the total cost is computed; (iv) L, the size of the support of f, that is of the domain where the users lie.
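The prefactor 2/3 can be checked directly: it is the large-N limit of the mean number of hops between two cells chosen uniformly on an N × N grid, divided by N. A short verification follows.

```python
import numpy as np

def mean_hops(N: int) -> float:
    """Mean L1 (Manhattan) distance between two cells chosen uniformly
    on an N x N square backbone."""
    idx = np.arange(N)
    # mean |x1 - x2| for x1, x2 uniform on {0, ..., N-1}
    mean_1d = np.abs(idx[:, None] - idx[None, :]).mean()
    return 2.0 * mean_1d            # two independent coordinates

if __name__ == "__main__":
    for N in (10, 50, 200):
        print(f"N = {N:4d}: mean hops = {mean_hops(N):7.2f},  (2/3) N = {2 * N / 3:7.2f}")
```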
To better understand the derived expression for C_bb, it is interesting to compare it with C_loc and C_node. Indeed, we can show that C_loc ≃ µ^2 C̄, where C̄ stands for the per-bit cost function C averaged over one cell. In the case of the square network with α_bb × α_bb square cells, these cells are contained between two circles of radius α_bb/2 and α_bb √2/2 < α_bb. Since C is an increasing function of distance, we have C̄ < C(α_bb), and we can thus derive the following important property: in the limit of large networks, i.e. for L ≫ α_bb, the backbone cost is dominant over the local cost. We will see in the following section that this property is preserved for a backbone with randomly positioned nodes and an appropriate routing policy. Furthermore, we will see that for large L, the backbone node equipment cost C_node is negligible. Therefore, to optimize the cost (equation (10)), we only need to minimize C_bb. Assuming a square regular backbone, this means choosing α_bb so as to minimize C(α_bb)/α_bb, exactly as in the case of the linear chain QKD network model of section 3.2.
Hence, if we take C(ℓ) = (C_QKD/R_0) e^{ℓ/λ_QKD}, the cost is minimized for

α_bb^opt = λ_QKD.    (13)
Cost calculation for a stochastic QBB with Markov-path routing
We now compute C_loc and C_bb in the case where the routing policy is the so-called Markov path, as proposed in [25], where some general formulae are given for computing average costs in a general framework (see also [24]). The routing policy is defined as follows. First, all pairs of nodes whose cells share a common edge are connected; the corresponding graph is a Delaunay graph. Next, given two users A and B with respective positions u and v, we define the finite sequence of nodes X_{k_0}, X_{k_1}, …, X_{k_n} associated with the successive cells encountered when drawing a line from u to v. This routing policy is illustrated in figure 3.
By definition, X_{k_0} and X_{k_n} are the centers of the cells containing u and v respectively. For the local contribution we can write

C_loc = V µ^2 κ_loc,    (14)

where µ := ∫ f is the average total number of users and, by stationarity of the point process M, κ_loc = 2 E[ C(||X_0||) ], with X_0 defined as the center of the cell containing the origin. Note that κ_loc denotes the average local cost per secret bit and per pair of users. If M is a Poisson point process with intensity α_bb^{−2}, we further have P(||X_0|| > t) = P(#{X_k : ||X_k|| ≤ t} = 0) = exp(−π t^2 α_bb^{−2}), and hence

κ_loc = 4π α_bb^{−2} ∫_0^∞ C(t) t e^{−π t^2 α_bb^{−2}} dt.    (15)

For C_bb, we can write the average of the per-hop costs accumulated along the Markov paths. Applying [25, Theorem 2] yields

κ_bb := 2 α_bb^{−1} ∫_{(r,ψ,φ)∈A} C(2 α_bb r sin((ψ − φ)/2)) [cos(φ) − cos(ψ)] r^2 e^{−π r^2} dψ dφ dr,    (16)

with A = R_+ × {(ψ, φ) : 0 < |φ| ≤ ψ < π}. Finally we find that

C_bb = V κ_bb δ,    (17)

where δ is the average total distance between two different users defined in equation (7) and computed in equation (8), and κ_bb denotes the average backbone cost per secret bit and per unit length of the distance separating a pair of users. From equations (10), (14) and (17), and observing that here the average total number of backbone cells is N^2 = (L/α_bb)^2, we find

C = µ^2 V κ_loc + δ V κ_bb + (L/α_bb)^2 C_node,    (18)

where µ^2 and δ are related to the spatial distribution of the users, and κ_loc and κ_bb are constants related to the geometry of the backbone and to the routing policy. For users uniformly distributed in a square of side length L with intensity α_u^{−2}, we have µ^2 ≃ (L/α_u)^4 and δ ≃ L^5/α_u^4. Using (15), (16), (18) and the above approximations of µ^2 and δ, we see that the total cost C only depends on L, α_u and α_bb. Now, for given α_u and L, we take α_bb so that C is minimized and examine which term in the right-hand side of (18) dominates the total cost C as L → ∞ in this context. To this end, we first study each term separately. We let c denote a constant not depending on L or α_bb in the following reasoning. Observe that since C is convex and increasing, C(ℓ) ≥ c ℓ. Using this in (15) and in (16), we get C_loc ≥ c α_bb L^4 and C_bb ≥ c L^5, respectively. Concerning the last term, we have C_node ≈ c L^2/α_bb^2. It follows that at fixed L, C_loc → ∞ as α_bb → ∞ and C_node → ∞ as α_bb → 0, from which we can deduce that the optimal α_bb stays away from 0 and ∞. Now, clearly, if α_bb stays away from 0 and ∞, the above bounds show that C_bb dominates as L → ∞. Hence, for large L, the optimal intensity α_bb is the one that minimizes C_bb or, equivalently, κ_bb. To find this optimal intensity, the following result is useful for an exponential cost C(ℓ) = (C_QKD/R_0) e^{ℓ/λ_QKD}.
Lemma 3.1. Define κ_bb as in equation (16) with C(ℓ) = (C_QKD/R_0) e^{ℓ/λ_QKD}. Then κ_bb admits a closed-form expression in terms of s = λ_QKD/α_bb.
Proof. Setting s = λ_QKD/α_bb, integrating first with respect to r and then performing the remaining angular integrations yields the desired expression.
Using Lemma 3.1, the α_bb minimizing κ_bb, denoted α_bb^opt below, can easily be calculated using a numerical procedure; the resulting optimal cell size remains of the order of λ_QKD. This result should be compared with the result of equation (13), where the backbone geometry is deterministic and also characterized by the node intensity 1/α_bb^2. The two results show that the choice of the backbone and routing policy does influence the optimal node intensity, albeit in a modest way.
From cost optimization results to QKD network planning
Matching QKD network topology with the optimum working distance of QKD links
The calculations in sections 3.6.1 and 3.6.2 point to one common result: it appears that, for large networks, the costs associated with the QKD devices that have to be deployed in backbone nodes to serve the demand are always dominant over the local costs associated with the end connections between QKD users and backbone nodes.
Moreover, the optimization of the backbone costs indicates that the minimum cost will be reached when the typical distance between backbone nodes is of the order of λ_QKD, the scaling parameter of the curve R(ℓ).
These results lead to the following statements:
• When a QKD network deployment is planned, it seems optimal to choose the location of network nodes so that QKD links will be operated over distances comparable to the optimal distance ℓ_opt. As we have seen in our different models, ℓ_opt is always lower bounded by a pre-factor times λ_QKD. Indeed, when the total cost of node equipment can be neglected compared to the cost of QKD devices, as is the case for large networks, the optimum distance ℓ_opt is comparable to λ_QKD, which is roughly equal to 20 km. This indicates that current QKD technologies, for which D_max is already significantly larger than 20 km, are well suited for metropolitan operation. On the other hand, the typical distance between amplifiers in optical wide area networks is of the order of 80 km. If we wanted to deploy trusted QKD networks with the current generation of QKD devices, the QKD links would have to be operated close to their maximum distance, where the unit of secret bit rate becomes very expensive. Although technically already feasible, the deployment of wide area QKD networks thus remains a challenge. We can however anticipate that this challenge will be overcome within the next years, as new generations of QKD protocols and devices, able to generate keys at higher rates and over larger maximum distances, are already being presented [26,27,28].
• The results on cost minimization that we have obtained could provide some helpful guidelines for QKD device developers: they may help promote the idea that what will really matter, in the perspective of real network deployment, is to focus on the optimization of systems around typical network-optimum working distances. Optimizing QKD devices in this regime means reducing the cost of a unit bit rate at a reasonable distance, where the throughput of the QKD link is not considerably smaller than R_0. It will of course always be profitable to design QKD devices that can reach very long distances, but as discussed in [29], from a system development point of view it can be significantly different to optimize QKD devices to reach the longest possible distance D_max, and to optimize them so that the cost of a unit of bit rate is as low as possible around the distance ℓ_opt that minimizes network costs.
In which regime are backbones useful?
We would now like to use our calculation results to analyze in which regime QKD backbones become economically interesting, i.e. under which conditions it is interesting to introduce some hierarchy and resource mutualization in QKD networks, in order to decrease the total deployment cost.
In the previous sections we have performed cost calculations that can be used to establish some quantitative comparisons between: • The cost of a QKD network with no hierarchy as in the generalized linear chain QKD network, whose cost calculations have been performed in section 3.4.
• The cost of a QKD network with one level of hierarchy, which is the case of the square backbone QKD network studied in section 3.6.1.
Since these two cost calculations have been performed under the same assumptions regarding user distribution and traffic demand, we can use the results given in equations (6) and (12) to compare the total network deployment costs, respectively for the generalized linear chain model and for a QKD network with a square backbone (for which we have seen that we could neglect the cost of the local access network).
The condition under which it will be more cost-effective to deploy a quantum backbone than to connect all pairs of users by one-dimensional chains of QKD links can be described by the following inequality between the respective optimal costs:

C^{opt,chain}_{2D,chain} ≥ C^{opt,square}_{2D,square} ⇔ [V C(ℓ_opt)/ℓ_opt + C_node/ℓ_opt] γ σ^2 L^5 ≥ (2/3) [C(α_bb^opt)/α_bb^opt] σ^2 L^5 V + C_node L^2/(α_bb^opt)^2.    (20)

The above equation is not very convenient to handle because, in general, α_bb^opt ≠ ℓ_opt. However, since ℓ_opt minimizes the chain cost, evaluating the chain cost at ℓ = α_bb^opt can only increase it, so that

C^{opt,chain}_{2D,chain} ≥ C^{opt,square}_{2D,square} ⇒ C^{α_bb^opt}_{2D,chain} ≥ C^{opt,square}_{2D,square}.    (21)

Thus, we can derive a necessary condition under which the deployment of a backbone for a QKD network is a better solution than a design that would solely rely on generalized linear chains of QKD links to transport the traffic:

C^{α_bb^opt}_{2D,chain} ≥ C^{opt,square}_{2D,square} ⇔ C_node (γ σ^2 L^3 α_bb^opt − 1) ≥ C(α_bb^opt) V σ^2 L^3 α_bb^opt γ (2/(3γ) − 1),    (22)

with σ* = 1/√(γ L^3 α_bb^opt).
Keeping in mind that 2/(3γ) − 1 is a positive number, we can use the last inequality to make the following observations:
• First, it appears that if the user density σ is smaller than σ*, which we can qualify as a critical user density, then equation (22) can never be satisfied. This means that below σ* it will never be interesting to deploy a backbone. This result has a clear interpretation: backbone infrastructures can only be interesting in the case where sharing resources offers a cost reduction, and the incentive to share a backbone infrastructure can only exist if there are enough users. The minimum total number of users required to have a cost incentive towards backbone deployment is σ* L^2 = √(L/(γ α_bb^opt)).
• In case σ is larger than the critical user density σ*, we enter a regime where there will be an incentive to deploy a quantum backbone essentially if the cost of a node C_node dominates over the cost of QKD link equipment to be deployed, which scales as C(α_bb^opt) V. This also has a clear interpretation: in the extreme case where the cost of building a node (and installing node equipment inside it) is zero, there will be no incentive to build a backbone, since it will always be cheaper to deploy direct chains between each pair of users. The motivation to build a backbone arises when the efforts associated with opening a QKD node are important. This will of course be the case if QKD node equipment is expensive, as we can see from equation (22), but it is also intuitive that, in case significant efforts are required to build new QKD nodes, mutualization of nodes through a backbone structure will be a cost-effective solution.
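The following sketch illustrates this trade-off numerically. It relies on the cost expressions as reconstructed above and on purely illustrative parameter values, including a guessed value γ ≈ 0.52 for the pairwise-distance constant (none of these numbers come from the text); it only serves to show how the chain and backbone costs cross as the user density grows.

```python
import math

LAMBDA = 19.7                  # km
R0, C_QKD, C_NODE, V = 1.0e6, 1.0e5, 5.0e5, 1.0e3
L = 500.0                      # km, side of the service area
GAMMA = 0.52                   # assumed pairwise-distance constant (not from the text)

def per_bit_cost(ell: float) -> float:
    return C_QKD / (R0 * math.exp(-ell / LAMBDA))

def chain_cost(sigma: float, ell: float) -> float:
    """2D network without backbone (chains between every user pair)."""
    return (V * per_bit_cost(ell) / ell + C_NODE / ell) * GAMMA * sigma**2 * L**5

def square_backbone_cost(sigma: float, a: float) -> float:
    """Square backbone of cell size a (local access cost neglected)."""
    return (2.0 / 3.0) * per_bit_cost(a) / a * sigma**2 * L**5 * V + C_NODE * (L / a) ** 2

if __name__ == "__main__":
    a_opt = LAMBDA                                # optimum backbone cell size ~ lambda_QKD
    for sigma in (1e-6, 1e-5, 1e-4, 1e-3):        # users per km^2
        ratio = square_backbone_cost(sigma, a_opt) / chain_cost(sigma, a_opt)
        marker = "backbone cheaper" if ratio < 1 else "chains cheaper"
        print(f"sigma = {sigma:.0e}: cost(backbone)/cost(chains) = {ratio:8.3f} -> {marker}")
```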
Conclusion and Perspectives
In this paper, we performed a topological analysis of quantum key distribution networks with trusted repeater nodes. In particular, under specific assumptions on the user and node distributions as well as the call traffic and routing in such networks, we derived cost functions for different network architectures. We first considered a linear chain network as a basic model that served the purpose of illustrating the main techniques and ideas that we used, and then moved on to more advanced network configurations that were in some cases enhanced with a backbone structure. Using cost minimization arguments, we obtained results on the optimal working points of QKD links and spatial distribution of QKD nodes, and examined the importance of introducing hierarchy into QKD networks.
Our results indicate that, in the context of QKD networks, it is more cost-effective and therefore advantageous to operate individual QKD links at their optimal working distance, which is in general significantly shorter than the maximum span of such links. This conclusion motivates research into new experimental compromises in practical QKD systems, and can be illustrated by considering examples of such systems where the characteristics of either a hardware component (for example a single-photon detector) or a software algorithm (for example a reconciliation code) can be experimentally manipulated as a function of distance [29].
In general, it is clear that, as the realization of more and more advanced QKD networks approaches the realm of actual deployment, it becomes necessary to orient the research on QKD devices and links towards cost-related directions, and extend the techniques we have presented here to more sophisticated network technologies and architectures.
Effects of $\rho$-meson width on pion distributions in heavy-ion collisions
The influence of the finite width of $\rho$ meson on the pion momentum distribution is studied quantitatively in the framework of the S-matrix approach combined with a blast-wave model to describe particle emissions from an expanding fireball. We find that the proper treatment of resonances which accounts for their production dynamics encoded in data for partial wave scattering amplitudes can substantially modify spectra of daughter particles originating in their two body decays. In particular, it results in an enhancement of the low-$p_T$ pions from the decays of $\rho$ mesons which improves the quantitative description of the pion spectra in heavy ion collisions obtained by the ALICE collaboration at the LHC energy.
It is well known that pions originating from decays of resonances have a steeper p_T-distribution than the thermal pions [4], and that they provide a dominant contribution to the spectrum at low transverse momentum. Thus, resonance decays require particular attention when modeling spectra of particles originating from an expanding thermal fireball.
In fluid-dynamical calculations, the interacting hadrons are usually described by the hadron resonance gas (HRG), where the system is modeled as a gas of free hadrons with resonances considered as particles with vanishing widths.
To properly address the dynamics of hadrons, the effect of resonance width must be included. A conventional way is to impose a Breit-Wigner distribution on the resonance mass. Unfortunately, this approach proves to be too crude in many circumstances. For example, for a broad resonance like the σ meson [23], or the (yet-to-be-confirmed) κ meson [24], the Breit-Wigner approach can give misleading results on the resonance contribution to the thermodynamics.
We thus take a more fundamental approach to evaluate the properties of interacting hadrons based on the S-matrix formulation of Dashen, Ma and Bernstein [25]. For elastic scatterings, the interaction part of the partition function reduces to the Beth-Uhlenbeck form for the second virial coefficient, expressed in terms of the scattering phase shifts [26]. In the context of heavy-ion physics, this approach has been applied to evaluate the contribution of πN [5,7,27], ππ [5,23], and πK interactions [5,24] to the thermodynamics of hadronic matter, and to analyse the resonance production [28].
In this letter, to make the effects of resonance width on particle p_T-spectra more tractable, we concentrate on the ππ system. As shown in Refs. [5,23], the effects of the scalar-isoscalar and the scalar-isotensor channels largely cancel each other. This cancellation remains when the single particle distribution of pions is evaluated. Thus for our purposes it is sufficient to consider only the vector-isovector channel, i.e. the channel of the ρ meson.
In the S-matrix formalism, the density of states per unit volume and unit invariant mass M, assuming thermal equilibrium at temperature T, is obtained by integrating the Bose-Einstein or Fermi-Dirac distribution f over momentum with an effective spectral weight B(M), derived from the scattering phase shift δ_IJ of the isospin-I and spin-J channel [5,7,26,28]. In the elastic region (M ≲ 1 GeV), the empirical phase shift [29-31] of the (I = 1, J = 1) channel can be effectively described by a phenomenological formula, inspired by a one-loop perturbative calculation of the ρ self-energy [32,33] and written in terms of the center-of-mass momentum of the scattering pions; its parameters α_0 = 3.08, m_0 = 0.77 GeV, and c = 0.59 GeV^{−2} are chosen to reproduce not only the phase-shift data but also the known value of the P-wave scattering length, to which the phase shift is related through its threshold behaviour. We constrain the scattering length to a_1^1 = 0.038 m_π^{−3}, matching the experimental value [34] and the chiral perturbation theory prediction of 0.037(10) m_π^{−3} [35,36]. This requirement is essential for the correct description of the near-threshold behaviour of the density function introduced in Eq. (2).
Figure 1. Left: p_T spectra of π^+ originating from decays of ρ, the πK (S- and P-wave) system, and the ∆(1232) channel of πN, using both the S-matrix treatment and the zero-width approximation at a temperature of T = 155 MeV. The contribution from ρ decays is also calculated using the relativistic Breit-Wigner description of the ρ. Right: Contributions to the pion density from various sources as a function of freeze-out temperature. In this calculation, the η and ω resonances have zero widths, and the S-matrix treatment has been applied to the ρ channel and to the processes indicated as "other": the ππ (S-wave) and πK (S- and P-wave) systems and the ∆(1232) channel of πN (see text). In both figures, solid and dashed lines correspond to results of the S-matrix approach and the conventional zero-width approximation, respectively.
An important feature of the current approach is the use of the effective spectral weight B(M ) instead of the standard spectral function. This effective weight includes contributions from both a pure ρ state and the correlated ππ pair. The latter tends to shift the strength of the weight function towards the low invariant-mass region [7]. Such a shift can potentially translate into an enhancement of the low-p T daughter pions from the decays of ρ mesons.
To quantify this expectation, we evaluate the distribution of ρ's using the Cooper-Frye description [37], with the thermal distribution augmented by the effective spectral weight B of Eq. (2): the spectrum is obtained from the freeze-out surface integral of dσ_µ p^µ f_ρ, weighted by B(M) and the spin degeneracy d_ρ, where f_ρ is the Bose-Einstein distribution for the ρ and u is the flow velocity. In the case of a static source, the integration over the surface, dσ_µ p^µ, becomes a simple multiplication by the volume of the system, V, and by the energy of the particle, E. The momentum spectrum of the decay pions can then be evaluated by applying the conventional decay kinematics [4,38,39] to the distribution of ρ's from Eq. (5).
We evaluate the p_T distributions at T = 155 MeV, in the vicinity of the pseudocritical temperature obtained in the lattice formulation of QCD [40,41]. In Fig. 1 (left) we show the rapidity- and azimuthal-angle-integrated transverse momentum spectra of π^+ originating from ρ decays. The ρ's are treated as zero-width particles, as particles with the standard Breit-Wigner width, or according to the S-matrix approach introduced in Eq. (6). The latter description leads to a substantial enhancement of the pion decay spectra. The effect is most prominent in the low-p_T region of the decay pions, where at p_T ≈ 0 one observes a factor-of-two increase of the differential pion yield. Note that at larger values of the transverse momentum the spectrum of decay pions is practically unaffected by the width of the ρ. For future reference, we also present results on decay π^+ spectra from the πK system (sum of S- and P-waves) and from the πN interaction in the ∆ channel. In all the channels studied we find an overall enhancement of low-p_T pions in the S-matrix approach compared to the zero-width results. Nevertheless, the difference is most noticeable in the ρ sector.
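The kinematic mechanism behind such an enhancement, namely the feeding of the ρ mass distribution into the daughter-pion momenta, can be illustrated with a simple Monte Carlo sketch. The following Python fragment is not the S-matrix calculation of this work: it only compares a zero-width ρ with a relativistic Breit-Wigner mass distribution for a static Boltzmann source at T = 155 MeV, with isotropic two-body decay; the mass window and event statistics are arbitrary choices of ours.

```python
import math
import random

random.seed(7)

T = 0.155         # freeze-out temperature [GeV]
M_PI = 0.13957    # charged pion mass [GeV]
M_RHO = 0.775     # rho pole mass [GeV]
G_RHO = 0.149     # rho width [GeV]

def sample_rho_mass(zero_width: bool) -> float:
    """Rho mass: fixed pole mass, or drawn from a relativistic Breit-Wigner."""
    if zero_width:
        return M_RHO
    while True:
        m = random.uniform(2.0 * M_PI, 1.5)
        w = m**2 * G_RHO**2 / ((m**2 - M_RHO**2) ** 2 + M_RHO**2 * G_RHO**2)
        if random.random() < w:
            return m

def sample_rho_momentum(m: float) -> float:
    """|p| of the rho drawn from a Boltzmann factor p^2 exp(-E/T) (static source)."""
    while True:
        p = sum(random.expovariate(1.0 / T) for _ in range(3))  # ~ p^2 exp(-p/T)
        e = math.sqrt(p * p + m * m)
        if random.random() < math.exp(-(e - p) / T):
            return p

def isotropic(mag: float):
    """Random 3-vector of given magnitude with an isotropic direction."""
    ct = random.uniform(-1.0, 1.0)
    st = math.sqrt(1.0 - ct * ct)
    phi = random.uniform(0.0, 2.0 * math.pi)
    return (mag * st * math.cos(phi), mag * st * math.sin(phi), mag * ct)

def decay_pion_pt(m: float, p_rho: float) -> float:
    """p_T (w.r.t. the z axis) of one daughter pion from rho -> pi pi."""
    p_vec = isotropic(p_rho)                       # rho momentum in the lab
    e_rho = math.sqrt(p_rho**2 + m**2)
    beta = tuple(c / e_rho for c in p_vec)
    gamma = e_rho / m
    q = math.sqrt(m * m / 4.0 - M_PI**2)           # pion momentum in the rho frame
    q_vec = isotropic(q)
    e_pi = math.sqrt(q * q + M_PI**2)
    bq = sum(b * x for b, x in zip(beta, q_vec))
    coef = gamma * gamma * bq / (gamma + 1.0) + gamma * e_pi
    px, py, _ = (x + coef * b for x, b in zip(q_vec, beta))
    return math.hypot(px, py)

def low_pt_fraction(zero_width: bool, n_events: int = 20_000) -> float:
    """Fraction of decay pions with p_T < 0.2 GeV."""
    low = 0
    for _ in range(n_events):
        m = sample_rho_mass(zero_width)
        if decay_pion_pt(m, sample_rho_momentum(m)) < 0.2:
            low += 1
    return low / n_events

if __name__ == "__main__":
    print("low-p_T fraction, zero width  :", low_pt_fraction(True))
    print("low-p_T fraction, Breit-Wigner:", low_pt_fraction(False))
```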
To illustrate the effect of leading resonances on the pion yield in the HRG, we show in Fig. 1-Right the temperature dependence of the contributions from various sources to the pion density after resonance decays. For this analysis, we have included the three-body decays of zero-width η and ω (with branching ratios of 0.228 and 0.893, respectively). Furthermore, we have applied the S-matrix treatment to the ππ (S-wave) and πK (S- and P-wave) systems and the ∆(1232)-channel of πN. At T = 155 MeV, when no heavier resonances are included, the relative abundance of π+ from ρ decay is 25.1%, while the thermal pion yield remains dominant at 49.4%. The three-body decays considered constitute 12.2%, and the sum of the remaining two-body channels we treated gives 13.3% of the total yield. The S-matrix treatment significantly affects the yield of pions from ρ decays, resulting in an increase by approximately 15%, whereas the effect is smaller for the other channels considered. However, because of the contribution from all the other sources, the overall change in the final pion yield due to the S-matrix approach is only a few per cent. In general, on the level of particle yields, and at higher temperatures T > 100 MeV, the zero-width treatment of resonances gives comparable results to the S-matrix approach [5], despite the fact that the phase shifts in most cases do not resemble a step function and the assumption of a zero (and at times even a narrow) width is strictly speaking not justified. However, as already seen in Fig. 1, essential differences can appear when p_T-differential observables of individual resonance channels are studied. Evidently, the more physical treatment provided by the S-matrix formulation is needed for precision calculations of particle spectra, e.g. in modeling data from heavy-ion collisions.
In a realistic heavy-ion collision, however, the situation is further complicated by the expansion of the system, and the presence of all the other resonances. To gauge whether the S-matrix description of ρ mesons would affect the pion distributions observed in heavy-ion collisions, we describe the system using a blast-wave model [42]. There, the thermal source is assumed to be a boost-invariant [43] cylindrically symmetric transversely expanding tube of radius R, from which particles are emitted at constant longitudinal proper time τ with the radial flow velocity v(r) = v_max (r/R).
We calculate the distributions of all the resonances in the Particle Data Book up to a mass of 2 GeV, apply the two- and three-body decay kinematics, and sum the contributions to the spectrum of thermal pions. We take advantage of the recent finding in dynamical model calculations of heavy-ion collisions that the pion p_T-distribution changes only very little during the subsequent evolution in the hadronic phase [44]. Thus, we fix the freeze-out temperature at T = 155 MeV, which coincides with the chiral crossover in LQCD. The further parameters of the blast-wave model, τ = 13.7 fm, R = 10 fm, and v_max = 0.8, were chosen to get the best description of the spectra of positive pions in 0-10% most central √s_NN = 2.76 TeV Pb+Pb collisions as measured by the ALICE collaboration. The above freeze-out temperature and the resulting volume of the fireball, V ≈ 4300 fm³, are consistent with those obtained previously in the HRG model description of hadron production yields and some fluctuation observables in heavy-ion collisions at the LHC [10,22].
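For orientation, a minimal sketch of the blast-wave parametrisation used here (boost-invariant, linear radial flow profile) is given below; it evaluates the standard Schnedermann-Sollfrank-Heinz spectrum with the parameter values quoted in the text (T = 155 MeV, R = 10 fm, v_max = 0.8), but it is only an illustration of the thermal-source term, without the resonance feed-down and absolute normalisation of the full calculation.

```python
import numpy as np
from scipy.special import i0, k1

T, R, v_max = 0.155, 10.0, 0.8          # GeV, fm, maximal transverse velocity
m_pi = 0.140                             # GeV

def blast_wave_spectrum(pT, m=m_pi, n_r=200):
    """dN/(pT dpT) up to an overall normalisation, Schnedermann-Sollfrank-Heinz form:
    integral over the transverse radius with a linear flow profile v(r) = v_max * r / R."""
    mT = np.sqrt(pT**2 + m**2)
    r = np.linspace(0.0, R, n_r)
    rho = np.arctanh(v_max * r / R)      # transverse flow rapidity
    integrand = r * mT * i0(pT * np.sinh(rho) / T) * k1(mT * np.cosh(rho) / T)
    return np.trapz(integrand, r)

for pT in np.linspace(0.05, 2.0, 8):
    print(f"pT = {pT:4.2f} GeV : dN/(pT dpT) ~ {blast_wave_spectrum(pT):.3e} (arb. units)")
```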
The resulting pion distribution is shown in the left panel of Fig. 2. In this calculation the conventional zero-width treatment of ρ's leads to a distribution which underestimates the data in the low-p_T region (p_T ≲ 200 MeV). When ρ mesons are treated according to the S-matrix description, there is a clear, up to 7%, increase of the low-p_T pions, which is sufficient to reach the data.
To check further the quality of the model parametrisation, we also show in the right panel of Fig. 2 the pion, proton and Ω baryon distributions in a broader m_T-range. As seen in this figure, the pion data are well described up to m_T ≈ 2 GeV, and the model predictions are also consistent with the data for the Ω distribution. These results verify the chosen values for temperature and volume, and they are also consistent with the idea that Ω baryons hardly rescatter in the hadronic phase [46,47], and thus their spectra are fixed at the phase boundary [47]. On the other hand, the proton distribution is steeper than the data, and the overall yield of protons is larger than the experimental value. The observed deviation in the proton yield has already been discussed in the literature [10]. The deviations in the proton spectrum could possibly be due to their further rescattering during the evolution in the hadronic phase [46,48].
In conclusion, we have investigated how the explicit treatment of the ρ-meson width affects the pion yield and p_T distribution in √s_NN = 2.76 TeV Pb+Pb collisions at the LHC. We have used the S-matrix approach to describe ρ mesons, and found that compared to the conventional zero-width treatment the pion yield increases, particularly at low values of transverse momentum. This indicates that the observed enhancement of low-p_T pions may possibly be explained in fluid-dynamical calculations by a proper implementation of the width of resonances within the S-matrix approach. However, the S-matrix treatment of ρ's alone may not be fully sufficient. A natural extension of this work is to apply a more complete model for the fluid-dynamical calculations [49][50][51][52][53], as well as to account for a possible medium modification of the phase shifts. Essential in-medium effects for ρ mesons are suggested by studies based on many-body Green's functions [33,54-58]. This, together with the S-matrix treatment of three-body decays, can presumably further increase the pion yields in the low-p_T region. We leave this as a matter of future investigation. Nevertheless, even at their present level, our results demonstrate the importance of the proper treatment of resonances in modeling heavy-ion collisions, and the need to improve on the customary hadron resonance gas models for precision calculations of particle spectra at low values of transverse momentum. These studies are also important in hydrodynamics-cascade hybrid models [51,59] for particle production in heavy-ion collisions when describing the particlization of the fluid as an input to hadronic transport.
"Physics"
] |
Iodine-catalyzed Transformation of Aryl-substituted Alcohols Under Solvent-free and Highly Concentrated Reaction Conditions
Iodine-catalyzed transformations of alcohols under solvent-free reaction conditions (SFRC) and under highly concentrated reaction conditions (HCRC) in the presence of various solvents were studied in order to gain insight into the behavior of the reaction intermediates under these conditions. Dimerization, dehydration and substitution were the three types of transformations observed with benzylic alcohols. Dimerization and substitution reactions were predominant in the case of primary and secondary alcohols, whereas dehydration prevailed in the case of tertiary alcohols. The relative reactivity of substituted 1-phenylethanols in I2-catalyzed dimerization under SFRC provided a good Hammett plot with ρ⁺ = −2.8 (r² = 0.98), suggesting the presence of electron-deficient intermediates with a certain degree of developed charge in the rate-determining step.
Introduction
Green chemistry is currently a popular topic in chemistry.[5-7] In the solid/solid system, a remarkable reaction rate enhancement was observed just by introducing small amounts of solvent vapor into the reaction mixture.8 Moreover, the course of the reaction can be dramatically influenced under highly concentrated reaction conditions (HCRC).9,24,25 In recent years, iodine26,27 has emerged as a remarkable catalyst exhibiting high water tolerance in diverse types of reactions. One of the beneficial properties of iodine is its high affinity towards molecular oxygen as well as towards functional groups bearing at least one oxygen atom.28,29,32,33 This protocol has already been applied under various conditions,34-36 all indicating participation of reaction intermediates having a partial positive charge. Recently, important mechanistic studies on iodine-catalyzed reactions in solution have been published,37,38 but the behavior of iodine under SFRC and HCRC remains largely unexplored.39 The above reasons prompted us to investigate the reactivity of several model substrates in iodine-catalyzed transformations of alcohols under SFRC and HCRC. The alcohol substrates were selected to study different electronic effects and geometry. The role of the potentially present heteroatom and of the antiaromaticity of intermediates will be validated on sterically hindered dibenzo-substituted alcohols. The stereochemistry and regioselectivity of the dehydration reactions and the role of the reaction medium polarity (protic vs. aprotic), nucleophilicity and pKa under HCRC will be examined as well.
General Procedure for Iodine-catalyzed Transformation of Alcohols Under SFRC
The procedure is the same for solid and liquid substrates. Alcohol (1 mmol) and iodine (3 mol%) were mixed together in a 5 mL conical reactor and the reaction mixture was stirred at 25 °C or 55 °C for various times (5 min to 192 h); progress was monitored by TLC or by 1H NMR spectroscopy. The crude reaction mixture was diluted with tert-butyl methyl ether, washed with an aqueous solution of Na2S2O3 and water, dried over Na2SO4, and the solvent was evaporated under reduced pressure. The crude reaction mixture was subjected to column chromatography or preparative TLC using hexane or petroleum ether/tert-butyl methyl ether mixtures, and pure product(s) were obtained. Conversions were determined by 1H NMR spectroscopy. The effects of the reaction variables on the type of transformation and on conversions are stated in the Tables and Figures. In order to illustrate the role of the reaction variables and of the substrate structure, data at reaction times with lower conversion are also presented in some Tables. In the experimental section, the best reaction conditions are given; 3 mol% of I2 was used, giving the highest yield.
3. General Procedure for Iodine-catalyzed Transformation of Alcohols Under HCRC
The procedure is the same for solid and liquid substrates. To a mixture of alcohol (1 mmol) and various amounts of solvent (CH2Cl2, MeOH, EtOH, i-PrOH, TFE, HFIP, HCOOH, AcOH and H2O; 3-300 mmol), iodine (3 mol%) was added in a 5 mL conical reactor and the reaction mixture was stirred at 25 °C for five minutes to 360 hours. The isolation and purification procedure was the same as described above. Conversions were determined by 1H NMR spectroscopy. Results are presented in Tables, Figures and Schemes. The isolation procedure is given for the best yield.
4. Determination of the Hammett Reaction Constant ρ⁺ for the I2-catalyzed Dimerization of 1-Phenylethanols
1-Phenylethanols (0.5 mmol) (1a, 1o, 1p, 1q, 1r, 1s) were separately placed in conical reactors, and the transformation was induced by iodine (0.015 mmol, 3.8 mg, 3 mol%) at 55 °C. Alcohols 1o and 1p were stirred for two hours, and alcohols 1q, 1r and 1s were stirred for three hours. The transformation was stopped by cooling, the reaction mixture was analyzed by 1H NMR spectroscopy, and relative rate constants were calculated from the equation k_rel = log[A/(A − X)] / log[B/(B − Y)],59 derived from the Ingold-Shaw relation,60 where A and B are the amounts of starting material and X and Y the amounts of products derived from them. The relative rate factors thus obtained, collected in Figure 1, are the averages of at least two measurements with good reproducibility; the deviation of k_rel was within ±3%. The reaction of the reference substrate 1-phenylethanol 1a was quenched separately after 2 h and 3 h and the relative rate constants were obtained by means of 1H NMR spectroscopy utilising 1,1-diphenylethene as an internal standard.
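As an illustration of this evaluation, the sketch below applies the competition form of the Ingold-Shaw relation with placeholder amounts; the numbers are not the measured NMR integrals, only stand-ins showing how A, B, X and Y enter.

```python
import math

def k_rel(A0, X, B0, Y):
    """Relative rate constant from the Ingold-Shaw competition relation:
    k_rel = log[A0 / (A0 - X)] / log[B0 / (B0 - Y)],
    where A0, B0 are starting amounts and X, Y the amounts of products formed."""
    return math.log(A0 / (A0 - X)) / math.log(B0 / (B0 - Y))

# placeholder amounts (mmol) for a substituted alcohol vs. the reference 1-phenylethanol 1a
A0, X = 0.50, 0.30   # substituted substrate and its dimeric ether formed
B0, Y = 0.50, 0.10   # reference 1a and its dimeric ether formed
print(f"k_rel = {k_rel(A0, X, B0, Y):.2f}")
```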
Results and Discussion
Most of the published I2-catalyzed transformations of alcohols were conducted in relatively dilute solution, in a concentration range of 23 mol to 157 mol of solvent per mol of alcohol.61,62 In contrast, we examined the behavior of substituted benzylic alcohols in a highly concentrated reaction medium, which contained only 3 mol of solvent per mol of the reactant. Variously substituted secondary and tertiary benzylic alcohols 1a-l, possessing different structural features, were selected as substrates (Scheme 1). Groups Ar, R1 and R2 bearing electron-releasing substituents should enhance the stability of electron-deficient intermediates, whereas a β-halogen atom R2 in 1f and 1g should increase the acidity of the methylene protons. The results of the iodine-catalyzed transformation under SFRC and HCRC with alcohols 1a-l are summarized in Table 1, indicating that tertiary alcohols underwent dehydration, while secondary alcohols predominantly dimerized into ether derivatives 2. The aggregate state of the alcohols played an important role; reaction mixtures in the case of solid alcohols 1 became pasty, which proved to be essential for the reaction progress. On the other hand, liquid alcohols were less challenging in terms of their aggregate state. 1-Phenylethanol 1a yielded exclusively ether 2a under both types of reaction conditions (entries 1 and 2, A = SFRC, B = HCRC-CH2Cl2). Introduction of an additional phenyl group at C-2 (1b) increased the reactivity, affording dimer 2b as the major product (81%) and trans-stilbene (19%) as the sole dehydration product (entry 3). The enhanced reactivity of 1b could be ascribed to the stabilizing effect of the additional phenyl group. A remarkable decrease in reactivity was observed for the fluoro-substituted analogues of 1a: 1-(2,3,4,5,6-pentafluorophenyl)ethanol and 1-phenyl-2,2,2-trifluoroethanol remained intact after three days at 85 °C under SFRC. Tertiary alcohols 1c, 1d and 1e only underwent dehydration into alkene 3 (Table 1, entries 5-10). A mixture of (Z)- and (E)-1,2-diphenylpropene was formed from 1c, with the latter being the major product. The transformation of 1c was accompanied by the formation of 2,3-diphenyl-1-propene (entries 5 and 6). 1c remained intact without iodine under conditions A and B, signifying the role of iodine. The role of the acidity of the hydroxyl group and of the C-2 hydrogen atom of the C-2 halogenated tertiary alcohols 1f and 1g in the I2-catalyzed transformation was examined (Table 1, entries 11-14).
Reactivity was diminished in both cases, but fluoro derivative 1g (entry 13) reacted considerably more sluggishly than the bromo analogue 1f (entry 11), suggesting the ease of the proton removal from C-2 not being the most crucial in the process of dehydration, but the electron-accepting properties of the halomethyl group on the stability of the electron-deficient reaction intermediates.The introduction of methoxy group to the aromatic ring did not alter the reaction pathway in the case of 1-(4-methoxyphenyl)ethanol 1h, but dimerization was remarkably faster than with 1-phenylethanol 1a (entries 15 and 16).A similar enhancement of reactivity, induced by methoxy group, was observed also in the case of 1-(4-methoxyphenyl)-2-phenylethanol 1i; dimerization was the major process, the proportion of the dehydration grew to 27% (Table 1, entry 17) when compared with 1b (entry 3) under SFRC.Interestingly, 1i gave dimeric ether 2i as the sole product under HCRC (entry 18).Tertiary alcohols 1j and 1k underwent dehydration, while position of the methoxy group (p-MeO vs. o-MeO) did not play a substantial role on the type of transformation and reaction rate (entries 19-22).The substantially lower reactivity of triaryl-substituted alcohol 1l could be ascribed to the fact that substrate has the highest melting point and the lowest solubility among alcohols 1, and the reaction mixture required a longer time to become pasty under SFRC (31% conversion, entry 23).In spite of the poor solubility of 1l in CH 2 Cl 2 , enhanced molecular migration was achieved, reflecting in a considerably higher degree of conversion under HCRC (entry 24).In general, the reactivity of the secondary and the tertiary alcohols differs drastically regardless on the conditions SFRC or HCRC.The secondary alcohols have a strong tendency of dimerization into ethers, while the tertiary alcohols underwent dehydration into alkenes.Such a clear-cut might be surprising; however, it could be somehow anticipated. 31One of the reasons for smooth dehydration of the tertiary-, in comparison with the secondary alcohols, might be the formation of the thermodynamically more stable alkenes.In addition, dimerization of the sterically hindered tertiary alcohols is disfavored.It could be concluded that substantially higher reactivity of methoxy-substituted alcohols (Table 1, entries 15-24) is likely a consequence of stabilization of the intermediates involved -the electron deficient species.Furthermore, we examined the role of geometry, ring size and heteroatom on I 2 -catalyzed transformation of the dibenzo-substituted tertiary alcohols 4 under HCRC-CH 2 Cl 2 ; the results are collected in Table 2.
Fluorene derivative 4a yielded 9-ethylidenefluorene 5a (Table 2, entries 1 and 2), but the phenyl-substituted derivative 4b was not reactive under these conditions (entries 3 and 4). The low reactivity of 4a and 4b might be associated with the geometry and the formation of the potentially anti-aromatic fluorenyl carbocation.31,63,64 Dibenzosuberan derivatives 4c and 4d are not planar and were considerably more reactive than 4a; an additional phenyl group did not enhance the reactivity of 4d. The substitution of a CH2CH2 group in the dibenzosuberan derivative with an O-atom decreased the reactivity of 9-ethyl-xanthen-9-ol 4e, while the phenyl derivative 4f was more reactive than 4e. The results showed the importance of the geometry and structure of the electron-deficient intermediates in dehydration reactions under HCRC.
Further, we examined the role of nucleophilic, protic solvents (possessing different acidity, ionizing power, hydrophobicity, and solubility of 1 and I 2 ) on iodine-catalyzed transformations of secondary benzylic alcohols under HCRC (Table 3).
Three reaction pathways were operative: dimerization, dehydration and substitution. The important role of the added solvent on the type of transformation was demonstrated with 1,2-diphenylethanol 1b (entries 1-4). No reaction took place in MeOH; in the presence of HCOOH, only substitution occurred, yielding 7ba (entry 3); in contrast, in the presence of H2O, dimerization was the dominant process (entry 4). Substrate 1b is considerably hydrophobic and does not possess a strong electron-donating group, which is reflected in its relatively low reactivity. Introduction of a methoxy group at the para position of the phenyl ring remarkably enhanced the reactivity and selectivity of the reaction of 1-(4-methoxyphenyl)-2-phenylethanol 1i under HCRC (entries 5-8); contrary to 1b, in the case of MeOH, the methyl ether 6ia was obtained. Another surprising difference was established in the presence of HCOOH, where dimerization was the main process (entry 7), while the transformation without I2 furnished a mixture of 2i and 7ia in the reversed ratio (23/77). The contrasting result suggests that iodine activated 1i. The reactivity of 1i (entries 5-8) suggests that iodine-activated 1i dimerized predominantly in the absence of good nucleophiles (entries 5, 7 and 8). In the presence of MeOH, the methoxy ether 6ia was the major product (entry 6), while a small extent of dehydration was observed only in the cases of the 1,2-diaryl-substituted alcohols. 1-(4-Methoxyphenyl)ethanol 1h, the least hydrophobic and least sterically hindered in this series, was the most reactive (entries 9-12), but with altered selectivity. In the presence of CH2Cl2 and H2O, dimerization took place (entries 9 and 12), while substitution was the main process in the presence of MeOH and HCOOH, giving 6ha and 7ha, respectively (entries 10 and 11). 1h was esterified with HCOOH under SFRC without iodine,32 thus signifying the role of pKa, and iodine has little influence on the reaction of 1h with HCOOH (entry 11). The reactivity pattern of 1h is similar to that of 1i: substitution predominantly took place in the presence of relatively good nucleophiles, whereas dimerization prevailed in their absence.
The alcohols and alkenes substituted with electron rich-aromatic groups might be sensitive to polymerization and are known to undergo different types of transformation.Indeed, 4-methoxybenzyl alcohol 1m proved to be the right target in this regard (Table 4); under SFRC, dimerization giving 2m was the main process (entry 1), ipso-substitution also took place, however polymerization completely prevailed after 200 minutes producing tar material only.Similar product distribution was obtained under HCRC-CH 2 Cl 2 (entry 2).No other alkylation of the aromatic ring was noted.
The third reaction channel was substitution; it occurred in the presence of MeOH giving 6ma, and a small proportion of dimer 2m was also formed, but no polymerization was noted, even after 190 hours (entry 3).Dimerization was the main process in the absence of a nucleophile, and ipso-substitution appeared as minor, but additional reaction channel.Considering that ipso-substitution is often related with cationic intermediates, 65 it could be assumed that formation of 8 is another suggestion of involvement of electron-deficient intermediates.
Next, we studied the transformation of sterically hindered and hydrophobic tertiary alcohol with the electron-rich aromatic ring, 1-phenyl-2-(4-methoxyphenyl)-2-propanol 1n in the presence of a catalytic amount of iodine (Table 5).
1n is a substrate of choice because it possesses an activated aromatic ring for good reactivity and could form a reasonably stable potential intermediate whose fate can be studied under SFRC and HCRC. The reaction mixture after 30 minutes at room temperature under SFRC contained at least three products. The major product was easily identified as the Zaitsev-type product, (E)-2-(4-methoxyphenyl)-1-phenyl-1-propene 3na (entry 1). However, the two other products had very similar physicochemical properties, reflected in almost identical retention factors; the molecular mass of 448 indicated that dimerization had occurred. The structures of these two alkenes were elucidated on the basis of 1D and 2D NMR spectra and identified as (E)-1,5-diphenyl-4-methyl-2,4-bis(4-methoxyphenyl)pent-1-ene 9a and its (Z)-isomer 9b. The explanation of the formation of these two alkenes is presented in Table 5. The results suggest that iodine likely induced the formation of a tertiary electron-deficient intermediate or related species, probably similar to intermediate A; its subsequent dehydration predominantly led to the mixture of (Z)- and (E)-2-(4-methoxyphenyl)-1-phenyl-1-propene 3na and 3na', while the Hofmann-type dehydration furnished 2-(4-methoxyphenyl)-3-phenyl-1-propene 3nb. However, the latter alkene 3nb was not stable under the studied conditions and further attacked the primarily formed species A, resulting in a cationic-like intermediate B or a related species; removal of the benzylic proton then furnished the isomeric alkenes 9a and 9b. Continuing, we examined the effect of 3 mmol of CH2Cl2 on the transformation of 1 mmol of 1n. The added solvent had no significant impact on the type of transformation (entry 2). Alkene 3nb was isolated and treated in an independent experiment with 3 mol% of I2 in dichloromethane until the full consumption of 3nb. Alkenes 3na, 3na', 9a and 9b were formed in this process, potentially via A'. A could furnish 3na and 3na', or it could add to the remaining 3nb, producing 9a and 9b. In contrast, an independent transformation of a mixture of the isolated alkenes 3na and 3na' with 3 mol% of I2 failed, since 3na and 3na' remained intact. Alkenes 3na and 3na' are thermodynamically more stable than 3nb and were not activated by iodine. In contrast, a larger amount (30 mmol) of CH2Cl2 suppressed the addition of the species A to alkene 3nb and favored the formation of the Zaitsev-type product 3na (entry 3). Interestingly, 1n remained unreacted in a highly diluted solution of 300 mmol of dichloromethane (entry 4). It is obvious that the vicinity of the reacting species is of prime importance, demonstrating the crucial role of the concentration. In the presence of 3 mmol of MeOH, dehydration and substitution processes were observed, giving alkenes 3na, 3na' and 3nb and the methoxy ether 6na (entry 5). Methanol blocked the addition of A to alkene 3nb, and the selectivity Zaitsev vs.
Hofmann decreased (entry 5) in comparison with the entries 1-3.Turnover in transformation occurred in the presence of a 10-fold higher amount of MeOH, and only ether 6na was obtained (entry 6); no reaction took place in the presence of 300 mmol of methanol (entry 7).In the presence of 3 mmol of ethanol (entry 8), the same reaction pathways were observed as in the presence of methanol (entry 5).In the presence of i-PrOH and (CF 3 ) 2 CHOH (HFIP), no substitution occurred (entries 9 and 10); the Zaitsev alkene was more favored than in EtOH (entry 8), and finally reached 72% in the presence of HFIP, where alkene 3nb was further transformed to 9a and 9b (entry 10).It could be concluded that dehydration and further reaction of the formed intermediates took place under SFRC and HCRC in the presence of a non-nucleophilic solvent (CH 2 Cl 2 ) and HFIP.The latter solvent is known to stabilize the carbocationic intermediates, 66,67 and this could be an indication that our intermediates may be similar.The competition between dehydration and substitution took place under HCRC (3 mmol of alcohol, entries 5 and 8) in a nucleophilic solvent (MeOH and EtOH), while in the presence of 30 mmol of methanol, substitution took place exclusively.It is noteworthy to say that certain processes take place under SFRC and HCRC, but not under classical diluted conditions in a solution -the formation of 9a and 9b is such an example.4-methoxybenzyl alcohol 1m proved to be very reactive substrate under the studied conditions, and it tended to yield insoluble, probably polymerized products after prolonged reaction time.Consequently, we decided to explore the reactivity of exceedingly acid sensitive 9H-xanthene-9-ol 10 in the presence of catalytic amount of I 2 (Table 6).
Alcohol 10 was found to be very reactive; the reaction under SFRC was accomplished in 15 minutes at room temperature in spite of a solid reactant and catalyst.To our surprise, disproportionation took place giving the product 11a and 11b as the only products (entry 1).Similar observation could be made in detritylation of ethers using I 2 in methanol. 68We published a detailed iodine-catalyzed disproportionation of ethers under SFRC. 39Disproportionation took place also in the presence of dichloromethane under HCRC, and it was even faster probably due to the higher migration of the reactants (entry 2).Transformation of 10 under HCRC in the presence of MeOH yielded the related methoxy ether 11c, a very acid sensitive compound, too (entry 3).
In order to obtain information about the role of geometry (cyclic 9-xanthhydrol vs. acyclic diphenyl methanol) and substituents, the transformation of diphenyl methanol and bis(4-methoxyphenyl)methanol was studied under HCRC-CH 2 Cl 2 .Only dimerization took place, and no disproportionation was noted.Dimerization of bis(4-methoxyphenyl)methanol to bis[bis(4-methoxyphenyl)methyl] ether occured in five minutes, while bis(diphenylmethyl) ether was obtained in 71% yield after two days at room temperature.In MeOH, substitution took place, and bis(4-methoxyphenyl)methyl methyl ether was formed in 77% yield.Diphenyl methanol yielded the corresponding methyl ether as the main product, and a small amount of bis(diphenylmethyl)ether.Bis(pentafluorophenyl)methanol was found inert in the I 2 -catalyzed reaction; no conversion was noted after two days at 85 °C under SFRC.It can be concluded that reactivity is essentially dependent on the structure and geometry of the alcohol; the electron-accepting groups tend to disfavor the transformation.
4-Methoxyphenyl-substituted alcohols 1m and 1h were proved very sensitive to the reaction conditions; for that reason, we investigated the role of pK a of alcohols added under HCRC, Table 7.
Dimerization of 4-methoxybenzyl alcohol 1m was the main process in the absence of a good nucleophile (HCRC-CH2Cl2), while ipso-substitution took place as well (Table 4, entry 2). The addition of alcohols extensively retarded the transformation of 1m (Table 7, entries 1-3); the proportion of substitution decreases with growing steric hindrance and decreasing nucleophilicity of the solvent. A noteworthy modulation of the reactivity was noted in the case of the more acidic and weakly nucleophilic 2,2,2-trifluoroethanol (TFE) and HFIP. Starting 1m displayed a strong tendency towards polymerization in the latter two alcohols, and after too long a reaction time only tar material was isolated. The reaction time was consequently limited to one hour, and dimerization and ipso-substitution were the only processes (entries 4 and 5). Both alcohols are poor nucleophiles, and no substitution took place.
An additional methyl group contributed to the substantially higher reactivity of 1-(4-methoxyphenyl)ethanol 1h in comparison with 1m; dimerization and substitution became the only reaction channels. Dimerization was the exclusive transformation in the absence of a good nucleophile (HCRC-CH2Cl2) (Table 3, entry 9). In the presence of MeOH, EtOH and i-PrOH, substitution and dimerization took place (Table 7, entries 7, 9 and 11), exhibiting a reactivity pattern similar to that of 1m. It is evident that the dimeric ether 2 is a kinetically controlled product (entries 6-11), and iodine could catalyze its transetherification.69 Transformations of 1h were faster in the presence of the fluorinated solvents; in the case of the better nucleophile TFE, substitution almost completely prevailed (entries 12 and 13), while in the presence of HFIP dimerization was the only process (entry 14). It could be concluded that the reactivity patterns of 1m and 1h in the presence of iodine under SFRC and HCRC are similar. Dimerization of both alcohols is the key process in the absence of a good nucleophile, while substitution took place predominantly in the presence of a good nucleophile. It is evident from Table 7 that dimerization is followed by transetherification, and we decided to further investigate this rather unexplored process (Table 8).
Functionalization of 2h in the presence of methanol under HCRC yielded the corresponding methyl ether 6ha (97%) and 3% of the alcohol 1h (entry 1).This is an indication that relation between 1h and 2h is reversible.The conversion roughly corresponds with the nucleophilicity of the alcohols (entries 1-3); in the case of the most sterically hindered and least nucleophilic i-PrOH the lowest conversion was achieved.A surprising turning point was observed in CF 3 CH 2 OH (entry 4).Although considerably more acidic and worse nucleophile than ethanol, the highest conversion was achieved in CF 3 CH 2 OH.The result reflects the much stronger stabilization of the reaction intermediates in comparison with the simple alkyl alcohols.Transformation of 2h in the presence of acetic acid yielded the corresponding acetate ester 6he (entry 5), demonstrating the carboxylic acids are suitable nucleophiles in this reaction.Products 6 were considerably more stable than 2h, and remained intact in the presence of iodine.
The Hammett correlation70,71 is a convenient tool for estimating the nature of the reaction intermediates and the type of bond cleavage and, in the case of ionic intermediates, the degree of charge developed. It is usually determined under homogeneous conditions in dilute solution; nevertheless, we decided to examine the relative reactivity of the substituted 1-phenylethanols in I2-catalyzed dimerization under SFRC (Figure 1). The SFRC conditions are challenging, and therefore the Hammett correlation has rarely been studied under them.72 The relative reactivity of 1-phenylethanol 1a and its substituted 4-F 1o, 3-Me 1p, 3-MeO 1q, 4-Cl 1r and 4-Br 1s derivatives was studied at 55 °C; all the alcohols are liquid at the given temperature. In all cases, dimeric ethers 2a and 2o-s were formed and a good Hammett correlation (r² = 0.98) was obtained utilizing σ⁺ substituent constants. The slope ρ⁺ = −2.8 suggests a transition state involving electron-deficient intermediates with a partially developed charge in the rate-determining step. A similar value of ρ = −2.76 was obtained in the I2-catalyzed dihydroperoxidation of benzaldehydes in acetonitrile at 22 °C.73 It can be summarized that iodine has the remarkable feature of generating species that would normally require the use of a strong acid.
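A minimal sketch of such a Hammett analysis is shown below: a least-squares fit of log k_rel against σ⁺ returns the reaction constant ρ⁺ and the correlation coefficient. The σ⁺ constants are approximate literature values for the substituents studied, and the k_rel values are placeholders rather than the measured data of Figure 1.

```python
import numpy as np

# approximate sigma+ substituent constants (literature values; H is the reference)
sigma_plus = {"H": 0.00, "4-F": -0.07, "3-Me": -0.07, "3-MeO": 0.05,
              "4-Cl": 0.11, "4-Br": 0.15}

# placeholder relative rates vs. 1-phenylethanol 1a (NOT the measured values)
k_rel = {"H": 1.00, "4-F": 1.5, "3-Me": 1.6, "3-MeO": 0.7, "4-Cl": 0.5, "4-Br": 0.4}

x = np.array([sigma_plus[s] for s in k_rel])
y = np.log10(np.array([k_rel[s] for s in k_rel]))

rho_plus, intercept = np.polyfit(x, y, 1)     # slope of log k_rel vs sigma+
r2 = np.corrcoef(x, y)[0, 1] ** 2
print(f"rho+ = {rho_plus:.2f}, r^2 = {r2:.2f}")
```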
In order to demonstrate the role of iodine, reactivity of different catalytic systems were examined on an exceedingly acid-sensitive substrate 9H-xanthene-9-ol 10, giving 11a and 11b smoothly, 74 Table 9.
Entries 1 and 2 were added from Table 6 for easier comparison. Transformation of 10 in the presence of phosphomolybdic acid hydrate under SFRC was much less effective in comparison with the I2-catalyzed reaction (entry 3), while the reaction in the presence of methanol yielded the methoxy ether 11c with 84% selectivity (entry 4). Expectedly, disproportionation of 10 in the presence of a 57% aqueous solution of HI was the only process75 (entry 5), whereas in the presence of methanol 80% of 11c was formed (entry 6); similar reactivity was thus displayed in the presence of the heteropoly acid and of HI (entries 3-6). In the reactions in entries 3 and 10 an unidentified product appeared, seemingly a dimeric ether of 10. It is often speculated, though not experimentally proven, that in-situ formed HI is the actual catalyst in iodine-catalyzed transformations.77 A potential formation of HI would probably result in a loss of the reaction selectivity (compare entries 2 and 6).
The results indicate that iodine was the active catalyst, where complexation changed the reaction pathway considerably.Additionally, iodine was titrated with a standard solution of Na 2 S 2 O 3 after the end of the disproportionation of 10.The entire amount of iodine was present at the end of the reaction.Similar observation was made in the case of dimerization of a secondary alcohol and substitution reaction with methanol, strongly indicating iodine as the active catalyst in these reactions.A tentative explanation of the reaction pathways is presented on Scheme 2. The driving force in all cases is presumably polarization of the reactants by iodine.We proposed such halogen bond 78 activation in disproportionation of ethers under SFRC 39 , which is in agreement with recent computational 37 and experimental studies. 38A simultaneous TS-1 or two separated activation processes TS-2, including carbenium ion TS-3 could be proposed as the key steps in the dimerization process.In the absence of a better nucleophile, the starting alcohol took over a role of an attacking nucleophile, affording the dimer 2. The dehydration process of the tertiary alcohols might be initiated by polarization of the starting alcohol as shown in TS-4.The substitution step is suggested as a concomitant activation TS-5 or a divided activation TS-6 or by carbenium ion TS-7.
In the presence of added stronger nucleophile, substitution products 6 and 7 substantially prevailed over the dimerization products 2.
Conclusions
To summarize, we have studied iodine-catalyzed transformations of aryl-substituted alcohols under SFRC and under HCRC, the concentration was proved to have an exceptional impact on the transformation.Achieving a pasty aggregate state of solid substrates in the presence of I 2 was of vital importance for the reaction progress.Primary and secondary alcohols underwent two main transformations, depending on the reaction conditions.Dimerization took place in absence of the good nucleophiles under SFRC and HCRC, while substitution prevailed in presence of the good nucleophiles.The tertiary alcohols exhibited a strong tendency of dehydration into alkenes, which is in sharp contrast with the reactivity of primary and second-ary alcohols.The difference in thermodynamic stability of the alkenes, derived from the tertiary and the secondary alcohols, is supposedly a driving force for the observed selectivity.Substitution was another process observed in the presence of the hydroxylic solvents; their acidity, nucleophilicity and hydrophobicity were important parameters for studying the reactivity of those alcohols.4-Methoxyphenyl-substituted alcohols possessed higher reactivity than phenyl analogues; their pentafluorophenyl counterparts were unreactive under the studied conditions.The results indicated the electron-deficient intermediates to be likely involved in these processes, the geometries of the molecule and heteroatom share an important part in reactivity.4-Methoxybenzyl alcohol yielded its dimeric ether and bis(4-methoxyphenyl)methane, a product derived via the I 2 -catalyzed ipso-substitution.4-Methoxybenzyl alcohol exhibited higher reactivity in TFE and HFIP than in EtOH and in i-PrOH under HCRC, thus indicating stronger stabilization of the reaction intermediates in the fluorinated alcohols.A tertiary benzylic alcohol 1n was demonstrated to possess a special reactivity.It appears that upon its dehydration all three possible alkenes were obtained.The thermodynamically less stable alkene unexpectedly reacted with the initially formed intermediate, furnishing two dimeric alkenes.It is worth mentioning that certain processes take place under SFRC and HCRC, but not under the classical diluted conditions.This is an indication that reacting species have to be in close vicinity.Iodine catalyzed the disproportionation of 9H-xanthene-9-ol 10 under SFRC and HCRC, and in contrast, the substitution took place in the presence of MeOH.Iodine is a convenient catalyst for transetherification under mild conditions, it has a potential for interconversion of ether to ester.The Hammett correlation analysis of the I 2 -catalyzed dimerization of substituted 1-phenylethanols under SFRC (T = 55 °C) furnished straight-line ρ + = −2.8(r 2 = 0.98).This fact strongly suggests the involvement of the electron-deficient intermediates with a certain degree of the developed charge in the transition state.
a SFRC: 1 mmol of 1m and 0.03 mmol of I 2 , HCRC: 1 mmol of 1m, 3 mmol of solvent and 0.03 mmol of I 2 stirred at 25 °C.b Conversion and product distribution determined by 1 H NMR spectroscopy.c Dimerization vs. Ipso substitution vs. Substitution.
Table 6 .
The effect of the reaction conditions on the iodine-catalyzed transformation of 10. a 1 mmol of 10, 0.03 mmol of I2, T = 25 °C. b 1 mmol of 10, 3 mmol of solvent, 0.03 mmol of I2, T = 25 °C. c Conversion and product distribution determined by 1H NMR spectroscopy.
Table 1 .
The effect of the alcohol structure 1 and reaction conditions (SFRC vs. HCRC) on the type of iodine-induced transformation a A: SFRC; 1 mmol of 1 and 0.03 mmol of I 2 ; B: HCRC; 1 mmol of 1, 3 mmol of CH 2 Cl 2 and 0.03 mmol of I 2 .b Conversion and product distribution determined by 1 H NMR spectroscopy.c 4% of (Z)-isomer relatively to (E)-alkene, traces of the Hofmann alkene.d 5% of (Z)-isomer relatively to (E)-alkene, Zaitsev vs. Hofmann = 85/15.1c remained intact without iodine under conditions A and B. e Reaction temperature was 55 °C, in all other cases was 25 °C.
Table 2 .
The effects of geometry, ring size and substituents on the iodine-catalyzed dehydration of tertiary alcohols 4 a Reaction conditions: 1 mmol of 4, 3 mmol of CH 2 Cl 2 and 0.03 mmol of I 2 stirred at 25 °C.b Determined by 1 H NMR spectroscopy.
Table 3 .
The effect of hydroxy-substituted solvent on the iodine-catalyzed transformations of alcohols 1 under HCRC a 1 mmol of 1, 3 mmol of solvent and 0.03 mmol of I 2 stirred at 25 °C.b Conversion and product distribution determined by 1 H NMR spectroscopy.c Dimerization vs. Dehydration vs. Substitution.d A ratio 2/7 without I 2 was 23/77.
Table 4 .
The effect of the reaction conditions on the iodine-catalyzed transformation of 4-methoxybenzyl alcohol
Table 5 .
The effect of the reaction conditions on the iodine-catalyzed transformations of 1n a SFRC; 1 mmol of 1n and 0.03 mmol of I 2 , HCRC; 1 mmol of 1n, 3, 30 or 300 mmol of solvent and 0.03 mmol of I 2 stirred at 25 °C for 30 min.b Conversion determined by 1 H NMR spectroscopy.c Data refer to the sum of 3na and 3na', with (E)/(Z) = 95/5 (entries 1, 5, 8-10) and (E)/(Z) = 90/10 in entries 2 and 3. d Methoxy ether 6na in the case of MeOH and ethoxy ether 6nb in the case of EtOH were formed.
Table 7 .
The effect of the HCRC on the transformation of 1m and 1h 3 mmol of R 2 OH, 0.03 mmol of I 2 , T = 25 °C.b Conversion and product distribution determined by 1 H NMR spectroscopy.
Table 8 .
The effect of the hydroxy-substituted solvent on the conversion of 2h under HCRC a Reaction conditions: 1 mmol of 2h, 3 mmol ROH, 0.03 mmol of I 2 , T = 25 °C, R. t. (reaction time).b Conversion and product distribution determined by 1 H NMR spectroscopy.c The product is 1-(4-methoxyphenyl)ethyl acetate 6he.
product (entry 9). Additional complexation of I2 with Bu4NI almost completely suppressed disproportionation, suggesting that the formation of triiodide was a key factor (entry 10). Reaction of 10 with I2/Bu4NI in the presence of methanol yielded the methoxy ether 11c as the sole product, while no disproportionation took place (entry 11).
"Chemistry"
] |
Good Timing Matters: The Spatially Fractionated High Dose Rate Boost Should Come First
Simple Summary The administration of X-rays with therapeutic intent (radiotherapy) can cause severe unwanted adverse effects in tissues other than those that were the intended radiation target, such as tissues located in the path of the beam or close to the target region. The results of small animal studies suggest that the risk for adverse effects may be significantly reduced if the X-ray dose is administered extremely fast, as the so-called high dose rate radiotherapy. Microbeam irradiation and pencilbeam irradiation are two new experimental concepts of high dose rate radiotherapy with spatial dose fractionation at the micrometre range. The results of our studies show how the inclusion of these concepts into a conventional broad beam radiotherapy schedule could improve cancer radiotherapy for patients with malignant brain tumours. Abstract Monoplanar microbeam irradiation (MBI) and pencilbeam irradiation (PBI) are two new concepts of high dose rate radiotherapy, combined with spatial dose fractionation at the micrometre range. In a small animal model, we have explored the concept of integrating MBI or PBI as a simultaneously integrated boost (SIB), either at the beginning or at the end of a conventional, low-dose rate schedule of 5x4 Gy broad beam (BB) whole brain radiotherapy (WBRT). MBI was administered as array of 50 µm wide, quasi-parallel microbeams. For PBI, the target was covered with an array of 50 µm × 50 µm pencilbeams. In both techniques, the centre-to-centre distance was 400 µm. To assure that the entire brain received a dose of at least 4 Gy in all irradiated animals, the peak doses were calculated based on the daily BB fraction to approximate the valley dose. The results of our study have shown that the sequence of the BB irradiation fractions and the microbeam SIB is important to limit the risk of acute adverse effects, including epileptic seizures and death. The microbeam SIB should be integrated early rather than late in the irradiation schedule.
Introduction
High dose rate radiotherapy is attracting increasing attention in the field of experimental radiotherapy. Observations that X-rays delivered at dose rates of ≥40 Gy/s cause only minimal adverse effects in the normal tissue environment and in organs at risk have been made in small animal models. This phenomenon, termed the FLASH effect [1,2], was observed in both brain [3][4][5] and lung tissue [1,6]. Data suggest that there is a differential response between the tumour and normal tissue and that ultra-fast dose deposition causes less inflammatory reaction in normal tissue than a comparable dose deposited at conventional dose rates [7,8]. In clinical broad beam radiotherapy, typical dose rates are 6-20 Gy/min. The FLASH effects have been achieved by working with electrons at modified clinical linear accelerators (LINACSs) [9,10] and with photons at synchrotron facilities [11].
FLASH radiotherapy usually designates a broad beam (BB) irradiation technique utilizing dose rates ≥ 40 Gy/s [1]. Taking high dose rate radiotherapy even a step further, two irradiation concepts are under development in which FLASH dose rates are combined with spatial dose fractionation at the micrometre range. Both monoplanar microbeam irradiation (MBI) and pencilbeam irradiation (PBI) have been developed at synchrotron beamlines dedicated to biomedical research. While the first manuscript highlighting the therapeutic potential of MBI was already published in 1998 [12], PBI has been explored primarily for its tissue-sparing effects [13,14].
In both monoplanar MBI and PBI, the normal brain tissue tolerance appears to be remarkably high [13,15] and the memory function appears to be well preserved [16,17]. Both MBI and PBI are characterized by an inhomogeneous dose distribution with periodically alternating high dose (peak dose) and low dose (valley dose) zones in the targeted tissue. The valley dose was defined as the dose between the paths of the microbeams, at a width of 350 µm where the individual width of each microbeam is 50 µm and the centre-to-centre spacing is 400 µm. In PBI, a much smaller tissue volume is directly traversed by the microbeams, compared to MBI with both equal microbeam width and centre-to-centre spacing [13]. In normal tissue, very few, if any cell bodies survive in the paths of the microbeams delivered at doses of several hundred Gy. Assuming comparable valley doses, PBI could be an approach to minimize the morphological damage and result in a better preservation of tissue function, at the same rate of tumour cell destruction as that seen with monoplanar MBI.
Similar to the broad beam FLASH, MBI affects the tumour tissue differently than normal tissue [18,19]. When focused on a macroscopic tumour, one single fraction of MBI, alone or included in a conventional radiotherapy schedule, can control the tumour much better than conventional radiotherapy alone [20,21]. When two fractions in a conventional radiotherapy schedule were replaced by MBI and the valley dose was equal to the conventional single fraction dose (orthovolt range) in a model of young adult Fisher rats bearing a highly malignant brain tumour (F98 glioma), a significantly increased recurrence-free survival interval and a significantly longer overall survival were achieved, compared to animals treated with conventional broad beam radiotherapy alone [21]. In that study, the peak dose acted as simultaneously integrated boost (SIB).
The inclusion of spatially fractionated high dose rate radiotherapy into a low dose rate radiotherapy schedule could be a suitable approach to increase the tumour response at clinically acceptable normal toxicity levels for patients with multiple brain metastases or multifocal glioblastoma multiforme. Following up on an earlier study where a monoplanar MBI SIB was included in a conventional, low dose rate whole brain radiotherapy (WBRT) protocol, we have now designed a study to also explore PBI as SIB in an otherwise low dose rate BB WBRT protocol. In a small animal model, we compared high grade acute normal tissue toxicity, specifically the occurrence of epileptic seizures and death, between animals irradiated in an exclusively low dose rate irradiation protocol, animals with a microbeam SIB included at either the beginning or the end of the low dose rate irradiation protocol and a control group of non-irradiated animals. The aim of this study was to assess the relevance of the timing for the microbeam SIB. Assuming that the tumour cell destruction in the irradiation target is caused by both direct damage as result of the ionizing irradiation (enhanced in the paths of the microbeams) and the bystander effects, an accompanying in vitro study using F98 glioma cells was conducted to assess the tumoricidal potential in irradiation schedules similar to those used in the small animal study.
Technical Setup Broad Beam Irradiation (Low Dose Rate)
Broad beam (BB) irradiation at a conventional low dose rate was conducted at ambient temperature (22.7 °C) using an X-ray generator (Philips, Amsterdam, The Netherlands) located at the biomedical beamline ID 17 of the European Synchrotron Radiation Facility (ESRF) in France. The X-ray generator, working in the kV (orthovoltage) range, was operated with a 0.2 mm copper filter at an energy of 200 keV. Dose rates between 0.9245 and 0.942 Gy/min were measured in a water phantom at 1 cm depth. The duration to deliver the target dose of 4.0 Gy was between 4.26 and 4.32 min.
Technical Setup Synchrotron Irradiation (High Dose Rate)
The high dose rate irradiation experiments were conducted at the biomedical beamline ID 17 of the ESRF. The incident photon beam at ID 17 was modified by a wiggler set to its minimum gap of 24.8 mm, to benefit from the maximum available photon flux and passed through a set of Cu and Al filters. The spectrum used for the microbeam studies at this beamline is typically 50-350 keV, with a maximum intensity at approximately 105 keV [22].
Both the monoplanar microbeam irradiation (MBI) and pencilbeam irradiation (PBI) are microbeam irradiation techniques. As a basis for both, an array of quasi-parallel microbeams with an individual beam width of 50 µm spaced at a centre-to-centre distance of 400 µm was generated by inserting a fixed-space tungsten multislit collimator (UNT, Morbier, France) with an individual microbeam width of 50 µm, spaced at a 400 µm centreto-centre distance into the incident beam [23]. In the irradiation target, this produces an inhomogeneous dose distribution characterized by a repetitive pattern of high (peak) dose zones and low (valley) dose zones. At the synchrotron, other than in clinical radiotherapy, the position of the synchrotron beam is fixed. To cover irradiation fields that are larger than the incident beam, the irradiation target needs to be moved through the beam. Since the maximum achievable synchrotron beam at the irradiation position was only a few millimetres high, the sample was moved vertically through the beam to cover the target. The dose deposition was regulated by modifying the beam height and the speed of the vertical movement through the synchrotron beam, while the multislit collimator was in a fixed position. A fast shutter system [24], positioned upstream from the multislit collimator and synchronized with the vertical translation of the goniometer, allowed a precise selection of the irradiation field and the pre-calculated speed under consideration of the decreasing machine current.
For the monoplanar MBI, the irradiation target was moved vertically through the beam in a continuous movement at constant speed (19 mm/s). For PBI, in addition to the multislit collimator, the target was moved stepwise through a set of three 1200 µm high horizontal slits to generate a grid of 50 µm × 50 µm pencilbeams spaced laterally and vertically at a centre-to-centre distance of 400 µm (Figure 1).
A high peak-to-valley dose ratio (PVDR) is essential for a good normal tissue preservation. Thus, a fast dose deposition is required, in order to preserve a steep dose decrease at the microbeam edges and limit the dose blurring at the beam edges through a physiologic movement, such as a heartbeat or breathing. The dose rate in our study was measured by a semiflex ion chamber (PTW, Freiburg, Germany), scanning vertically through a 2 × 2 cm field at 2 cm depth in solid water, at a speed of 100 mm/s. At machine storage ring currents between 152 mA and 198 mA, dose rates of approximately 70 Gy/s/mA were achieved at the irradiation position.
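Since the target is scanned vertically through a fixed beam, the dose at a given point scales with the measured dose rate and the time that point spends inside the beam, i.e., the beam height divided by the scan speed. The sketch below illustrates this relation; the formula D = Ḋ·h/v is our own approximation rather than the beamline's dose-delivery algorithm, the beam height is an assumed value, and the dose rate per mA and ring current are taken from the ranges quoted above.

```python
# Illustrative only: relates scan speed, beam height and delivered dose for a
# vertically scanned synchrotron beam. The beam height and current are assumptions.

dose_rate_per_mA = 70.0     # Gy/s/mA, as measured at the irradiation position
ring_current = 180.0        # mA, assumed machine current during the exposure
beam_height_mm = 0.5        # mm, assumed height of the incident beam slice

dose_rate = dose_rate_per_mA * ring_current          # Gy/s at the target

def scan_speed_for_dose(peak_dose_gy):
    """Speed (mm/s) such that each point sees the beam for t = beam_height / speed
    and accumulates peak_dose = dose_rate * t."""
    return dose_rate * beam_height_mm / peak_dose_gy

print(f"peak dose 174 Gy -> scan speed {scan_speed_for_dose(174.0):.1f} mm/s")
```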
Figure 1. Scaled schematic of the dose distribution in a given target region seen in an upstream-to-downstream direction, shown for the monoplanar MBI with 50 µm wide microbeams (a) and PBI generated with the same multislit collimator used for the monoplanar MBI and additional horizontal fractionation, resulting in a grid of 50 µm × 50 µm pencilbeams (b). The centre-to-centre distance is 400 µm for both MBI and PBI.
Dose Calculation and Simulation
The dose distribution in the tissue and in the cell layer of the in vitro experiment (equivalent to 1 cm and 1 mm depth in water) was calculated using Monte Carlo simulations in the toolkit GEANT4 (version 10.4.2). The Livermore low energy physics libraries were used for these simulations, and the range cut-offs for the electrons and photons were set to 1 µm. The simulations were performed in their semi-adjoint form [25] and the source model was adapted from [26]. The field sizes were 8.5 × 18 mm² for the small animal study (mouse) and 38 × 38 mm² for the in vitro experiment. The microbeams hit the water phantom of 40 mm thickness and the energy was scored at a mesh size of 1 × 1 × 0.005 mm (MBI) and 1 × 0.01 × 0.01 mm (PBI), with the highest resolution in the direction of the spatial fractionation. A total number of 10⁹ photons were simulated following the ESRF preclinical spectrum [22]. The collimator leakage with a harder X-ray spectrum was also taken into account [26]. Figure 2 shows the simulated microbeam dose profile for the in vitro exposures in the MBI and PBI techniques.
In the centre of the monoplanar MBI irradiation field, the maximum peak dose in the in vitro and in vivo experiments was 174 Gy. The maximum valley dose was 3.5 Gy and 4.4 Gy in the in vitro and in vivo exposures, respectively. In the centre of the PBI irradiation field, the maximum peak dose was 1500 Gy and 1980 Gy in the in vitro and in vivo experiments, respectively. The respective average valley dose, in the centre between four adjacent beams, was 4.3 and 4.7 Gy. The valley doses varied substantially across the radiation field and were around 30% lower at the field edges.
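The peak and valley doses quoted above are read off the simulated lateral dose profiles. The short sketch below shows one minimal way to extract them, and the resulting PVDR, from a one-dimensional profile sampled across the microbeam array; the synthetic profile and the masking fractions are illustrative assumptions, not the actual GEANT4 output or the authors' analysis.

```python
import numpy as np

# synthetic lateral dose profile across the microbeam array (stand-in for the MC output)
pitch_um, beam_um = 400.0, 50.0
x = np.arange(0.0, 4 * pitch_um, 1.0)                     # 1 um sampling
in_peak = (x % pitch_um) < beam_um
profile = np.where(in_peak, 174.0, 4.4)                   # Gy, values taken from the text
profile = profile + np.random.normal(0.0, 0.05, x.size)   # small noise, illustration only

def peak_valley(profile, x, pitch, beam):
    """Peak dose = mean over the central part of the beams;
    valley dose = mean over the central part of the inter-beam regions."""
    phase = x % pitch
    peak_mask = phase < 0.8 * beam                         # stay away from the penumbra
    gap = pitch - beam
    valley_mask = (phase > beam + 0.25 * gap) & (phase < pitch - 0.25 * gap)
    return profile[peak_mask].mean(), profile[valley_mask].mean()

peak, valley = peak_valley(profile, x, pitch_um, beam_um)
print(f"peak = {peak:.1f} Gy, valley = {valley:.1f} Gy, PVDR = {peak / valley:.1f}")
```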
For the comparison between the irradiation techniques resulting in a highly inhomogeneous X-ray dose distribution, such as monoplanar microbeam irradiation and PBI, and the BB exposures, the concept of the equivalent uniform dose (EUD) was used, as originally defined in [27]. The EUD is the homogeneous dose that leads to the same cellular survival as an inhomogeneous dose distribution, assuming that the cells react independently of each other to the local dose they receive, according to the linear quadratic model (LQM). The LQM parameters α and β were assumed to be 0.1 Gy⁻¹ and 0.05 Gy⁻² [28][29][30], and the EUD was retrieved by equating the survival after a homogeneous dose to the mean survival over the inhomogeneous dose distribution. For the in vivo MBI study, the EUD at 1 cm depth (approximately the position of the brain) was 4.7 Gy, and in the in vitro MBI exposures it was 6.0 Gy. The EQD2 of the entire course of the fractionated treatment was 30 Gy for the BB-only schedule (5 × 4 Gy), 32 Gy for the in vivo MBI SIB + BB schedule (4 × 4 Gy + 4.7 Gy) and 36 Gy for the in vitro MBI SIB + BB schedule (4 × 4 Gy + 6 Gy).
For PBI, the EUD was 6.7 Gy in vitro and 7.2 Gy in vivo. The EQD2 of the entire fractionation schedule was 38.6 Gy in vitro and 40.6 Gy in vivo.
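The EUD equation itself is not reproduced in the text above. The minimal sketch below, assuming the standard LQM-based definition of the EUD and the linear-quadratic EQD2 conversion with the stated parameters (α = 0.1 Gy⁻¹, β = 0.05 Gy⁻², hence α/β = 2 Gy), shows how such values can be computed; the voxel doses used here are illustrative placeholders, not the study's dose maps.

```python
import numpy as np

# Illustrative sketch, assuming the standard LQM-based EUD definition [27]:
# exp(-a*EUD - b*EUD^2) = mean_i exp(-a*D_i - b*D_i^2) over equal-volume voxels.
a, b = 0.1, 0.05                       # alpha [1/Gy], beta [1/Gy^2], as assumed in the text

def eud(voxel_doses_gy):
    mean_survival = np.mean(np.exp(-a * voxel_doses_gy - b * voxel_doses_gy**2))
    # Solve b*x^2 + a*x + ln(S) = 0 for the positive root x = EUD.
    return (-a + np.sqrt(a**2 - 4 * b * np.log(mean_survival))) / (2 * b)

def eqd2(fraction_doses_gy):
    ab = a / b                         # alpha/beta = 2 Gy
    d = np.asarray(fraction_doses_gy, dtype=float)
    return float(np.sum(d * (d + ab) / (2 + ab)))

# Placeholder voxel doses: mostly valley dose with a small peak-dose fraction
# (roughly the 50 um / 400 um geometric peak fraction).
doses = np.array([4.0] * 875 + [174.0] * 125)
print(f"EUD ≈ {eud(doses):.1f} Gy")

# EQD2 of the fractionated schedules quoted in the text (alpha/beta = 2 Gy):
print(f"BB only (5 x 4 Gy):      EQD2 = {eqd2([4, 4, 4, 4, 4]):.0f} Gy")   # 30 Gy
print(f"4 x 4 Gy + 6 Gy MBI SIB: EQD2 = {eqd2([4, 4, 4, 4, 6]):.0f} Gy")   # 36 Gy
```

With these parameters, the BB-only schedule of 5 × 4 Gy yields an EQD2 of 30 Gy, matching the value quoted above.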
Small Animal Study
The experiments were conducted at the biomedical beamline ID17 of the ESRF in France (permit number 14ethax210 of the ESRF Ethics Committee, ETHAX 113, authorisation 28 May 2015).
Sixty young adult C57BL/6J mice (Charles River, France) were used for this study. The animals were housed and cared for in a temperature-regulated animal facility with a 12-h light/dark cycle. For all irradiation procedures, the animals were under general anaesthesia, induced by inhalation of 1.5-2% isoflurane in compressed air and upheld by an intraperitoneal injection of a ketamine and xylazine cocktail (ketamine 1 mg/10 g, xylazine 0.1 mg/10 g). To assure a reproducible position, the anaesthetized mice were placed on a special positioning device in the prone position, with their front teeth hooked around a fixed wire. The animals were distributed into six experimental groups (n = 10/group, Table 1). The animals in groups 3-6 received four fractions of 4 Gy WBRT and, in addition, a WBRT microbeam SIB in either the monoplanar MBI or the PBI mode. SIB concepts are frequently used in clinical radiotherapy to increase the biological efficacy and to shorten the overall treatment time, thereby improving the patient's quality of life. The peak doses were calculated based on the daily BB fraction dose of 4 Gy serving as the valley dose. Thus, all animals, except those of the control group, received 5 × 4 Gy delivered to the entire brain.
Groups 3 and 4 received a WBRT SIB in the uniaxial MBI technique, either at the beginning (Group 3) or at the end (Group 4) of the irradiation schedule.
Groups 5 and 6 received a WBRT SIB in the PBI technique, either at the beginning (Group 5) or at the end (Group 6) of the irradiation schedule.
The mice were positioned prone on top of a 3-axis Kappa-type goniometer (Huber, Rimsting, Germany), with three Prosilica cameras (Allied Vision Technologies GmbH, Stadtroda, Germany) supporting the reproducible positioning of each animal. The microbeam irradiation of the entire skull was performed by a vertical translation of the mouse through the beam.
The conventional, low dose-rate irradiation in the broad beam technique was delivered from above, in the dorsal-to-ventral direction. The microbeam irradiation in the MBI and PBI techniques was conducted in the right-to-left lateral direction.
To assure that the entire brain was inside the irradiation target and to spare other tissue as much as possible, a 2D X-ray image was obtained prior to the high dose rate SIB, after which the target position was corrected, if necessary. The animals were sacrificed at 48 h and 7 days after the administration of the last irradiation fraction. The brains were carefully extracted from the skull, fixed in 10% phosphate-buffered formalin for 24 h and then stored in 1x PBS for later processing.
In Vitro Model
To assess the tumouricidal potential of the tested irradiation schedules in glioma cells, we conducted an in vitro study using the commercially available F98 glioma cell line (CRL-2397, ATCC, USA, rodent origin). Due to this cell line's characteristics, such as a high proliferation rate and invasive growth into normal brain structures, F98 glioma cells are frequently used to simulate the malignant human brain tumour glioblastoma multiforme [31]. F98 glioma cells are considered highly radioresistant and are therefore well suited to assess the therapeutic potential of new radiotherapy techniques. The cell line is well established in our laboratory for both in vitro and in vivo studies, allowing in vitro experiments to be followed up with an in vivo study. The cells were cultivated in growth medium containing DMEM (31966-21, Gibco), 10% fetal bovine serum and a 1% penicillin/streptomycin mixture, and harvested after aspirating the growth medium and incubating for approximately 20 min in a calcium- and magnesium-free medium in a standard incubator.
The exponentially growing F98 cells were split into groups to match the irradiation conditions of the in vivo study.
Analysis of the Experimental Data
Cell proliferation: The F98 glioma cells were seeded in 30 mm diameter Petri dishes three days before the first irradiation, harvested and counted at 12 and 72 h after the last irradiation, using a hemocytometer. The cell numbers were plotted in the logarithmic mode using GraphPad Prism software.
Bystander effects in the tumour cell cultures: For this study, the growth medium of the irradiated cells was collected 12 h after irradiation and added to non-irradiated glioma cell cultures. The working hypothesis was that the proliferation of tumour cells that were not directly irradiated might nevertheless be decreased when exposed to growth medium that had been in contact with the irradiated cells. Prior to the irradiation, all growth medium was aspirated, leaving only a thin fluid film on the cultures during the irradiation. Immediately following the irradiation, fresh (non-irradiated) growth medium was added to all cultures. This medium was collected 12 h later. Then, 1 mL of this medium, which had been exposed to the irradiated cells, was added to the non-irradiated cell cultures already submerged in fresh growth medium. In other words, the medium exposed to the irradiated cells was added to the naïve cells on top of, not instead of, the fresh growth medium. The cells were harvested and counted 72 h after adding the medium exposed to the irradiated cells (bystander medium).
Clonogenic assay: Twenty-four hours prior to the first day of irradiation, 200 F98 glioma cells were seeded into T25 culture flasks, taking care to achieve a homogeneous distribution of the single cells across the bottom of each flask. Thus, each viable cell could generate its own colony. These samples were submitted to the same irradiation schedules as described for the in vivo study. Seven days after the last irradiation, the colonies were fixed with a 10% buffered formaldehyde solution and stained with 1% cresyl violet. Each colony with a size of 50 cells or more was counted, assuming that each colony had arisen from one single glioma cell. The data were analyzed using the unpaired t-test (GraphPad Prism software).
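As an illustration of the standard clonogenic-assay bookkeeping (plating efficiency, surviving fraction) and the unpaired t-test mentioned above, a minimal sketch could look as follows; the colony counts are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Illustrative clonogenic-assay analysis; replicate colony counts are placeholders.
cells_seeded = 200

control_colonies    = np.array([92, 88, 95])   # hypothetical non-irradiated flasks
irradiated_colonies = np.array([21, 17, 24])   # hypothetical irradiated flasks

plating_efficiency = control_colonies.mean() / cells_seeded
surviving_fraction = irradiated_colonies / (cells_seeded * plating_efficiency)

# Unpaired t-test, as used for the colony counts in the text.
t_stat, p_value = stats.ttest_ind(control_colonies, irradiated_colonies)

print(f"plating efficiency = {plating_efficiency:.2f}")
print(f"mean surviving fraction = {surviving_fraction.mean():.2f}")
print(f"unpaired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```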
Immunohistochemistry: The formalin-fixed and paraffin-embedded brains were sectioned at 5.0 µm thickness and mounted on microscope slides (SuperFrost® Plus, R. Langenbrinck, Germany) for gamma H2AX immunostaining, as described previously [14]. Briefly, the tissue sections were deparaffinized and rehydrated by passing them through a series of alcohol and xylene washes, followed by vapour-based heat epitope retrieval in a citrate solution at pH 6 (Target Retrieval Solution, Dako, Germany) at a temperature of 95 °C for 40 min. The tissue sections were then blocked with 100 µL of 1× PBS, 5% goat serum and 0.3% Triton X-100 buffer for 60 min at room temperature, followed by incubation with the gamma H2AX antibody (Abcam 22551, Cambridge, UK) as the primary antibody at a dilution of 1:100 for 1 h at room temperature. Finally, the tissue sections were incubated with an Alexa Fluor 488 secondary antibody at a dilution of 1:200 (Thermo Fisher Scientific, Waltham, MA, USA) and DAPI for 1 h at room temperature in the dark. Following thorough rinsing with PBS, the slides were cover-slipped with Dako Fluorescent Mounting Medium (Dako North America Inc., Carpinteria, CA, USA) and microphotographs were obtained using a fluorescence microscope (BZ-X, Keyence Deutschland GmbH, Neu-Isenburg, Germany) with a camera and computer link. For the immunofluorescence of the gamma H2AX stain, the excitation wavelength was 488 nm, with emission detected at 544 nm. The immunostaining utilizes antibodies against histone H2AX phosphorylated at serine 139, a modification which occurs after DNA double-strand breaks, such as those developing after irradiation. We have shown previously that the gamma H2AX antibody with a DAPI nuclear counterstain is reliable for the assessment of the DNA damage after MRT [14].
Statistical Analysis
A non-parametric one-way ANOVA test (GraphPad Prism 6, GraphPad Software, Inc., La Jolla, CA, USA) was used to assess the statistical significance of the data in the in vitro study (cell numbers and colony counts).
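The text does not name the specific non-parametric test; assuming a Kruskal–Wallis test (the usual non-parametric counterpart of a one-way ANOVA), an equivalent analysis could be scripted as follows, with placeholder group values.

```python
from scipy import stats

# Hypothetical cell counts (x 10^4) for three groups; placeholders only,
# not the study's data.
low_dose_rate = [42, 39, 45, 41]
mbi_sib       = [28, 31, 26, 30]
pbi_sib       = [27, 29, 25, 31]

# Kruskal-Wallis H-test, used here as an assumed stand-in for the
# "non-parametric one-way ANOVA" performed in GraphPad Prism.
h_stat, p_value = stats.kruskal(low_dose_rate, mbi_sib, pbi_sib)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```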
Results
The results of this study suggest a peak dose-dependent normal tissue toxicity for microbeam radiotherapy. Furthermore, this toxicity is dependent on the sequence of low dose rate BB and high dose rate microbeam irradiation fractions.
Health Status, Neurologic Signs and Acute Adverse Effects In Vivo
In the animals receiving either 5 × 4 Gy low dose rate irradiation only or low dose rate irradiation combined with an either early or late SIB of monoplanar MBI with peak doses of 174 Gy, only one adverse effect occurred: starting with the third day, the animals required more anaesthetic to be reliably positioned during irradiation. No signs of increased brain pressure, such as circling or inactivity, were observed.
In the animals receiving a PBI SIB with peak doses of 1980 Gy at the end of the low dose rate radiotherapy schedule, four out of 10 animals died within 2 h after irradiation. Two of the animals were observed to have a generalized epileptic seizure and stopped breathing afterwards. All animals had woken up from anaesthesia, walked around their cage and started eating beforehand, so an anaesthesia mishap can be excluded. No high-grade acute adverse effects were observed in the animals which received the PBI SIB at the beginning of the low dose radiotherapy schedule.
The valley dose was approximately 4 Gy in both the monoplanar MBI and in the PBI schedules, in order to have all animals receive 4 Gy as a WBRT dose on all treatment days, with the microbeam peak doses acting as SIB. Therefore, the PBI peak dose must have been detrimental. Even higher PBI peak doses had been administered in the same mouse strain and with the same microbeam width and spacing in WBRT before without causing acute adverse effects; however, this had been carried out on healthy, not-pre-irradiated mice [13]. The fact that no adverse effects were seen in the animals which received the PBI SIB on the first irradiation day agrees with those findings. In the pre-irradiated animals, however, the low dose rate irradiation had already caused vascular damage, resulting in cerebral edema. In this setting, the radiosurgical high dose PBI beams caused an acute increase of vascular damage, followed by increased intracranial pressure sufficient to cause generalized seizures and subsequent death.
The beam geometry is reflected in the microphotographs of the immunostained brain tissue (Figure 3).
Tumour Cell Destruction In Vitro
Similar to the results of the in vivo study, the results of the in vitro study also suggest that a spatially fractionated high dose rate SIB should be included into a low dose-rate schedule early rather than late. The cell proliferation assay conducted with cells receiving the high dose rate SIB on the last day of irradiation shows that cell death occurs continuously during the first 72 h after irradiation (Figure 4). While the number of non-irradiated control cells increases in this period, the number of irradiated cells decreases. While there is no significant difference between both of the high dose rate irradiation techniques, the difference between the low dose rate irradiation only and the low dose rate irradiation plus the high dose rate SIB is significant at 72 h after irradiation (p < 0.0001).
Figure 4. Cell counts after the late microbeam SIB. A significantly increased cell destruction, compared to the non-irradiated controls, was seen in all irradiated groups. The differences in cell destruction were also statistically significant between the low dose rate and the high dose rate irradiation groups, but not between the two high dose rate irradiation groups.
The addition of the bystander medium (exposed for 12 h to the irradiated cells; conditioned medium) to the non-irradiated tumour cell cultures again shows a significant difference between the low dose rate and the high dose rate techniques, but none between the individual high dose rate techniques (Figure 5).

Figure 5. Secondary cell counts (with conditioned medium). A single fraction of high dose rate irradiation, as well as the high dose rate SIBs, is significantly more effective than low dose rates only. There was no significant difference between the high dose rate techniques.
Based on the data shown and considering a significantly higher risk of death associated with PBI, a monoplanar MRT SIB seems to be the preferable one of the two tested spatially fractionated high dose rate irradiation techniques for inclusion as SIB into a low dose rate radiotherapy schedule.
The colony formation assay shows the same trend for the late microbeam SIB. However, for the early SIB, it shows a trend towards a more pronounced tumour cell inhibition after PBI, compared to the monoplanar MBI (Figure 6). In the samples from the two groups in which the high dose rate SIBs were administered early in the radiotherapy schedule, highly significant differences are seen between each of the three irradiation schedules (low dose rate only, MBI SIB + low dose rate, and PBI SIB + low dose rate).

Figure 6. Colony counts after low dose rate irradiation with and without SIB. Only a small additional decrease in the number of colonies was seen after the late SIB, but a highly significant additional decrease was achieved with an early spatially fractionated high dose rate SIB, compared to the low dose rate irradiation alone. The difference between the monoplanar MRT SIB and the PBI SIB was also statistically significant. The cells were irradiated directly in the flasks. Error bars represent SEM. Asterisks highlight statistically significant differences compared to the other two groups in the early SIB experiment.
Discussion
High dose rate irradiation techniques with photons are almost exclusively developed at synchrotron facilities. Thus, access is currently limited by the competition for experimental time. However, efforts to construct synchrotron-independent compact sources to produce the necessary photon flux are under way [32,33]. At this stage, high dose rate radiotherapy promises an extremely good preservation of the normal tissue function for human patients, even at single fraction doses far higher than those typically used in conventional radiotherapy. While BB FLASH radiotherapy can technically be delivered in unlimited numbers of subsequent fractions, spatially dose-fractionated techniques such as MBI and PBI are limited to a single fraction: repositioning with the micrometre precision required for irradiation on subsequent days is technically impossible in human patients, at least with currently available techniques. As a consequence, an exact dose prescription in the target zone would be possible only for one single MBI or PBI SIB fraction.
SIB concepts have gained popularity in conventional radiotherapy because they increase the biologically effective dose and thus improve tumour control [34,35]. They also shorten the radiotherapy schedule, which improves the quality of life for the patient. An improvement of tumour control is desirable both for patients with multiple brain metastases and for patients with multifocal glioblastoma multiforme. The interval to tumour recurrence after a course of conventional radiotherapy is, on average, less than a year for both tumour entities. While WBRT is accepted as a therapeutic concept for patients with multiple brain metastases, it is rejected on the grounds of a high risk of neurological adverse effects in patients with malignant primary brain tumours. In a typical 14 × 2.5 Gy course of WBRT for patients with multiple brain metastases, the BED would be lower than in a 13 × 3 Gy course for glioblastoma extended to the entire brain (43.75 vs. 50.7 Gy). However, with increasing survival times of patients with multiple brain metastases, due to improved systemic therapy, the risk for neurological deficits as late adverse effects also increases. High dose rate radiotherapy generally preserves the normal tissue function far better than low dose rate radiotherapy [1]. Further, high dose rate irradiation destroys tumour cells equally well or even better than low dose rate irradiation. Therefore, a high dose rate SIB with a spatially fractionated irradiation technique may also yield improved tumour control with a limited risk of neurological deficits.
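As a quick check of the BED comparison above, assuming the conventional tumour α/β ratio of 10 Gy (not stated explicitly in the text), both values follow directly from BED = n·d·(1 + d/(α/β)):

```python
# BED check for the two WBRT schedules quoted above; alpha/beta = 10 Gy is an
# assumption consistent with the quoted numbers.
def bed(n_fractions, dose_per_fraction_gy, alpha_beta_gy=10.0):
    return n_fractions * dose_per_fraction_gy * (1 + dose_per_fraction_gy / alpha_beta_gy)

print(bed(14, 2.5))   # 43.75 Gy (14 x 2.5 Gy WBRT for brain metastases)
print(bed(13, 3.0))   # 50.7 Gy  (13 x 3 Gy course for glioblastoma)
```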
The comparison of X-ray doses delivered in a microbeam geometry with conventional broad beam doses is challenging and extremely complex. Characteristic parameters of microbeam irradiation, such as the high peak dose, dose delivery in one single irradiation session and the inhomogeneous dose distribution with periodically alternating high dose (peak dose) and low dose (valley dose) zones in the target tissue, are hard to match with the parameters of a uniform, seamless broad beam irradiation administered in a temporally fractionated series of normofractionated low dose rate radiotherapy. To solve this problem, a larger amount of quantitative biological data should be generated in preclinical microbeam studies and compared to the biological responses seen with matching clinically fractionated irradiation schedules, both testing the same biological system. We are not aware of publications listing detailed results of such experiments.
FLASH radiotherapy with electrons at a modified clinical LINAC and monoplanar MBI at the synchrotron have both already advanced to the stage of veterinary studies [36][37][38]. It has remained an open question whether the temporal sequence of the conventional irradiation and the high dose rate radiotherapy boost is of any consequence. The data of an earlier study suggested that it might be wise to include an MBI SIB into a conventional WBRT schedule early rather than late, based on the better tumouricidal effect seen in an accompanying in vitro study [39]. The results of the current study strongly support this recommendation, from the aspects of both tumour cell destruction and patient safety. Although the statistical power might be limited by the number of animals in each experimental group, the agreement with our earlier study on the inclusion of an MBI SIB [39], in favour of including the SIB at the beginning rather than at the end of a conventional, low dose rate radiotherapy schedule, is encouraging.
While no adverse effects were observed in the MBI study, the radiogenic lethality was 40% after PBI with a comparable valley dose, when the PBI SIB was administered at the end of the conventional radiotherapy schedule. However, no fatalities occurred with the PBI SIB as the first irradiation fraction. The death of the animals receiving a PBI SIB at the end of the radiotherapy schedule was surprising: considerably higher in-beam PBI doses had been administered, without dramatic side effects, in a WBRT concept in an earlier study [13]. The reason for the death of the animals in the present study is most likely a more pronounced acute increase of intracranial pressure, due to intracerebral edema within the first two hours after the PBI SIB, in animals which had been pre-irradiated with conventional BB WBRT. This would fit the clinical picture of the epileptic seizures observed in the animals immediately before death. The damage to the tumour-supplying blood vessels (neovasculature) and the development of vasogenic cerebral edema after WBRT have been described [40]. Thus, the advantages of a high dose rate SIB scheduled early rather than late consist in a lower risk of death and in a significantly higher percentage of cell destruction, compared to the low dose rate radiotherapy. To take advantage of this, it might be worth considering the inclusion of an MBI SIB into a prophylactic clinical low dose rate WBRT schedule. With a valley dose of 2 Gy, which is typical for prophylactic WBRT, the toxicity would be significantly lower than in our study. The resulting MRT peak dose would also be significantly lower than in our study, below 100 Gy. However, considering the achievable increase of the biologically effective dose and the therapeutic success seen with comparably low peak doses in the veterinary MRT study currently conducted at the ESRF, it might be worthwhile to test such a concept.
Conclusions
The results of this study support the working hypothesis that the integration of a high dose rate microbeam SIB into a conventional, low dose rate schedule of whole brain radiotherapy increases tumour cell destruction without producing unacceptable acute adverse effects, provided the boost is administered at the beginning of the radiotherapy schedule. The sequence of conventional radiotherapy and a high dose rate boost in a spatially fractionated technique is highly important for the outcome, not only for better tumour control but also as an aspect of patient safety. Given equal integrated doses, the normal tissue toxicity increases with increasing doses in the paths of the microbeams. Compared to PBI, the monoplanar microbeam irradiation (MBI) appears to be the safer option for a SIB integrated into a conventional BB WBRT schedule. When incorporated into a conventional, low dose rate BB WBRT schedule in the scenario described in this study, the microbeam SIB should be administered early, rather than late. | 10,582.6 | 2022-12-01T00:00:00.000 | [
"Medicine",
"Physics"
] |
Public-private partnership as a mechanism for effective management of state property
This article examines the global experience of using public-private partnerships and its positive and negative impact on economic growth. Factors influencing sustainable technological progress, and state policy regarding the opportunities and economic benefits available to Ukraine through the expansion of public-private partnership in the context of globalization, are identified. A methodical set of tools is used which allows public administration representatives not only to realistically assess the resource potential of state property, but also to manage it effectively under conditions of rapid technological development, limited funding, and the impossibility and/or inexpediency of direct state management of property. The study shows that public-private partnership in Ukraine has not yet gained much popularity, although changes in all areas of activity encourage a new perception of and attitude to public-private partnership through: the attractiveness of investments in public sector development for foreign companies and sole proprietors; competitiveness, transparency and the ability to select the best private partners; risk sharing, with each risk allocated to the partner able to mitigate it most effectively; profitability, preferences and a reduced tax burden on business; high-quality and prompt project development, thanks to the experience of the private partner in a particular field; the ability of the public partner to control the development and implementation of the project; and support for new business ideas, an innovative approach and maximum use of the private partners' experience.
Introduction
The implementation of large-scale modernization projects in various sectors of the economy requires significant investment resources, a powerful source of which can be private business. At the same time, under the conditions of post-crisis development, business interest in state support is growing, since such support reduces the risks of private investment and increases the reliability of investment projects for credit institutions.
The intensity of society's development, driven by current trends and challenges, creates new preconditions for establishing communication between the state and business, namely public-private partnership.
Thanks to this form of cooperation, competition (or even confrontation) is replaced by constructive dialogue and the establishment of partnerships, which can be a new step towards achieving common goals and a guarantee of important changes in the state and society.
In addition, public-private partnerships involve pooling and coordinating efforts, resources, equal participation of each party, and shared responsibility for performance to address specific challenges.
The purpose of this article is to study the nature and problems of public-private partnership in Ukraine and to develop practical recommendations for improving the mechanism of effective management of state property.
Material and methods
Theoretical research on the category of "partnership" makes it possible to state that the concepts of "partnership" and "cooperation" are very often conflated. Although they have much in common, they are not just different concepts: they are different systems, both in terms of institutional and organizational structure and in terms of structural and functional purpose.
The analysis of research on the category of "public-private partnership" shows that a public-private partnership is a legally established form of cooperation between the state and the private sector aimed at addressing socio-economic problems and achieving goals in which both parties have a stake; it is applied, first of all, to the implementation of investment projects in capital-intensive branches of the national economy for whose development the state is responsible.
Based on the study of public-private partnership as a form of partnership, it can be argued that the main factors determining the form of public-private partnership in specific projects are: the features of national legislation; the schemes for allocating investment risks; the experience in organizing the contractual relationships necessary for partnership; the industry affiliation of the project or type of activity; and the determination of the payer(s) for the services of the object and the consequences of the chosen form for them.
The analysis of research on the categories of "partnership" and "cooperation" suggests that cooperation is a process in which tactical benefit takes priority within the individual strategic goals of the parties, whereas partnership is a relationship in which the strategic goal takes precedence over individual interests (Andres N. et al., 2009; Delmon J., 2009; Eggers W., Startup T., 2006; Mandri-Perrott C., 2009).
Results and discussion
Public-private partnership is a set of public relations that lie at the junction of public and private law and are governed by both branches of law.
Based on the study of public-private partnership as a form of partnership, it can be argued that the main factors determining the form of public-private partnership in specific projects are the features of national legislation, the scheme for the distribution of investment risks, the experience in organizing the contractual relations necessary for partnership, and the industry affiliation of the project or type of activity.
Partnerships present the parties involved with complex negotiations and specific problems (complex goals, levels of compromise, areas of responsibility, subordination and succession, ways of assessing and sharing success, etc.) that need to be addressed in order to reach an agreement.
Once an agreement has been reached, the partnership is usually enforced in accordance with civil law, and partners who wish to make their agreement clearly articulated and enforceable usually add partnership articles to the agreement.
Given that not everything can be foreseen and written in a partnership agreement, trust and pragmatism are also the key to good governance in the long run.
Today, the main areas of partnership are business, politics (or geopolitics), knowledge, and individual trajectories (Table 1).

Table 1. Directions of partnership and their features
• Business: two or more companies form a joint venture or consortium to work on a project (for example, industrial or research) that would be too difficult or too risky for one company; to join forces to create a stronger market position; or to comply with certain rules (for example, in some developing countries foreigners can only invest in the form of partnerships with local entrepreneurs; in this case, the alliance can be structured in a process comparable to a merger and acquisition agreement).
• Politics (or geopolitics): in what is commonly called an alliance, governments can work together to achieve their national interests, sometimes against allied governments with opposing interests (World War II, the Cold War, etc.).
• Knowledge: accreditation agencies are increasingly likely to rate schools or universities on the quality of their partnerships with local or international counterparts and various other sectors of society.
• Individual: some partnerships arise on a personal level (when two or more people agree to live together), while other partnerships are not only personal but also private, known only to the parties involved.

It should be noted that while a business partnership strengthens mutual interests and accelerates success, some forms of cooperation may be considered ethically problematic, for example, when a politician enters into a partnership with a corporation to promote the latter's interests in exchange for some benefit. In such cases a conflict of interest arises and, as a consequence, the public good may suffer. And although this practice is technically legal in some jurisdictions, it is generally seen as corruption.
The dominant areas and forms of partnership have certain features among the most common counterparties in the world and depend on the legal framework and the ethical standards of conduct of business entities.
Summarizing the above research, we note that before choosing partnership as a form of business, one should realistically assess one's own business and its potential and make sure that the prospective partner has its own resources, ideas, experience, a strong team, a positive business reputation and prospects for development in the relevant market.
Research by global and domestic institutions shows that there are different classifications of government-business partnerships, and that the choice of the form of partnership with private capital depends on the goals set by the government or municipality, or by the body managing the property and acting as the customer when placing an order, and on the amount of property rights transferred by the state to the business.
The World Bank, in turn, has its own views on the structuring of public-private partnership projects and the subsequent classification of PPP (Table 2).
In the case of concession agreements of all types, there is already a partial transfer of some property rights from the state to the private partner (usually the powers of use, possession and management). Each joint project of the state and business is, as a rule, temporary: it is created for a certain term to solve a specific task.
The main differences between the considered forms of PPP are systematized and presented in Table 3.
An integral condition for the normal functioning of a market economy is the constructive interaction of business and government agencies, the methods and specific forms of which may differ significantly depending on their maturity and national characteristics of market relations.
In addition, it should be noted that the state is never free from performing its socially responsible functions related to national interests, and business, in turn, always remains the source and engine of development and increase of social wealth.
Until 2010, seven main types of concession agreements were considered in Ukraine. However, in connection with the inclusion in a number of Ukraine's international agreements of certain provisions of the International Bank for Reconstruction and Development and the World Bank, the adaptation of Ukrainian legislation in preparation for accession to the WTO (2009), and the adoption in 2010 of the Law of Ukraine "On Public-Private Partnership", the following agreements may be concluded in Ukraine:
• a concession agreement;
• a property management agreement (only if the agreement concluded within the framework of a public-private partnership stipulates investment obligations of the private partner);
• an agreement on joint activities;
• other agreements.
In addition, an agreement concluded within the framework of a public-private partnership may contain elements of various agreements (a mixed agreement), the terms of which are determined in accordance with the civil legislation of Ukraine. According to Article 3 of the Law of Ukraine "On Public-Private Partnership", the basic principles of public-private partnership include:
• equality of public and private partners before the law;
• prohibition of any discrimination against the rights of public or private partners;
• coordination of the interests of public and private partners for the purpose of mutual benefit;
• ensuring higher efficiency of activity than would be achieved by the state partner without the involvement of a private partner;
• invariability, during the entire term of the agreement concluded within the framework of the public-private partnership, of the purpose and form of ownership of objects that are in state or communal ownership or belong to the Autonomous Republic of Crimea and are transferred to the private partner;
• recognition by public and private partners of the rights and obligations provided by the legislation of Ukraine and determined by the terms of the agreement concluded within the framework of the public-private partnership;
• fair sharing between public and private partners of the risks associated with the implementation of public-private partnership agreements;
• determination of the private partner on a competitive basis, except as provided by law.
The variety of forms of public-private partnership allows extensive use of private capital in solving many of the state's tasks in the areas of production of public goods and public services and in the spheres of natural monopolies.
In these areas, the state cannot give up its presence and uses public-private partnerships to resolve the contradiction between the limited capabilities of the state budget and the need to invest capital to ensure the reproduction and development of these strategically and socially significant areas.
It should be noted that from the first version of the Law "On Public-Private Partnership" (2010) to the current one, significant changes have taken place (mainly in 2015 and 2020), concerning the scope of application, the form of the project, property rights, etc.
Thus, according to Art. 4 of the Law of Ukraine "On Public-Private Partnership", PPP is used in various areas of economic activity, taking into account their specific features (Fig. 1).
Source: suggested by the author
In these areas, the state cannot give up its presence and to resolve the contradictions between the limited capacity of the state budget and the need to invest capital to ensure the reproduction and development of strategic and social importance of these areas uses public-private partnership in various areas of economic activity. their specific features.
Thus, for the period from 2010 to 2020, the priorities for the state remained mechanical engineering; water collection, purification and distribution; health care; tourism, recreation, culture and sports; ensuring the functioning of irrigation and drainage systems; and the production, distribution and supply of electricity. In 2015 and 2019 (mostly), the priorities of the state changed and, in accordance with the law, such areas were excluded as prospecting and exploration of mineral deposits and their extraction; production, transportation and supply of heat, and distribution and supply of natural gas; and construction and/or operation of motorways, roads, railways, runways at airfields, bridges, overpasses, tunnels and subways, sea and river ports and their infrastructure.
The vector of other spheres has also changed:
• waste treatment (specified as waste management, except for collection and transportation);
• real estate management (specified as the production and implementation of energy-saving technologies, and the construction and overhaul of residential buildings completely or partially destroyed as a result of hostilities in the territory of the anti-terrorist operation);
• installation of modular buildings and construction of temporary housing for internally displaced persons.
The scope of application of public-private partnership has been expanded in the fields of social services and the management of social institutions; educational and health care services (donation of blood and/or blood components, and the procurement, processing, testing, storage, distribution and sale of donor blood and/or blood components); and the management of architectural monuments and cultural heritage.
It should be noted that, according to Article 7 of the Law of Ukraine "On Public-Private Partnership", the transfer of an object of public-private partnership to a private partner, including its further reconstruction, restoration, overhaul and technical re-equipment by the private partner, does not entail the transfer of ownership of this object to the private partner and does not terminate the right of state or municipal ownership of such an object; after the termination of the relevant agreement, in the manner prescribed by the agreement concluded within the framework of the public-private partnership, such objects are returned to the state partner.
Regarding the general understanding of certain provisions on the status of objects of public-private partnership, it should also be noted that: the objects of public-private partnership are reflected on the balance sheet of the private partner and are separated from his property, and the private partner applies separate accounting to such property; objects in respect of which a decision on privatization has been made cannot be objects of public-private partnership; public-private partnership objects cannot be privatized during the entire term of implementation of the public-private partnership; and the use of land for public-private partnership is regulated by Article 8 of the Law of Ukraine "On Public-Private Partnership" and must comply with the regulations of Ukraine.
The main sources of public-private partnership funding, according to Article 9 of the Law of Ukraine "On Public-Private Partnership", are the financial resources of the private partner; financial resources borrowed in the prescribed manner; funds from the state and local budgets; and other sources not prohibited by law.
Summing up the study of the state of regulatory and legal support of public-private partnership in Ukraine, we note that: during the period of validity of the Law of Ukraine "On Public-Private Partnership" (2010-2020) there were significant changes (mainly in 2015 and 2019) concerning the scope, form, object, property rights, etc.; cooperation between partners can be carried out within different structures, with different competences, different sets of tasks and different sources of funding; and there is a large number of different models of public-private partnership aimed at qualitative economic and social change.
Among the forms of public-private partnership represented in Ukraine are also corporatization and the creation of joint ventures, where the degree of freedom of the private sector in making administrative and economic decisions is determined by its share in the share capital (Table 4). The dynamics of the number of projects concluded and implemented in Ukraine on the basis of public-private partnership in 2012-2020 confirm the existing trend: contracts are concluded, but most of them are not implemented (Fig. 2). The most active region for projects related to water collection, treatment and distribution is Mykolaiv region; for the construction and/or operation of highways, roads, railways, runways at airfields, bridges, overpasses, tunnels and subways, sea and river ports and their infrastructure it is Odessa region; and for the production and transportation of natural gas it is the Transcarpathian region.
This situation is due to the fact that decisions on the feasibility of public-private partnership projects in previous years were not based on a feasibility study of the effectiveness of such projects in the form of concessions; as a result, it was not possible to assess the financial capacity of the concessionaire to implement the project (for example, the Lviv-Brody project, where the concessionaire turned out to be insolvent and it was impossible to replace it).
Among the public-private partnership projects concluded in Ukraine in 2012-2020, the vast majority are concessions (Fig. 3). Among the agreements not implemented (as of 2019) there were 153, of which: water collection, treatment and distribution (30 projects); production, transportation and supply of heat (6); construction and operation of highways, roads, bridges, overpasses, tunnels, sea and river ports and their infrastructure (16); tourism, recreation, culture and sports (1); prospecting, exploration and extraction of minerals (1); waste treatment (112); production, distribution and supply of electricity (5); real estate management (2); and others (13); by region, Mykolaiv accounted for 15, Odessa for 14 and Kyiv for 11.
Among the 52 agreements on the basis of public-private partnership that are being implemented, the vast majority are in Mykolaiv, Odessa, Kyiv, Donetsk and Lviv regions.
The most active projects are implemented in the fields of sewerage and water supply (50%), transport infrastructure (22%) and the production, distribution and supply of electricity (7%) (Fig. 4). In this regard, it should be noted that the World Bank, which has its own methodology for accounting for private investment projects, finds that in Ukraine, since independence, only 20 full-fledged public-private partnership projects have been implemented, for a total of USD 2,271.3 million; the most attractive sectors for attracting private funding were electricity (75% of all projects), seaports (12%), water supply and sewerage (9%), and information and communication technologies (4%).
Conclusions
Based on the analysis of public-private partnership in Ukraine, it can be argued that an institutional environment for the implementation of public-private partnership has been formed in Ukraine; it has elements of hierarchy, where each level has its own participants, represented by regulatory institutions that determine the rules of relations and the rights and obligations of participants. However, despite the positive developments, the state and pace of project implementation are unsatisfactory. One of the main reasons for this state of affairs is the lack of trust in the state as a business partner, due to the ongoing instability of public finances, the variability of legislation and the high level of corruption, including at the middle level of administration, which is responsible for public-private partnership projects. Thus, according to the World Bank, in Ukraine, despite the developed regulatory framework and the public and non-profit institutions promoting the development of public-private partnership infrastructure, there is almost no experience of successful implementation of large public-private partnership projects compared to leading countries.
| 4,754.4 | 2021-09-29T00:00:00.000 | [
"Business",
"Economics",
"Political Science",
"Law"
] |
Towards “Born-Accessible” Educational Publishing
This paper reports on how accessibility is being slowly implemented in the current editorial and production workflows of Australian educational publishers. The findings follow from an online questionnaire commissioned by the Australian Publishers Association completed by 65 educational publishers. The paper shows that many publishers have started working on accessibility implementation, but some of them are still at the scoping stage. While many of the participants believe that the quality of “born-accessible” publication is better for all users, they are concerned about the amount of work and financial cost involved. Overall, publishers understand the need for accessibility implementation, but require further practical support and training. Publishers are also interested in working out the best workflows, timing and processes, and most cost-effective way of implementing accessibility.
Introduction
Changing legal requirements and growing industry interest all point to accessibility's urgency and importance; however, studies from Australia [1], Canada [2] and the European Union (EU) [3] indicate that knowledge and skills remain a challenge for publishers. Already in 2005, Frederick Bowes called on "publishers to develop and implement informed operating policies and protocols that assure that on an ongoing basis its products and services meet applicable accessibility requirements and thus can fully compete in an increasingly demanding marketplace" [4]. The commercial logic is compelling: if inclusive publishing increases the number of students able to access (that is, use) textbooks, it also increases potential sales. In other words, creating accessible educational resources makes good business sense, opening opportunities to serve a substantial, under-serviced market segment and helping build publishing industry capability and resilience. The critical question remains, however, what the marginal cost of every additional user is, which may explain why decisions about investments in changing publishing processes happen at the margin and why change is somewhat fragmented.
In 2015, Australia ratified its adherence to the 2013 Marrakesh Treaty to Facilitate Access to Published Works for Persons Who Are Blind, Visually Impaired or Otherwise Print Disabled [5] and in 2016, it revised its national (government) procurement rules, requiring public libraries and educational institutions to procure digital products, services and content that meet accessibility requirements. Following this, the NSW Department of Education, along with other state school systems, began reviewing their accessibility implementations and updating their procurement procedures [1]. In short, educational publishers that fail to comply with incoming requirements will at some point become unable to sell their resources.
Further, in 2017, Australia amended its Copyright Act 1968 to legally buttress access to copyrighted material for persons with a disability and to allow producers to convert published materials (including textbooks) into accessible formats. Nonetheless, because current conversion processes remain time and resource intensive [6], the resulting delays in receiving suitable class materials continue to constrain educational outcomes for students with print disabilities, creating and perpetuating disadvantage [7].
Given these rafts of changes and their surrounding issues, the need to further research implementation frameworks and develop practical planning, execution and evaluation tools is ever more urgent. Hence, one key question presses this project forward: what is the most workable path for publishers to meet their obligations within a heterogeneous industry where commitment to change and resource/capacity for change vary significantly? Answering this entails understanding:
• current states of accessibility implementation among educational publishers
• types of software and other tools used at different stages of the publishing process
• supports needed to embed accessibility policies and inclusive publishing workflows.
Beginning with an outline of this project's background and a review of available research, this paper then presents quantitative and qualitative data gathered from a recent online survey of educational publishers. Subsequent discussion examines key details regarding respondents' accessibility implementations, taking in: methods of approach, organizational leadership, activities, challenges and perceived knowledge and skills gaps. It concludes with a look to the future and several recommendations for the Australian Publishers Association (APA). Afterward, it attends to the study's limitations and considerations for further research.
Background
Interest in accessibility implementation in the publishing industry is part of a broader accessibility revolution sweeping society that is also flowing into many fields of scholarship and industry. The genesis of this global cultural and societal phenomenon is closely tied to the 1948 Universal Declaration of Human Rights [8] and the 2006 UN Convention on the Rights of Persons with Disabilities [9]. These international instruments have influenced national legislative, administrative and judicial practices around the world and are transforming the way books are published. The Marrakesh Treaty was a pivotal point in the challenge of improving access to books for persons with print disabilities. Apart from globally facilitating access to existing libraries of accessible content, the Marrakesh Treaty also heightened interest in accessibility implementation in the publishing industry.
In 2019, the EU's European Accessibility Act (EAA) [10] made adopting inclusive publishing practices more urgent, by requiring member states to implement it by 28 June 2022, with enforcement slated to start on 28 June 2025. In contrast to the exception-based Marrakesh Treaty, this EU law requires publishers to produce digital publications in accessible formats for the European market, and the entire supply chain to deliver content through accessible services. While the directive has unavoidable implications for European publishers, it also affects any non-European organization seeking to sell books to European markets.
Several research projects have investigated the accessibility implementation in the publishing industry to date. The UK-based ASPIRE project reviewed the state of e-book accessibility in 2016 [11] and the state of "accessibility information" in relation to e-books and platforms in 2018 [12]. The global DAISY Consortium has periodically surveyed publishers since 2018, but its results inevitably skew towards publishers with an active interest in inclusive publishing. DAISY's 2020 survey revealed a promising trend towards awareness building and born-accessible content creation, and widespread adoption of the app Ace by DAISY for testing. At the same time, the cost and time required to produce good quality alt text and implement other accessibility related practices, especially in math, chemistry and scientific materials were reported as key challenges [13].
The research project carried out in Canada by the Association of Canadian Publishers and eBOUND Canada and published in 2020 focused on three areas: reviewing Canada's English-speaking landscape and potential publishing market for accessible digital books; investigating standards and certification programs; and developing industry awareness and training strategies in relation to accessible books [2]. Based on surveys, focus groups and interviews of people with print disabilities, publishers, librarians, book distributors, alternative format suppliers and leaders in accessibility initiatives, the final report overviewed Canada's accessible e-books and audiobooks supply chain. The study's focus on publisher perspectives aimed to identify and offer accessibility-related recommendations regarding: "barriers to production, distribution and discovery … best practices for marketing and selling … [and] market-led [creation] incentives". Canadian publishers reported a lack of accessibility awareness and various production, distribution, discoverability and cost concerns, making particular reference to highly illustrated books [2]. A subset of this report's extensive list of recommendations is particularly relevant to publishers, focusing on the need for more education and training in creation of alt text, accessible e-book files and workflows used to produce them, and the need to carry out accessibility audits of files, workflows and websites [2].
A research project carried out in Australia in 2020 [1] focused on understanding what publishers and alternative content producers were doing in terms of producing accessible content, their motivations and challenges to lessen the duplication of effort in the short term, and transform the production of accessible content in the long term. The findings from the survey of publishers showed that despite much goodwill and the fact that digital book production was almost the norm, accessible production was not. Publishers articulated ethical, legal and creative motivations to produce accessible e-books, and saw the return on investment to be of lesser importance. As in Canada, skills and knowledge deficits and limited awareness were cited as key barriers and challenges to accessible e-book production. This research noted several opportunities to explore born-accessible content production, along with the suggestion that supply chain stakeholders ought to collaborate more [1]. The second survey aimed at staff in disability organizations, alternative format providers and educational institution disability support services sections focused on accessible-format conversion processes and key challenges. Its primary recommendation was for publishers to respond to requests more quickly and provide updates on their processing, fast-track access to suitable files, such as Adobe InDesign, Illustrator, EPUB, Microsoft Word or editable PDFs (free of DRM restrictions or watermarks). It also recommended that publishers have on their websites clearly defined and accessible content policy requests and procedures-which seems an easily attainable goal [6].
Several more recent industry surveys have also been carried out in Europe. A Supporting Inclusive Digital Publishing through Training (SIDPT) investigation surveyed the current state of accessibility implementation and industry training readiness to support and develop the Inclusive Publishing in Practice platform (see Braillenet [3] and [14]). In 2021, the DAISY Consortium investigated EU member implementation preparations for the EAA, including the extent to which they had engaged government ministries and stakeholder platforms. Consistent with other surveys, this effort identified awareness, training and clarity shortfalls [15]. In early 2022, the UK's Accessibility Action Group launched a survey seeking to monitor progress and identify "gaps in solutions, knowledge resources and guidance" [16].
The above investigations informed the questionnaire design of this project, with adaptations to suit the specifics of Australia's educational publishing sector. Unsurprisingly, the number of educational publishers in Australia is relatively modest. At the time of writing, of the 87 APA member organizations producing substantive materials for the education sector, fewer nominated educational publishing as their major revenue source (namely, 10 in scholarly and journal publishing, 27 in school educational publishing and 12 in tertiary and professional publishing) [17]. While not all educational publishers or educational technology companies in Australia are members of the APA, most key players are. APA members range from small to large, local to multinational, start-up to well-established and employ a variety of business models, and this diversity was reflected in this survey's responses.
Methods
This paper reports findings from a survey within a broader research enquiry into accessibility implementation in editorial and production workflows of educational publishers. The survey's instrument proposed a set of closed and open-ended questions designed to reveal current publishing practices, software usage, production outputs, knowledge and skill levels and the industry needs to support the production of born-accessible educational materials. Of further and particular interest, was understanding the nature and extent of staff accessibility awareness over a variety of working roles and functions.
An invitation to participate was circulated directly via email to all members of the APA's Schools Educational Publishers Committee and indirectly via newsletter to the wider APA membership. A link to the questionnaire was also shared using social media, including via LinkedIn and Twitter. To cover the range of relevant roles-in acquisitions, editorial, rights and permissions, production, marketing and management-all staff of educational publishers were invited to participate, anonymously and voluntarily.
The questionnaire instrument was created using Qualtrics and structured to capture limited demographic information that might prove material to understanding a possible variety of approaches. Respondents were asked for example, to note the size and nature of their organization, addressable market segments (primary and secondary school, higher education, vocational and educational training) and individual function or role.
Respondents were asked to begin by providing informed consent, and to finish by supplying an email address if they wished to be notified of results or would consent to a follow-up interview. The latter option was disaggregated from the responses to the main questionnaire.
After testing with a small sample of publishing professionals, the questionnaire was disseminated as noted above, and available for responses from 15 April to 31 May 2022. The data was subject to qualitative (free text) analysis using Microsoft Excel and quantitative analysis (multiple choice questions, as well as re-coded free text responses) using a combination of SPSS and Microsoft Excel.
Findings
The survey of educational publishers received 65 responses from staff in management, editorial, acquisition, commissioning, project management and other areas of business. These respondents predominantly represented independent publishers (54%), then global publishing groups (42%), and individual professionals from an academic publisher, a government publisher and an education technology company.
3
The entities represented ranged in size from several having fewer than 10 employees, to one with 500 (see Fig. 1).
The survey permitted multiple choice responses to the question of organizational target market segments, and 80% of respondents said they publish for primary schools, 51% for secondary schools, 42% for higher education and 26% for vocational and educational training.
The publishers reported using a staggering variety of software packages across the various stages of the publishing process, which was predominantly carried out inhouse.
At the acquisition and product conception stage, Microsoft Office was considered key, followed by Adobe Acrobat and Adobe Creative Cloud. Respondents also noted using different systems for more specific functions in this stage however, namely for: project and data management, file sharing and collaboration, market research, user testing, cloud-based e-signature services, non-Adobe graphics and web design, software development and simulation, and (Zoom) video conferencing. Authoring processes and content development were managed using Microsoft Office (mainly Word), followed by Adobe Acrobat, Adobe Creative Cloud and Google Docs. Three respondents mentioned using a "proprietary XHTML authoring/content development/editing/proofing/production platform (vendor-owned)", and another two noted a third-party platform for digital content. Again, respondents also mentioned a variety of more function-specific platforms in use, in relation to e-learning, digital assessment solutions, video editing, file sharing and FTP transfer, project management, non-Adobe PDF editing, and software development and simulators.
Workflows, projects and business processes were largely managed using Microsoft Office, followed by the Google Suite, but again, other platforms were useful, for example, in relation to time tracking and file sharing.
Design and layout were mostly reported as involving Adobe Creative Cloud: especially InDesign but also Illustrator, Photoshop and Acrobat. Two respondents noted using third-party platforms for digital, and one, the aforementioned vendorsupplied XHTML platform. One respondent reported entirely outsourcing design and layout.
For proofing, most respondents reported using Adobe Acrobat, followed by Microsoft Office applications-typically Word and occasionally Excel. Again, respondents also noted that more specific tasks and functions required correspondingly specific platforms and applications, either relating to or involving: Adobe software (InDesign/InCopy), HTML editors in different online platforms as well as e-book specific platforms; learning, assessments, and training management tools, video editing platforms, and a proprietary XHTML platform mentioned above. Two respondents reported that they outsourced proofing.
In quality assurance and testing, respondents reported using Adobe Acrobat, Microsoft Word, EPUB checker and Ace by DAISY. Respondents also referenced a variety of third-party e-book and digital resource platforms, app development software, video editing platforms, digital assessment solutions and collaboration tools. One respondent outsourced this stage.
Regarding sales and marketing, a variety of customer relationship management (CRM) systems were in high use. Respondents also referenced Microsoft Office, Adobe Creative Cloud, Google Apps, online stores (including one digital content specific distribution platform), social media and collaboration platforms. One respondent noted using an AI-driven marketing tool.
To distribute products, respondents relied on their own organizational websites as well as more specialist software and systems relating to e-learning, accounting, customer relations and business management-where again, Microsoft Office remained in use.
Among the 42 respondents who responded to the question about the views of their organisation with "regards" to accessible publishing (respondents were asked to choose all answers that apply): • 64% considered that meeting accessibility requirements would improve quality and user experience across all their digital products • 60% considered it to be a social and moral responsibility • 40% were concerned about the amount of work and financial cost involved • 29% were aware of accessibility requirements but had not yet taken steps to integrate them into publication workflows • 21% said they wanted to produce accessible digital publications, but were not sure how to do so • 10% claimed to be unfamiliar or unsure • 7% had no capacity (knowledge, resources or otherwise) to initiate change • 7% saw no financial or other benefit for their business • 5% were worried that it would erode publications quality • 5% viewed it to be the government's responsibility.
On the question of progress, and more specifically, whether organizational action had begun on accessibility implementation, a majority of respondents (59% of n = 41) reported having started to integrate accessibility into their publishing workflow, while 17% had not and 24% remained unsure. Table 1 shows that larger companies were far more likely than smaller ones to have started such engagements (n = 40).
Respondents (n = 41) nominated a variety of ways their entities were approaching accessibility implementation: • 32% had tasked oversight to an individual or team • 27% monitored their products for compliance on a regular basis • 24% provided awareness training to their employees • 20% embedded accessibility into product conception and authoring processes • 17% regarded accessibility as integral to their policies • 17% employed service providers and freelancers to comply with requirements • 12% always checked compliance before publishing • 10% provided skills training to employees • 5% involved people with print disabilities in design and development processes.
At the same time, 32% of respondents had not yet taken measures to make their publications accessible and 17% were not sure. In one organisation, "Accessibility is seen as critical and is being implemented but there is a huge cost imperative, and we are still trying to work out the best workflows and timing and processes and most cost-effective way of implementing it." In contrast, another respondent wrote that, "In previous years, accessibility was an afterthought, something that was added retrospectively and content had to be remediated. Now, accessibility is designed well upstream from the point of content creation. It was originally something only for digital products, but now we are considering accessibility requirements in print too." The process has been driven by an accessibility/diversity and inclusion working party/task force at four publishers (out of 12 respondents), in two by production teams, in another two by learning designers, in one by editorial manager, and in one by the UK content team. In one company a mandatory accessibility awareness and training program was globally rolled out for all staff. Five of the respondents mentioned the support of senior management.
Five respondents (n = 14) reported organizational embedding of accessibility at the production stage. Two embedded it (especially for digital products) at the point of content creation, and another two claimed to be overall actively working towards inclusive publishing practices. Four respondents were investigating accessibility implementation, but said they needed more training and advice. One respondent noted that organizational engagement with accessibility was limited to providing files to a "university support team".
Barely a quarter of publishers reported running accessibility quality assurance (26%), adding accessibility metadata (26%), or making multimedia accessible for students who are blind (26%). The inclusion of alternative text is one of the key elements of making content accessible and typically authors (44%), development editors (44%) or subject specialists (31%) are responsible for its creation, with some of the 16 publishers outsourcing the task to local (18%) or overseas vendors (25%). Editors (80%) or proofreaders (53%) are typically responsible for checking the alternative text provided.
Fewer than half of respondents (38%, n = 38) had undertaken accessibility production training, whether for print, digital or both. Those who had, did so via online webinars (for example, as provided by the DAISY Consortium), internal online workshops and peer-to-peer sharing. Respondents generally indicated clear needs for: • increased expertise and capacity to enable further accessibility improvement • unified, streamlined and simplified standards, contacts, processes and services • instructional materials designed "to guide scoping and development decisions".
As one respondent added, "We are concerned about finding the best training to help us with implementation. We would like to know about expert services that can help guide internal teams, or even where we can outsource certain tasks like alt-text development." In terms of training required, there is a clear appetite for practical sessions with accessibility "best practice". Other specific training needs which topped the list include: images and text alternatives (illustrations, maps, infographics, graphs, etc.), tables (format best practice, alternative text, etc.), graphic and layout (contrast, use of colour, responsive design, etc.), interactive elements. There is less interest in training on accessibility policy, business context and legislation, which shows that publishers understand the need for accessibility implementation, but need further practical support.
Discussion
Despite having their progress slowed by the COVID-19 pandemic [18], a notably higher proportion of respondents (almost 60%) indicated engagement with accessibility compared with Australia's (40%) broader publishing industry result in 2020 [1]. Overall, this is not surprising, given educational publishers' stronger legal, moral and commercial imperatives. While many of the participants believed the quality of born-accessible publication to be better for all users, they remained concerned about the volume of work required and financial cost involved. The question of cost was also raised by publishers in Canada.
A small proportion of respondents reported having either no capacity for accessibility or seeing no benefits in its implementation. Few remained concerned that implementing accessibility might adversely affect publication quality. This suggests a certain lack of understanding of the principles of inclusive design, which accrue positively beyond the needs of students with print disabilities.
This survey found that larger publishers were more likely than smaller ones to already be working towards producing accessible materials. Some respondents were uncertain about their organization's progress in starting work. If this was because they were not directly involved in the process, that may also point to poor companywide policy communication, especially in smaller organizations, but in any case, it indicates a lack of accessibility engagement. This differs from the findings of the 2020 survey, where publishers of all sizes have been able to produce accessible content. As educational publications are generally more complex than other kinds of text, they require greater investments to make them accessible, and therefore, the need for greater human, organizational and financial resources is not surprising.
Embedding accessibility in the production stage is somewhat more common than adopting inclusive publishing practices, but it could be a transitional stage as further two respondents reported exploring embedding accessibility into the whole publishing process. As some organizations are exploring this more positive direction however, it seems plausible to expect that in the future, this current lag may prove to have been a transitional delay. At some organisations the process is being driven by teams tasked specifically with focusing on accessibility or diversity and inclusion. In others, it is managed by functional teams (such as production or learning design). A whole-company approach is rare, with only one respondent reporting an all-staff mandatory training on accessibility awareness.
Respondents commonly considered accessibility in content structure, graphic design, alternative text descriptions, editing and proofreading. Fewer reported thinking about accessibility in terms of metadata, multimedia and quality assurance. This latter point is inconsistent with the DAISY Consortium's 2020 report of widespread adoption for Ace by DAISY app for testing. Even so, neither DAISY's app, nor accessibility checkers such as those available in Adobe PDF and Microsoft Word obviate the need for manual accessibility review and testing of, but this step was missing from most respondents' workflows. It is also worth noting here, that Canadian work exploring certification program feasibility endorsed Benetech's Global Certified Accessible program (GCA) as being able to "increase publisher awareness, confidence and capability" [2].
Interestingly, educational publishers seem to rely less on outsourcing production than the industry average in Australia, noted in the 2020 survey. Respondents reported using a remarkable variety of software systems, packages and platforms across the various publishing processes, producing a corresponding diversity of educational resources and formats. Adobe InDesign remains key to the sector. Unfortunately, EPUB files created using Adobe InDesign lack several important accessibility features, including: accessibility metadata, page lists and number locators, ARIA roles, ability to include extended image descriptions, add structured code and language tagging. Moreover, it is difficult to create sections and landmarks in InDesign, and the resulting file contains needlessly complex code [19]. Remediating EPUB files created using Adobe InDesign is thus necessary, but demands extra work and cost, in addition to the further expense in creating alternative text.
While publishing workflows are already highly digitized, print remains the most common production output. Still, almost all respondents reported producing resources in digital formats. The popularity of PDFs is unsurprising but ensuring this format's accessibility remains challenging. Given the EPUB3 format was released over a decade ago (in 2011) and that it is natively more accessible than its predecessor-offering richer navigation, being human-and machine-readable and containing support for multimedia and MathML-the ongoing prevalence of EPUB2 is surprising, and concerning [20].
Respondents demonstrated healthy appetites to learn about best practice, workflows, timing and processes, in order to cost-effectively implement accessibility. They were less interested in undertaking training in matters of policy, business context and legislation, which publishers no doubt more broadly understand. In sum, the standout training need in relation to accessibility implementation, is for more practical support.
Conclusions
A range of contemporary global and local technological developments, cultural emphases, legislative enactments and industry commitments point to a continuing intense focus on the need to implement more inclusive publishing practices. This project's findings reveal a sector with complex digital workflows, which is still very much in transition toward making "born-accessible" publishing a reality, with larger publishers reporting being further along the path toward producing accessible publications.
Educational publishers in Australia are at least aware of, if not engaged at some level with accessibility implementation, and generally supportive of the idea that natively accessible educational resources would be better for all. Nonetheless publishers have caveats, or at least questions, concerning required volume of work and financial costs. At present, publishers typically adopt accessibility at the production stage, where it is somewhat hampered by (among other things) publisher reliance on Adobe InDesign, with its inadequate support for accessibility. Although publishers have been focusing on ensuring the correct content structure, accessible graphic design, and the inclusion of alternative text descriptions, few have addressed the automatic and manual quality assurance processes needed to check accessibility compliance.
While far more resources on inclusive publishing practices are available now than in 2020, educational publishers in Australia lack structured experience, tailored guidelines and practical training on best workflows, timing, and cost-effective means of delivery. Here, perhaps the industry's peak body, the APA-which has already committed to supporting "educational publishers to meet the mandatory requirements for accessible learning materials" [21] -might further prioritize equitable sector-wide capacity-building, taking care not to neglect smaller publishers. There is a clear need to continue the work of the Australian Inclusive Publishing Initiative [22] in leading the education of the publishing sector, tracking progress in accessibility implementation, and working with other stakeholders in the book supply chain. With this and other accessible solutions in sight, in spite of undoubtable challenges-including that many educational publishers have a long way to go-it seems safe to predict that, in the not-too-distant future, "born-accessible" educational resources will become the publishing norm.
Limitations
Care should be taken not to apply these results overly strictly as representative of Australia's entire education publishing sector due to the possible introduction of respondent self-selection bias in the project methodology, and the instrument's low sample size. Moreover, while an online questionnaire is a useful tool for gathering preliminary data, it does not allow fuller exploration that would deliver more nuanced reasoning, attitudes and opinions. For example, it would be interesting to know why publishers are still producing EPUB2 files and what processes they use to remediate accessibility in files produced using Adobe InDesign. Further qualitative research incorporating person-to-person interviews could thus investigate the motivations, challenges and practices of individual publishers in more detail, as well as better explore appetites and feasibilities for an industry-endorsed certification program. | 6,503 | 2022-11-07T00:00:00.000 | [
"Computer Science"
] |
Physiological implications of arginine metabolism in plants
Nitrogen is a limiting resource for plant growth in most terrestrial habitats since large amounts of nitrogen are needed to synthesize nucleic acids and proteins. Among the 21 proteinogenic amino acids, arginine has the highest nitrogen-to-carbon ratio, which makes it especially suitable as a storage form of organic nitrogen. Synthesis in chloroplasts via ornithine is apparently the only operational pathway to provide arginine in plants, and the rate of arginine synthesis is tightly regulated by various feedback mechanisms in accordance with the overall nutritional status. While several steps of arginine biosynthesis still remain poorly characterized in plants, much wider attention has been paid to inter- and intracellular arginine transport as well as arginine-derived metabolites. A role of arginine as an alternative source besides glutamate for proline biosynthesis is still debated and may be prevented by differential subcellular localization of the enzymes involved. Apparently, arginine is a precursor for nitric oxide (NO), although the molecular mechanism of NO production from arginine remains unclear in higher plants. In contrast, conversion of arginine to polyamines is well documented, and in several plant species ornithine can also serve as a precursor for polyamines. Both NO and polyamines play crucial roles in regulating developmental processes as well as responses to biotic and abiotic stress. It is thus conceivable that arginine catabolism serves on the one hand to mobilize nitrogen stores, while on the other hand it may be used to fine-tune development and defense mechanisms against stress. This review summarizes the recent advances in our knowledge about arginine metabolism, with a special focus on the model plant Arabidopsis thaliana, and pinpoints still unresolved critical questions.
Introduction
Plant growth is often limited by the availability of nutrients. In many cases nitrogen is the limiting essential element. Nitrogen shortage causes detrimental effects on agricultural productivity, yet excessive nitrogen fertilization has negative economic and environmental impacts. Improving nitrogen use efficiency represents a main challenge for agriculture, and it becomes increasingly important to investigate the mechanisms of nitrogen uptake, storage and recycling and to understand the interplay of these processes with the regulation of plant development and stress defense.
Because it has the highest nitrogen-to-carbon ratio among the 21 proteinogenic amino acids, arginine is a major storage and transport form for organic nitrogen in plants, in addition to its role as an amino acid for protein synthesis, a precursor for polyamines and nitric oxide (NO) and an essential metabolite for many cellular and developmental processes. In seed proteins of different plant species 40-50% of the total nitrogen reserve is represented by arginine (VanEtten et al., 1963;King and Gifford, 1997), and this amino acid accounts for 50% of the nitrogen in the free amino acid pool in developing embryos of soybean (Micallef and Shelp, 1989) and pea (de Ruiter and Kollöffel, 1983). Arginine is often a major nitrogen storage form also in underground storage organs and roots of trees and other plants (Nordin and Näsholm, 1997;Bausenwein et al., 2001;Rennenberg et al., 2010). Therefore, arginine metabolism plays a key role in nitrogen distribution and recycling in plants (Slocum, 2005). Slocum (2005) reviewed those genes that have been identified as encoding enzymes involved in arginine synthesis in Arabidopsis (Arabidopsis thaliana) and presented the current state of their characterization, including subcellular targeting, gene expression, available mutants and cDNAs of each enzyme. Over the past 10 years, research mainly on Arabidopsis as model plant has generated significant progress in our understanding of arginine metabolism, although several crucial questions remain unanswered. The present review highlights challenges for future research on plant arginine metabolism by summarizing recent advances about biosynthesis, distribution and catabolism of arginine and its contribution to polyamine and NO synthesis.
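As a quick illustration of the nitrogen-to-carbon argument, the ratio can be computed directly from the standard molecular formulas of the free amino acids; the short Python sketch below (the amino acids shown are chosen for illustration and are not taken from the review) ranks a few nitrogen-rich examples.

```python
# Back-of-the-envelope check of the N:C argument, using atom counts
# from standard molecular formulas of the free amino acids.
ATOMS = {          # amino acid: (nitrogen atoms, carbon atoms)
    "arginine":   (4, 6),   # C6H14N4O2
    "histidine":  (3, 6),   # C6H9N3O2
    "asparagine": (2, 4),   # C4H8N2O3
    "glutamine":  (2, 5),   # C5H10N2O3
    "lysine":     (2, 6),   # C6H14N2O2
    "glutamate":  (1, 5),   # C5H9NO4
}

for name, (n, c) in sorted(ATOMS.items(), key=lambda kv: -kv[1][0] / kv[1][1]):
    print(f"{name:<10} N/C = {n}/{c} = {n / c:.2f}")
# arginine tops the list (N/C ~ 0.67), consistent with its suitability
# as a nitrogen-dense storage compound.
```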
Arginine Biosynthesis
The biosynthetic pathway of arginine can be divided in two processes. First, ornithine is synthesized from glutamate either in a cyclic or a linear pathway, followed by the synthesis of arginine from ornithine.
Cyclic and Linear Pathways for Ornithine Synthesis
Ornithine is synthesized from glutamate via several acetylated intermediates (Figure 1). In the first step, N-acetylglutamate synthase (NAGS) uses acetyl-coenzyme A (Acetyl-CoA) to transfer an acetyl moiety to glutamate forming N-acetylglutamate (Slocum, 2005). N-acetylglutamate is then phosphorylated at the C5 position by N-acetylglutamate kinase (NAGK). The next step, the formation of N-acetylglutamate-5-semialdehyde (NAcGSA), is catalyzed by N-acetylglutamate-5-P reductase (NAGPR). In the fourth step, an amino group is transferred from a second glutamate molecule to N-acetylglutamate-5-semialdehyde by N²-acetylornithine aminotransferase (NAOAT), yielding N²-acetylornithine. Subsequently, ornithine is released by transferring the acetyl residue to glutamate by N-acetylornithine:N-acetylglutamate acetyltransferase (NAOGAcT), giving this enzyme the key role of conserving the acetyl group for the next cycle of ornithine synthesis. NAOGAcT is found in non-enteric bacteria (Cunin et al., 1986), fungi (Davis, 1986), and plants (Shargool et al., 1988).
Escherichia coli and other enterobacteria, as well as yeast, synthesize ornithine in a linear pathway due to the presence of N-acetylornithine deacetylase (NAOD), which hydrolyses N-acetylornithine to ornithine and acetate as the final step (Vogel and Bonner, 1956;Meinnel et al., 1992;Crabeel et al., 1997). Plants were considered unable to use this pathway, since NAOD activity has not been demonstrated in plants so far (Slocum, 2005;Page et al., 2012;Frémont et al., 2013). Recently, Molesini et al. (2015) revealed the first hint of NAOD activity in Arabidopsis using T-DNA insertion lines (see below).
Arginine Synthesis from Ornithine
Arginine is synthesized from ornithine by the enzymes of the linear "arginine pathway" (Micallef and Shelp, 1989;Slocum, 2005). Ornithine transcarbamoylase (OTC) delivers the third N-atom by carbamoylation of the δ-amino group of ornithine, forming citrulline. This reaction requires carbamoyl phosphate, which is generated from ATP, bicarbonate and the δ-amino group of glutamine by carbamoyl phosphate synthetase (CPS). The fourth N-atom of arginine is derived from aspartate, which is ligated to citrulline by argininosuccinate synthase (ASSY). As substrates of CPS and ASSY, respectively, the amino acids glutamine and aspartate are additional essential precursors for arginine synthesis. Finally, argininosuccinate lyase (ASL) splits off fumarate, generating the final product arginine (Slocum, 2005; Figure 1).
The enzymes of plant arginine biosynthesis have been partly characterized biochemically (Shargool et al., 1988), but still little is known about the genes encoding these enzymes, and many steps of arginine biosynthesis remain poorly characterized in plants.
Genes and Enzymes of Arginine Biosynthesis and their Regulation
NAGS
The first, and presently the only, characterized plant NAGS was isolated from tomato (Kalamaki et al., 2009). SlNAGS1 is a single copy gene and the SlNAGS protein shows a high level of similarity to two predicted Arabidopsis NAGS proteins, NAGS1 (At2g22910) and NAGS2 (At4g37670). A plastid transit peptide is predicted for SlNAGS1 and the plastid localization is supported by the expression of SlNAGS1 in all aerial organs, whereas no expression was detected in roots (Kalamaki et al., 2009). Transgenic Arabidopsis plants overexpressing SlNAGS1 showed a significant accumulation of ornithine in the leaves, and a higher tolerance to salt and drought stress compared to wild type plants. The improved tolerance to salt stress of SlNAGS1-overexpressing plants was attributed to the elevated levels of ornithine, citrulline and arginine, since these amino acids have been reported to accumulate together with proline in higher plants under salinity stress (Mansour, 2000;Ashraf and Harris, 2004).
NAGS activity is a target of feedback regulation by arginine in prokaryotes and a similar mechanism is proposed for plant NAGS (Kalamaki et al., 2009;Sancho-Vaello et al., 2009). Sancho-Vaello et al. (2009) showed regulation of Pseudomonas aeruginosa NAGS activity by arginine, which acts as an activator at low arginine concentrations and as an inhibitor at higher arginine concentrations. The effects of arginine on NAGS activity were mediated by altered domain interactions within NAGS.
NAGK
The localization of Arabidopsis NAGK (At3g57560) in chloroplasts was predicted by sequence analysis and was experimentally demonstrated by Chen et al. (2006). Feedback regulation of NAGK mediated by the plastidic PII protein was first described in the cyanobacterium Synechococcus elongatus (Heinrich et al., 2004;Maheswaran et al., 2004) and in Arabidopsis (Burillo et al., 2004). PII proteins are among the most highly conserved, widely distributed and ancient signal transduction proteins known in bacteria, archaebacteria, cyanobacteria, eukaryotic algae and higher plants. They are involved in sensing the carbon/nitrogen balance and the energy status of the cell. Targets include signal transduction proteins, key metabolic enzymes and transporters involved in nitrogen assimilation and uptake (Slocum, 2005;Feria Bourrellier et al., 2009). The PII protein has been shown to interact tightly with NAGK, inducing a conformational change of its T-loop and leading to decreased feedback inhibition of the enzyme complex by arginine (Heinrich et al., 2004;Slocum, 2005;Llacer et al., 2008;Feria Bourrellier et al., 2009). Interaction of NAGK and the PII protein under conditions of high nitrogen availability strongly increases the catalytic efficiency of NAGK and significantly decreases the sensitivity of the enzyme complex to arginine, resulting in high arginine production. Limitation of nitrogen prevents the formation of the NAGK/PII complex, resulting in decreased enzyme activity of NAGK and increased feedback inhibition of the complex by arginine (Heinrich et al., 2004;Maheswaran et al., 2004;Slocum, 2005;Chen et al., 2006;Llacer et al., 2008). Feria Bourrellier et al. (2009) demonstrated that the interaction of NAGK and the PII protein was counteracted by α-ketoglutarate/2-oxoglutarate, the carbon skeleton used to form glutamate during nitrogen assimilation and a low-nitrogen abundance signal in plants (Lancien et al., 2000), as well as by arginine and glutamate. The flux through the arginine biosynthetic pathway depends on the balance between energy status and nitrogen and carbon availability for nitrogen assimilation via the glutamine synthetase (GS)/glutamate synthase (GOGAT) pathway in the plastids. Schneidereit et al. (2006) showed a threefold decrease in the intracellular α-ketoglutarate level induced by high nitrogen conditions in plants, due to a rapid NH₄⁺ assimilation by the GS/GOGAT cycle. This suggests that a high nitrogen status will be sensed by the PII protein through a low level of α-ketoglutarate, and thus under these conditions PII-NAGK complex formation will be favored, leading to arginine synthesis and nitrogen storage, as well as an increase in arginine and glutamate concentrations, which are expected to limit arginine accumulation by inhibition of NAGK (Feria Bourrellier et al., 2009). Chellamuthu et al. (2014) identified an additional PII-mediated regulatory mechanism, by which high nitrogen availability activates NAGK and thus promotes arginine synthesis. Glutamine binding alters PII conformation, promoting the interaction with and activation of NAGK. This mechanism appears to be conserved from algae to flowering plants with the exception of the Brassicaceae, including Arabidopsis.
NAGPR, NAOAT, and NAOGAcT
Since Slocum (2005), NAGPR has not been further characterized in Arabidopsis. Rice NAGPR was crystallized and characterized (Nonaka et al., 2005). The crystal structure of a putative NAGPR from Arabidopsis (At2g19940) was deposited in the protein data bank (www.rcsb.org/pdb; PDB accession number #1XYG; Levin et al., 2007). However, the details of these structures have not been reported so far.
The NAOAT-encoding gene Arg9 was identified and characterized in the green alga Chlamydomonas reinhardtii (Remacle et al., 2009). Plastidial localization of NAOAT was demonstrated by complementation studies as well as immunoblot analysis. The TUMOR PRONE5 (TUP5, At1g80600) gene of Arabidopsis was demonstrated to encode a NAOAT (Frémont et al., 2013). Characterization of the gene and its mutant lines showed a strongly reduced free arginine content in the chemically-induced recessive mutant tup5, suggesting that the biosynthesis of amino acids that are produced downstream of the NAOAT enzymatic reaction is impaired in this mutant. Consistently, tup5 showed a short root growth phenotype, restorable by supplementation with arginine and its metabolic precursors. A yeast NAOAT mutant was complemented by TUP5. Two null alleles of TUP5 showed a reduced viability of gametes and embryo lethality, possibly caused by insufficient arginine supply from maternal tissue. A TUP5-green fluorescent protein fusion was localized in chloroplasts (Frémont et al., 2013). TUP5 expression is positively regulated by light, and tup5 showed a unique light-dependent short root phenotype. The roots of tup5 seedlings of different ages cultivated in darkness immediately stopped growth when they were shifted into light. Frémont et al. (2013) attributed this phenotype to a blue light-dependent switch from indeterminate growth to determinate growth, with arrested cell production and an exhausted root apical meristem, and, thus, to a critical dependence of root growth on arginine in the presence of light.
No experimental analysis of the putative Arabidopsis NAOGAcT (At2g37500) has been described yet. Expression of the poplar NAOGAcT homolog was not altered in response to putrescine overproduction in a transgenic line (Page et al., 2012).
NAOD
NAOD activity has never been demonstrated in plants, although many putative NAOD-like genes have been identified (Slocum, 2005). Molesini et al. (2015) analyzed NAOD activity in Arabidopsis after downregulation of the putative NAOD gene (At4g17830) using RNA silencing and T-DNA insertion mutants. All analyzed NAOD-suppressed plants showed consistently reduced ornithine content compared with wild-type plants, suggesting that in addition to NAOGAcT action, NAOD contributes to the regulation of ornithine levels in plant cells. Ornithine depletion was associated with increased putrescine and decreased spermine concentrations, and the reduced AtNAOD expression resulted in developmental alterations, namely early flowering and impaired seed setting. A connection between ornithine levels or metabolism and reproductive development had already been proposed by Trovato et al. (2001), who observed early flowering and enhanced flower formation in tobacco plants overexpressing ornithine cyclodeaminase (RolD) from Agrobacterium rhizogenes (see below).
OTC, CPS, ASSY, and ASL
One Arabidopsis line showed an increased sensitivity to exogenous ornithine, which was attributed to reduced OTC expression, potentially due to problems in mRNA 3′-end formation (Quesada et al., 1999). The chemically-induced Arabidopsis mutants ven3 and ven6, in which the small subunit (At3g27740) and the large subunit (At1g29900) of CPS are affected, respectively, showed increased ornithine and decreased citrulline levels, suggesting a disrupted conversion of ornithine to citrulline because of reduced carbamoyl phosphate availability (Mollá-Morales et al., 2011). We could not find any further recent publication reporting on the characterization of plant OTC or ASSY (At4g24830).
A characterization of the Arabidopsis ASL (At5g10920) is also still missing. The rice ASL mutant osred1 showed a short root phenotype like the Arabidopsis NAOAT mutant tup5-1, supporting the suggestion that arginine is essential for normal root growth in different plant species (Frémont et al., 2013;Xia et al., 2014a,b). Expression analysis revealed two alternatively spliced transcripts of OsASL1, OsASL1.1, and OsASL1.2, coding for two ASL isoforms with slightly different N-termini. OsASL1.1 was expressed throughout the entire growth period in most organs, whereas OsASL1.2 was expressed mainly in the roots. In contrast to the plastid-localized OsASL1.1, OsASL1.2 was localized in the cytosol and nucleus. Only OsASL1.1 showed ASL activity in a yeast complementation study. The short-root phenotype of the osred1 mutant was rescued by external arginine supply but not by a NO donor, supporting the hypothesis that arginine is required for normal root growth independently of its function as putative NO precursor (Xia et al., 2014a,b). The poplar ASL homolog was the only gene, among 17 analyzed genes of arginine metabolism in poplar, whose expression was higher in response to putrescine overproduction in a transgenic line. Page et al. (2012) hypothesized a biochemical regulation of arginine biosynthesis involving substrate concentrations or co-factors rather than a regulation at the transcriptional level.
Arginine Transport
Long Distance Transport
Long-distance transport of arginine to nitrogen-storing organs or seeds probably occurs in the vascular tissue and is presumably dependent on amino acid transporters of the AAP family of amino acid/proton co-transporters. Especially important for long-distance arginine transport seem to be AAP3 (At1g77380) and AAP5 (At1g44100), which are involved in loading and unloading the vascular tissue (Fischer et al., 1995, 2002;Okumoto et al., 2004;Svennerstam et al., 2008;Tegeder, 2014). AAP5 transports arginine and lysine with high affinity (Svennerstam et al., 2008) and seems to have an important role in the uptake of basic amino acids by roots (Svennerstam et al., 2011). An additional function of AAP5 in the transport of arginine within plants is supported by its expression throughout the entire vascular system of Arabidopsis (Fischer et al., 1995, 2002;Svennerstam et al., 2008). AAP3 also displays high affinity for basic amino acids (Fischer et al., 2002;Taylor et al., 2015) and was shown to be expressed in the phloem, predominantly in roots (Okumoto et al., 2004). AAP transporters have been localized to the collection phloem of legumes and they are predicted to play a major role in amino acid loading of this tissue. In Arabidopsis, the assignment of clear-cut physiological functions to individual AAPs has not been reported so far (Tegeder, 2014). Dündar and Bush (2009) identified and characterized a bidirectional amino acid transporter (BAT1, At2g01170) in Arabidopsis. Both direct measurement of amino acid transport and yeast growth experiments demonstrated transport activity of BAT1 for alanine, arginine, glutamate and lysine. BAT1 is a single copy gene in the Arabidopsis genome and its mRNA is ubiquitously produced in all organs. Promoter-GUS analysis localized BAT1 expression in the vascular tissue, suggesting that BAT1 may function in amino acid export from the phloem into sink tissues (Dündar and Bush, 2009).
Intracellular Transport
Arginine metabolism is distributed over the three cellular compartments cytosol, plastids and mitochondria. Newly synthesized arginine can be used for protein synthesis directly in plastids or, after intracellular transport, in the cytosol and mitochondria. This generates a need for transport systems for arginine, as well as for synthesis and degradation intermediates. Very little is known about the transport of amino acids into or out of chloroplasts. Members of the preprotein and amino acid transporter (PRAT) family were proposed to mediate transport of amino acids across the inner envelope membrane (Murcha et al., 2007;Pudelski et al., 2010). So far, experimental evidence is only available for the function of PRATs in protein import (Rossig et al., 2013).
The prevalent group of carrier proteins in mitochondria is the mitochondrial carrier family (MCF) with 58 putative members in Arabidopsis (Picault et al., 2004;Haferkamp and Schmitz-Esser, 2012). Two members of the MCF were identified as basic amino acid transporters (BAC1 and BAC2) which mediate the transport of arginine, ornithine and lysine with decreasing affinity and were postulated to be localized in the mitochondrial inner membrane (Hoyos et al., 2003;Palmieri et al., 2006). BAC1 (At2g33820) and BAC2 (At1g79900), together with BOU (a bout de souffle, Lawand et al., 2002), form a sub-group of MCF proteins distinct from other Arabidopsis mitochondrial carriers regarding sequences and function (Catoni et al., 2003;Hoyos et al., 2003;Picault et al., 2004;Toka et al., 2010).
BAC1 and BAC2 were identified as basic amino acid transporters by complementation of the yeast mutant arg11. This mutant is defective in mitochondrial ornithine/arginine transport due to a loss-of-function mutation in the ORT1 carrier (Catoni et al., 2003;Hoyos et al., 2003;Palmieri et al., 2006). ORT1 is an antiporter for ornithine, arginine or lysine and is important for ornithine export from mitochondria, an essential step for arginine biosynthesis in Saccharomyces cerevisiae (Palmieri et al., 1997, 2006). The transport characteristics of BAC1 and BAC2 resemble each other: they were inactivated by the same inhibitors, and their Km and Vmax values were very similar for their most efficiently transported and preferred substrate arginine (Palmieri et al., 2006).
Arabidopsis bac2 mutants showed a conditional phenotype as they grew more slowly than the wild-type on arginine as sole source of nitrogen, while BAC2 overexpressing plants showed the opposite phenotype. Presumably, the expression of BAC2 is a limiting factor for mitochondrial arginine transport in vivo and therefore for the mobilization of nitrogen from arginine (Toka et al., 2010). This is consistent with the higher expression levels of BAC2 in wild type seedlings growing on arginine as sole source of nitrogen (Catoni et al., 2003).
Since bac2 mutants did not show any phenotypical difference to the wild type when growing on soil, other pathways or transporters seem to compensate the lack of BAC2 during vegetative growth (Toka et al., 2010). BAC2 expression was induced during stress and senescence (Toka et al., 2010) and bac2 mutant seedlings recovering from hyperosmotic stress showed significantly reduced leaf growth (Planchais et al., 2014). Probably BAC2-dependent arginine import into mitochondria is required during stress conditions and for recovery of growth after stress (Planchais et al., 2014). The functional redundancy between BAC1 and BAC2 and the expression patterns indicate that BAC1 is sufficient for mitochondrial arginine import during normal plant growth (Catoni et al., 2003;Hoyos et al., 2003;Toka et al., 2010). So far, no information about bac1 knock-out mutants or phenotypes of BAC1 overexpressing plants is available, leaving this point speculative.
As in many other plant species, storage of nitrogen in the form of arginine in seeds is also likely in Arabidopsis, since the total arginase activity, which initiates the release of nitrogen from arginine, increased strongly up to 6 days after germination, accompanied by increases in free arginine and urea levels (Zonia et al., 1995). In order to degrade arginine stored in seeds for nitrogen remobilization, large amounts of arginine have to be transported into mitochondria by the BAC carriers during early seedling development, making it accessible to the mitochondria-localized arginase (Flores et al., 2008). However, only low levels or no expression of BAC2 were found in seeds and seedlings of Arabidopsis (Hoyos et al., 2003;Toka et al., 2010). This finding argues against a prominent function of BAC2 in storage mobilization, indicating that another transporter, probably BAC1, mediates the import of arginine into mitochondria during early seedling development. This suggestion is supported by RT-PCR analysis of BAC1, which is highly expressed in seedlings, in contrast to BAC2, which is mostly expressed in stamens and pollen grains of flowers (Hoyos et al., 2003;Palmieri et al., 2006;Toka et al., 2010;Monné et al., 2015).
Amino acid analysis revealed accumulation of proline and alanine in bac2 mutants (Planchais et al., 2014). BAC2 overexpressing plants showed low arginine levels and simultaneously high levels of ornithine, urea and citrulline, all products of arginine catabolism. Thus, BAC2 is able to increase arginine availability for degradation inside mitochondria, especially under stress conditions (Toka et al., 2010;Planchais et al., 2014;Monné et al., 2015).
Arginine Catabolism and Arginine-Derived Metabolites
Arginine Catabolism
After the import of arginine by BAC1 and BAC2 into mitochondria, arginine catabolism starts with degradation of arginine to ornithine and urea by arginase (Figure 2). Urea is exported to the cytosol, where it is further degraded to ammonia by urease (Witte, 2011;Polacco et al., 2013). Ornithine could be transported back into plastids to re-enter arginine biosynthesis as in the mammalian urea cycle. However, cycling between ornithine and arginine is unlikely to occur in a single cell or tissue in plants, as it would constitute a waste of energy and assimilated nitrogen. Ornithine degradation proceeds by transfer of the δ-amino group to α-ketoglutarate, catalyzed by ornithine δ-aminotransferase (δOAT), yielding GSA/P5C (glutamate-5-semialdehyde/pyrroline-5-carboxylate) and glutamate. GSA/P5C is subsequently converted to a second molecule of glutamate by P5C dehydrogenase (P5CDH). Glutamate is either exported from mitochondria as an anabolic precursor for multiple pathways, or it is further degraded inside mitochondria to α-ketoglutarate, ammonium and NADH by glutamate dehydrogenase (GDH). NADH can be used to fuel respiratory ATP production, while α-ketoglutarate can be fed into the citric acid cycle or can be used to re-assimilate ammonium in the GS/GOGAT system. Like arginine synthesis, arginine catabolism is also regulated in accordance with the overall nutritional status of the plant cell. Arginine utilization seems to be coordinated with the availability of carbohydrates, since sugar starvation caused a substantial increase in the enzyme activities of arginase and urease, as well as of arginine decarboxylase (ADC), which initiates polyamine synthesis (see below), in yellow lupin (Lupinus luteus L.; Borek et al., 2001).
In the following sections, we will firstly summarize the available data on the single steps in the degradation of arginine to glutamate and secondly we will discuss alternative metabolic routes using arginine as a precursor.
Arabidopsis contains two arginase genes, ARGAH1 (At4g08900) and ARGAH2 (At4g08870), which probably arose by a recent gene duplication (Krumpelman et al., 1995;Brownfield et al., 2008). The predicted mature proteins show 91% sequence identity, whereas the predicted mitochondrial transit peptides share only 39% sequence identity. GFP-fusion proteins showed that both ARGAH1 and ARGAH2 are mitochondrial proteins (Palmieri et al., 2006;Flores et al., 2008). The formation of homo- and hetero-oligomers of the two Arabidopsis arginase isoforms has been demonstrated, while the precise oligomeric state and three-dimensional structure remain to be resolved (Winter, 2013).
Seedling arginase activity increases sharply during germination in Arabidopsis (Zonia et al., 1995), loblolly pine (King and Gifford, 1997) and other plant species (Splittstoesser, 1969;Kollöffel and van Dijke, 1975;Kang and Cho, 1990;Goldraij and Polacco, 1999). The analysis of T-DNA insertion mutants demonstrated that roughly 85% of the arginase activity in Arabidopsis seedlings depends on ARGAH2 whereas no developmental defects of argah2 or argah1 mutants were reported (Flores et al., 2008). Infection of mature Arabidopsis plants with the necrotrophic fungus Botrytis cinerea or the protist Plasmodiophora brassicae, causing the agriculturally important clubroot disease, resulted in an upregulation of ARGAH2 expression. Consistently, argah2 mutants showed an increased sensitivity toward clubroot disease (Brauc et al., 2012;Gravot et al., 2012).
In contrast to the increased pathogen susceptibility, argah1-1 and argah2-1 T-DNA insertion mutants as well as the double mutant argah1argah2 showed increased tolerance to abiotic stress. Higher tolerance to water deficit, salt stress and freezing was accompanied by increased NO and polyamine accumulation (Flores et al., 2008;Shi et al., 2013). Consistently, overexpression of arginase in Arabidopsis decreased the resistance and defense against abiotic stress (Shi et al., 2013). No developmental defects were reported for the argah1argah2 double mutants under normal growth conditions.
Interestingly, mutation of the single copy arginase gene in rice caused a strong decrease in growth and fertility, affecting both grain size and the rate of seed setting, whereas overexpression of arginase improved yield under nitrogen-limiting conditions (Ma et al., 2012).
Urea and Urease
Arginase activity is the main source of endogenous urea in higher plants, and recycling of urea seems to be especially important under stress conditions. The available information about metabolism and transport of urea in plants has recently been reviewed by Witte (2011) and Polacco et al. (2013). Urease (urea amidohydrolase) is the only known Ni-containing enzyme in plants (Dixon et al., 1975) and catalyzes the hydrolysis of urea to ammonia and carbamic acid; the latter spontaneously hydrolyzes to ammonia and bicarbonate (Figure 2). The functional assembly of Arabidopsis urease (At1g67550) requires at least three accessory proteins (Witte et al., 2005;Witte, 2011). In Arabidopsis, the production of urea is induced by jasmonic acid via upregulation of ARGAH2 expression (Brownfield et al., 2008).
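For reference, the two-step chemistry mentioned here corresponds to the textbook urease reaction; the equations below show the standard stoichiometry and are not taken from the cited works:

```latex
\begin{align*}
\mathrm{CO(NH_2)_2 + H_2O} &\xrightarrow{\text{urease}} \mathrm{NH_3 + NH_2COOH}\\
\mathrm{NH_2COOH + H_2O} &\longrightarrow \mathrm{NH_3 + H_2CO_3} \;\; (\rightleftharpoons \mathrm{HCO_3^- + H^+})
\end{align*}
```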
The rapid hydrolysis of urea by urease could cause localized alkalinization, which, in turn, could further stimulate arginase, which has a pH optimum ≥9.5 (Jenkinson et al., 1996;Polacco et al., 2013). The alkalinization of the cytosol by induction of arginase and urease activity might constitute an active component of pathogen defense mechanisms (Polacco et al., 2013).
There is a large flux of nitrogen from arginine to ammonia pools due to arginase and urease activity, especially during germination. The increases in free urea levels during germination are generally rather moderate, indicating that urea export from mitochondria and urease are not limiting for nitrogen re-mobilization (Polacco et al., 2013). In addition to germination, urease plays a key role in recycling of nitrogen stored as arginine during senescence or during seasonal changes.
Ornithine δ-Aminotransferase
The second product of arginine hydrolysis is ornithine, which is catabolized by δOAT to GSA and glutamate (Figure 2). δOAT transfers the δ-amino group of ornithine to α-ketoglutarate, and the equilibrium of the δOAT reaction has been found to lie far on the side of GSA + glutamate (Adams and Frank, 1980).
A direct contribution of δOAT to stress-induced proline accumulation, which would require an unknown exit route of mitochondrial GSA or P5C to the cytosol, where P5C reductase (P5CR) is localized, is controversial (Stránská et al., 2008). Proline production by the reverse reaction of proline dehydrogenase (ProDH) is energetically unfavorable. Furthermore, due to the chemical instability of GSA/P5C (Williams and Frank, 1975) and its toxicity when accumulating (Deuschle et al., 2004), export from mitochondria to the cytosol and thus contribution to proline synthesis seems unlikely, but cannot be fully excluded. Roosens et al. (1998) hypothesized that Arabidopsis δOAT (At5g46180) plays an important role in proline accumulation during osmotic stress in plants, because of increased free proline content, δOAT activity and δOAT mRNA in young plantlets under salt stress conditions. This hypothesis was supported by the analysis of transgenic Nicotiana plumbaginifolia plants overexpressing Arabidopsis δOAT, which synthesized more proline than the control plants and showed a higher biomass and a higher germination rate under osmotic stress conditions (Roosens et al., 2002). The exclusive targeting of δOAT to mitochondria in Arabidopsis and unchanged proline accumulation in salt-stressed δoat knockout mutants provided strong evidence against a direct contribution of δOAT to stress-induced proline accumulation (Funck et al., 2008). However, δOAT activity was correlated to proline accumulation in salt-stressed cashew plants and ornithine application strongly enhanced proline accumulation (da Rocha et al., 2012). Overexpression of δOAT in rice resulted in higher proline levels and activated the antioxidant defense, rendering the plants more stress-tolerant (You et al., 2012).
Silencing or deletion of δOAT in tobacco and Arabidopsis, respectively, compromised non-host pathogen defense and pathogen-induced ROS formation (Senthil-Kumar and Mysore, 2012). Further research is needed to understand how δOAT contributes to stress defense and whether GSA produced by δOAT can be used directly for proline synthesis or is obligatorily converted to glutamate by P5CDH.
P5C Dehydrogenase
GSA produced by δOAT inside mitochondria is most probably further converted to glutamate by mitochondrial P5CDH (Deuschle et al., 2001;Funck et al., 2008). Due to the mentioned chemical instability and toxicity of GSA/P5C, the formation of a reversible enzyme complex of δOAT and P5CDH seems likely, channeling GSA to P5CDH without releasing it to the mitochondrial matrix (Elthon and Stewart, 1982;Funck et al., 2008). Substrate channeling from ProDH to P5CDH has been reported for the bifunctional enzyme PutA from Geobacter sulfurreducens and recently also for the monofunctional enzymes from Thermus thermophilus (Singh et al., 2014;Sanyal et al., 2015). A similar co-operation between ProDH, δOAT and P5CDH in plants might explain the relatively low affinity of isolated P5CDH for P5C (around 0.5 mM) as opposed to the fivefold higher affinity of P5C reductase (Forlani et al., 1997a,b;Giberti et al., 2014). Consistently, two P5CDH isoforms have been detected in Nicotiana plumbaginifolia, which may be specifically involved in the oxidation of P5C derived from either proline or arginine (Forlani et al., 1997a). The Arabidopsis genome contains a single P5CDH gene (At5g62530) and knockout mutants were hypersensitive to external supply of arginine or ornithine (Deuschle et al., 2004). In tissues with a high energy demand, glutamate produced by δOAT and P5CDH may be further degraded by GDH to fuel mitochondrial energy production. However, since GDH also releases ammonia, which needs to be re-assimilated in an energy-demanding process, a direct recycling of glutamate for anabolic pathways seems more likely.
Arginine as Precursor for Proline Biosynthesis
Feeding of plants with arginine or ornithine resulted in elevated proline levels and radiotracer experiments demonstrated that both ³H and ¹⁴C from arginine can be recovered as proline (Adams and Frank, 1980;da Rocha et al., 2012). The physiological relevance and the biochemical pathway of the conversion of arginine to proline in plants remain unclear. The most prominent hypothesis is that ornithine, derived from arginine catabolism, is converted by δOAT to GSA/P5C, which then serves as substrate for proline synthesis by P5CR. This model has been doubted, since Arabidopsis δOAT was found to be exclusively localized in mitochondria, while P5CR is localized in the cytosol (Funck et al., 2008, 2012). Isolated corn mitochondria incubated with proline or ornithine released very little P5C and this release was strongly pH dependent and stimulated upon swelling of the mitochondria (Elthon and Stewart, 1982). Direct export of P5C from mitochondria and conversion to proline by P5C reductase were postulated as part of a reactive oxygen species-producing proline-P5C cycle (Miller et al., 2009). Direct evidence for the transport of P5C and the operation of this cycle under physiological conditions is still missing. Inside mitochondria, GSA/P5C is further converted to glutamate by P5CDH (see above). Export of glutamate from mitochondria has been demonstrated and could be the basis for proline synthesis via the glutamate pathway (Linka and Weber, 2005;Di Martino et al., 2006).
Removal of the α-amino group of ornithine and conversion of the resulting pyrroline-2-carboxylate to proline has also been proposed (Mestichelli et al., 1979), but the required enzymes were not described in plants to date. Another alternative pathway from ornithine to proline would be via ornithine cyclodeaminase (OCD), which is found in bacteria and is transferred into plants with the T-DNA of Agrobacterium rhizogenes (Trovato et al., 2001;Mattioli et al., 2008). In the Arabidopsis genome, a homolog (At5g52810) of bacterial OCDs and mammalian µ-crystallins has been identified. However, plants with a decreased expression of the putative OCD had higher rather than lower proline levels and the analysis of the recombinant protein yielded no evidence for OCD activity (Sharma et al., 2013).
Polyamine Synthesis from Arginine
Polyamines (putrescine, spermidine, and spermine) are essential for development and stress responses of plants. Embryogenesis, organogenesis, particularly flower initiation and development, fruit setting and ripening, as well as leaf senescence all require polyamines (Page et al., 2012;Majumdar et al., 2013;Pathak et al., 2014). In addition, the role of polyamines for abiotic stress tolerance and the regulation of nitrogen assimilation is well established in plants. Accumulation of polyamines in large amounts in the cell points toward roles in metabolic regulation of ammonia toxicity, NO production, and balancing organic nitrogen metabolism in the cell (Maiale et al., 2004;Marco et al., 2011;Gupta et al., 2013;Guo et al., 2014;Minocha et al., 2014;Pathak et al., 2014;Tiburcio et al., 2014).
Ornithine decarboxylase (ODC) represents an alternative way for putrescine synthesis by decarboxylation of ornithine. ODC homologs were identified and analyzed in different plant species including datura (Michael et al., 1996), tomato (Alabadí and Carbonell, 1998), tobacco (Lee and Cho, 2001), and chilli (Zainal et al., 2002). The Arabidopsis genome probably lacks a recognizable ODC (Hanfrey et al., 2001). This is in apparent contradiction to the findings of Molesini et al. (2015), who demonstrated that a lowered ornithine level in Arabidopsis NAOD insertion mutants did not influence arginine content but affected the levels of polyamines. In turn, this is consistent with the findings of Majumdar et al. (2013), who analyzed the regulation of ornithine and ornithine-related pathways (arginine and polyamines) in Arabidopsis by diversion of ornithine from arginine biosynthesis, via the overexpression of a mouse ODC. The removal of large amounts of ornithine did not negatively impact arginine biosynthesis itself or the production of polyamines from arginine by ADC, neither by limiting the availability of its substrate arginine nor via feedback inhibition of ADC by excess putrescine. Furthermore, arginine levels were not altered differently in Arabidopsis arginase overexpressing and arginase insertion mutant lines (Shi et al., 2013), pointing toward multiple mechanisms regulating arginine and polyamine biosynthesis.
Arginase T-DNA insertion mutants of Arabidopsis showed much higher expression levels of polyamine biosynthesis enzymes (ADC1 and ADC2, AIH, NLP1) as well as higher putrescine and spermine contents, whereas arginase overexpressing lines had significantly lower mRNA levels of the analyzed enzymes compared to the wild type (Shi et al., 2013). This is consistent with another report demonstrating that reduced arginase activity in Arabidopsis transgenic lines led to a significant increase in putrescine concentration (Brauc et al., 2012).
Polyamines were found to be non-competitive inhibitors of δOAT in Pisum sativum (Stránská et al., 2010). It seems that increased polyamine concentrations can significantly reduce the activity of pea δOAT in vivo and Stránská et al. (2010) hypothesized that this would result in slowing down arginine catabolism. Since polyamines are involved in diverse physiological responses, it could be advantageous for plants to slow down arginine catabolism in favor of polyamine synthesis if necessary.
NO Production Involving Arginine
As a neutral and lipophilic gaseous molecule of small dimensions, NO can easily cross membranes, diffuse into the cytosol and bind to its soluble targets to act as a multifunctional signaling molecule. Besides its implication in plant growth and development from germination to fruit ripening and flowering, NO is also generated in response to a wide range of biotic stresses, such as biotrophic and necrotrophic pathogens, as well as to a number of abiotic stresses, such as heavy metal, drought and salt stress (Neill et al., 2008;Moreau et al., 2010;Yu et al., 2014;Zhang et al., 2014;Domingos et al., 2015).
In plants, there are several sources of NO, including arginine and nitrite-dependent pathways, as well as non-enzymatic NO generation (Leitner et al., 2009;Froehlich and Durner, 2011). The role of arginine as a precursor for NO is increasingly apparent, although the molecular mechanism of NO production from arginine remains elusive. Biochemical assays and the effects of inhibitors of animal NO synthase (NOS) support the existence of NOS-like enzymes in plants, converting arginine into NO and citrulline (Zhang et al., 2014;Domingos et al., 2015). However, all attempts to identify such enzymes in higher plants have failed so far, whereas a NOS protein was described in the green alga Ostreococcus tauri (Foresi et al., 2010). Based on enzymatic assays or staining techniques, NOS-like activity in plants has been associated with mitochondria, chloroplasts and peroxisomes (Froehlich and Durner, 2011). Flores et al. (2008) proposed that mitochondrial arginase activity competes with arginine-dependent NO production in Arabidopsis, while Shi et al. (2013) showed that both Arabidopsis arginase isoenzymes are able to negatively regulate polyamine synthesis as well as NO synthesis. While the existence of a NOS-like activity in plants remains speculative, other studies showed that polyamines can induce rapid biosynthesis of NO in root tips and primary leaves of Arabidopsis seedlings (Tun et al., 2006;Siddiqui et al., 2011;Wimalasekera et al., 2011;Tiburcio et al., 2014).
Both polyamines and NO function in regulation of plant development and as signaling molecules mediating a range of responses to biotic and abiotic stresses (Zhao et al., 2007;Neill et al., 2008;Lozano-Juste and León, 2010;Gupta et al., 2011;Wang et al., 2011). Further research is needed to decipher how arginine, arginine degradation and arginine-derived NO and polyamines influence each other to orchestrate development and plant defense response to stress.
Conclusion
The present review gives an update on the recent advances in research on arginine metabolism in higher plants, mainly derived from work with Arabidopsis. Experimental evidence indicated that both pathways for ornithine biosynthesis, the well characterized cyclic pathway and the linear pathway using NAOD, are present in plants. Arginine levels in plant tissues seem to be regulated by a multitude of mechanisms, since most of the experimental manipulations of arginine biosynthesis or catabolism did not alter arginine concentrations. The role of arginine as an important amino acid for nitrogen storage in plants is complemented by arginine catabolism mobilizing stored nitrogen and fine-tuning the production of NO, polyamines and potentially proline. While several regulatory mechanisms of arginine biosynthesis were identified, understanding of the regulation of the different pathways for arginine utilization requires further research.
Detailed biochemical and physiological characterization of the enzymes mediating arginine metabolism and the regulatory mechanisms allocating arginine-derived nitrogen to signaling, growth, reproduction or defense to stress will provide a better understanding of the role of arginine metabolism in nitrogen use efficiency in plants. Knowledge about essential intermediates and developmental switches between nitrogen storage and remobilization may help to improve crop plants or cultivation conditions. Optimized nitrogen use efficiency of crop plants will be crucial to reduce detrimental effects of nitrogen shortage on productivity and will help to avoid negative economic and environmental impact of excessive nitrogen fertilization in agriculture. | 9,224.6 | 2015-07-30T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Construction of a cellulase hyper-expression system in Trichoderma reesei by promoter and enzyme engineering
Background Trichoderma reesei is the preferred organism for producing industrial cellulases. However, a more efficient heterologous expression system for enzymes from different organisms is needed to further improve its cellulase mixture. The strong cbh1 promoter of T. reesei is frequently used in heterologous expression; however, the carbon catabolite repressor CREI may reduce its strength by binding to the cbh1 promoter at several binding sites. Another crucial point to enhance the production of heterologous enzymes is the stability of recombinant mRNA and the prevention of protein degradation within the endoplasmic reticulum, especially for enzymes of bacterial origin. In this study, the CREI binding sites within the cbh1 promoter were replaced with the binding sites of the transcription activator ACEII and the HAP2/3/5 complex to improve the promoter efficiency. To further improve the heterologous expression efficiency of bacterial genes within T. reesei, a flexible polyglycine linker and a rigid α-helix linker were tested in the construction of fusion genes between cbh1 from T. reesei and e1, encoding an endoglucanase from Acidothermus cellulolyticus. Results The modified promoter resulted in an increased expression level of the green fluorescent protein reporter by 5.5-fold in inducing culture medium and 7.4-fold in repressing culture medium. The fusion genes of cbh1 and e1 were successfully expressed in T. reesei under the control of promoter pcbh1m2. The higher enzyme activities and thermostability of the fusion protein with the rigid linker indicated that the rigid linker might be more suitable for the heterologous expression system in T. reesei. Compared to the parent strain RC30-8, the FPase and CMCase activities of the secreted enzyme mixture from the corresponding transformant R1 with the rigid linker increased by 39% and 30% at 60°C, respectively, and the reducing sugar concentration in the hydrolysate of pretreated corn stover (PCS) was dramatically increased by 40% at 55°C and 169% at 60°C when its enzyme mixture was used in the hydrolysis. Conclusions This study shows that optimizations of the promoter and linker for hybrid genes can dramatically improve the efficiency of heterologous expression of cellulase genes in T. reesei.
Background
Limited fossil resources, growing economies and an everlasting burden on our environment have caused an increasing interest in alternative resources to produce fuels and chemicals. Efficient conversion of lignocellulosic biomasses, the largest renewable resource on earth, requires cost-effective enzyme systems to degrade the polysaccharides to monomeric compounds [1]. The current enzyme mixtures for the bioconversion of lignocellulose are not sufficiently efficient for an economically viable biorefinery of plant biomasses. The filamentous fungus Trichoderma reesei (teleomorph Hypocrea jecorina) [2] is by far the preferred organism for production of cellulases within industry [3,4]. To enable efficient degradation of cellulose, the co-operation of at least three types of enzymes is required: cellobiohydrolases, endoglucanases and β-glucosidases. Because cellobiohydrolase I (CBHI, EC 3.2.1.91) and cellobiohydrolase II (CBHII, EC 3.2.1.91) comprise nearly 85% of the total secreted proteins of T. reesei [5][6][7], the current commercial cellulase mixture used for biomass hydrolysis requires a cocktail consisting of cellulases produced by T. reesei together with β-glucosidase and new endoglucanases from other fungi or bacteria [8]. Another limitation of the cellulases produced by T. reesei is their relatively low thermostability [5,9]. Higher reaction temperatures associated with thermostable cellulases during the hydrolytic process may radically reduce substrate viscosity, leading to higher reaction velocities and better substrate conversion at lower energy consumption [10][11][12]. To improve the cellulase mixture of T. reesei in its composition and thermostability, homologous or heterologous expression of cellulase genes other than cbh1 and cbh2 is necessary in T. reesei.
The cbh1 promoter of T. reesei is known to be a strong inducible promoter, and is therefore commonly used to construct highly efficient heterologous expression vectors in T. reesei and other fungi [13,14]. However, three putative carbon catabolite repressor binding sites are present in the region from -685 to -724 nt of the cbh1 promoter. They are considered to reduce transcripts of cbh1 when glucose is present in the fermentation medium [15,16]. The deletion of these repressor binding sites and introduction of multi-copy activator binding sites in the cbh1 promoter not only eliminated the glucose repression effect, but also increased promoter activity and production levels of heterologous proteins in T. reesei Rut-C30 [17]. In addition to the main repressor protein CREI, many other transcription factors (TFs) of T. reesei have been identified, such as the repressor ACEI, and the positive regulators XYRI, ACEII and the HAP2/3/5 complex [18]. In this study, we test the hypothesis that replacing the negative regulator binding sites of the cbh1 promoter with positive regulator binding sites may further improve the expression level of heterologous genes.
Even though bacteria contain cellulases with interesting properties, heterologous expression in T. reesei of genes originating from bacteria often causes problems [19][20][21]. One of the most obvious reasons for low expression levels might be the degradation of the heterologous cellulases by the abundant proteases produced in the fungal host [22,23]. This issue can be solved by stabilizing the recombinant protein, for instance by creating a fusion with a native protein. The fusion protein will serve as a carrier to facilitate the translocation of the foreign protein in the secretory pathway and, thereby, protect the heterologous part from degradation [14,24].
The endoglucanase E1 (EC 3.2.1.4), secreted by the thermophilic bacterium Acidothermus cellulolyticus, will be used as a case-study to test a novel heterologous expression system within T. reesei. The distinctive characteristics of this enzyme have been shown to be of high potential for industry [21,25]. Besides its robustness due to extreme thermostability, endoglucanase E1 also shows a striking synergism with cellulases of T. reesei at high temperature [26]. Furthermore, the heterologous expression of e1 in corn has been shown to facilitate conversion of pretreated corn stover (PCS) into glucose [21]. The catalytic domain of e1 was also successfully expressed in T. reesei when fused with the catalytic domain of cbh1, resulting in a 30% increase of PCS hydrolysis efficiency at 55°C [13]. However, whether the fused protein could improve thermostability of the complete cellulase complex from T. reesei is still unknown.
In this study, the three CREI binding sites in the cbh1 promoter were replaced by the binding sites of the positive regulator ACEII or the HAP2/3/5 complex, and the efficiency of the modified promoters was quantified using the enhanced green fluorescent protein (EGFP) as reporter. A flexible neutral polyglycine linker and a rigid α-helix linker were used to fuse cbh1 from T. reesei and e1 from A. cellulolyticus. In order to determine whether the fusion protein would result in an increase of CBHI thermostability, the intact cbh1 gene instead of the core region was used in the fusion gene. This expression system, with the novel cbh1 promoter and the different linkers between the intact CBHI and endoglucanase E1, was characterized for its cellulase activity, thermostability and hydrolytic efficiency against PCS at 50°C-75°C.
Results and discussion
Replacing the CREI binding sites with transcription activator binding sites in the cbh1 promoter increased its ability to express heterologous genes
Expression of cbh1 is dramatically decreased when the repressor CREI is bound to its promoter, especially in culture media containing glucose. The deletion of the three CREI binding sites and the insertion of multiple copies of regions with activator binding sites resulted in an increase of cbh1 promoter efficiency and thus a higher expression level of heterologous proteins in T. reesei [17]. Could replacing the cbh1 repressor binding sites with activator recognition sites also improve its activity? In this study, two newly engineered cbh1 promoters were obtained by site-specific mutagenesis: pcbh1m1, in which the -724 CREI motif was changed to the binding site of the transcription factor ACEII (5'-GGCTAA-3'), and pcbh1m2, in which the two other CREI motifs at -698 and -690 within the pcbh1m1 promoter were changed to the binding site of the HAP2/3/5 protein complex (5'-CCAAT-3') (Figure 1). The site-specific mutageneses were confirmed by sequencing. In order to compare the strength of the wild-type cbh1 promoter and its two mutants, the enhanced green fluorescent protein reporter gene (egfp) was placed behind each promoter. This resulted in three expression vectors: pDHt/sk-pcbh1, pDHt/sk-pcbh1m1 and pDHt/sk-pcbh1m2.
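To illustrate this kind of in-silico promoter design, the sketch below swaps motifs at given positions of a promoter sequence for the activator sites named above (5'-GGCTAA-3' for ACEII, 5'-CCAAT-3' for HAP2/3/5). The promoter sequence, the assumed 6-bp length of the replaced repressor motif and the coordinate handling are placeholders for illustration only; they are not the actual cbh1 promoter sequence or the exact mutagenesis procedure used in the study.

```python
def engineer_promoter(promoter: str, edits: dict, motif_len: int = 6) -> str:
    """Rebuild a promoter with each motif replaced by the supplied activator site.

    `edits` maps a negative offset from the TSS (taken as the end of the string,
    mirroring the -724/-698/-690 notation in the text) to the replacement site.
    Offsets refer to the input sequence; motif_len is an assumed repressor-motif length."""
    tss = len(promoter)
    pieces, cursor = [], 0
    for offset in sorted(edits):          # process edits from upstream to downstream
        start = tss + offset              # offset is negative, e.g. -724
        pieces.append(promoter[cursor:start])
        pieces.append(edits[offset])
        cursor = start + motif_len
    pieces.append(promoter[cursor:])
    return "".join(pieces)

promoter = "N" * 800                      # placeholder, not the real cbh1 promoter sequence
ACEII_SITE, HAP235_SITE = "GGCTAA", "CCAAT"   # binding sites as given in the text

pcbh1m1 = engineer_promoter(promoter, {-724: ACEII_SITE})
pcbh1m2 = engineer_promoter(promoter, {-724: ACEII_SITE, -698: HAP235_SITE, -690: HAP235_SITE})
print(len(promoter), len(pcbh1m1), len(pcbh1m2))
```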
After transformation and three consecutive subcultures for genetic stability, five mitotically stable transformants with a single copy of the fused genes were selected from each transformation. M0, M1 and M2 represented the transformants with vectors pDHt/sk-pcbh1, pDHt/sk-pcbh1m1 and pDHt/sk-pcbh1m2, respectively (Figure 2). The promoter strengths of all selected transformants were assessed qualitatively by fluorescence microscopy. The mycelia of all transformants glowed with clear bright green fluorescence after 1 day of growth on inducing culture media, i.e. media containing a mix of wheat bran and cellulose, as shown for M0 in Figures 2B, D and F. In contrast, transformants with the wild-type cbh1 promoter radiated weak fluorescence during growth on repressing culture media (containing 2% glucose) (Figure 2A), while transformants with the modified promoters, like M1 and M2, still showed bright fluorescence (Figure 2C, E). Although Rut-C30 has a truncated CREI [27], the CREI-mediated carbon catabolite repression appeared not to be completely abolished in its derivative RC30-8.
The promoter strengths of M0, M1 and M2 were further assessed quantitatively by real-time (RT) PCR based on the expression level of egfp (Figure 2G). Relative to M0 grown on repressing medium, RT-PCR results showed that the expression level of egfp in transformant M1 was increased by 1.9- and 1.7-fold in inducing and repressing culture media, respectively (Figure 2G). These observations imply that the first mutagenesis, in which the -724 motif of the cbh1 promoter was replaced with the ACEII binding site, did increase the strength of the promoter. However, the increase was not large and was similar to the effect of deleting this CREI motif, as shown in the study of Liu et al. [17]. The replacements of the other two motifs at -698 and -690 with the binding site of the HAP2/3/5 protein complex did result in a significant increase of promoter strength. The egfp expression level showed 7.4- and 5.5-fold increases compared to M0 under the inducing and repressing conditions, respectively (Figure 2G). The mutated promoter pcbh1m2 was much stronger than the mutated promoter ΔpC, in which all three CREI binding sites were deleted, and was also stronger than the mutated promoter Δp4C, in which four copies of ACEII and HAP2/3/5 complex binding sites were inserted in promoter ΔpC [17].
The cellulose-induced cumulative effect of positive regulatory factors was clearly observed in our study. Even during growth on the repressing media, the pcbh1m2 promoter resulted in a stronger expression of egfp than the original cbh1 promoter during growth on inducing medium. Although only a few transcripts of genes are independent of CREI in T. reesei, carbon catabolite repression (CCR) involves interaction of many other transcription factors [28,29]. In addition to CREI, three other proteins, CREII, CREIII and CREIV, participate in CCR. Furthermore, glucokinase (GLKI) and hexokinase (HXKI) are also involved in CREI-mediated CCR [18]. The level of derepression in Δglk1/Δhxk1 strains was higher than in the Δcre1 mutant Rut-C30 [18]. Consequently, straightforward deletion of CREI binding sites cannot abolish CCR. Our results reveal that replacing repressor binding sites within promoters with a variety of activator binding sites is a powerful tool to enhance expression levels of heterologous proteins.
Figure 1. Schematic structure of the cbh1 promoter and its mutants. There are three CREI binding sites located at -690, -698 and -724 in the wild-type cbh1 promoter. An ACEII binding site replaced the CREI binding site at -724 in promoter pcbh1m1. Based on pcbh1m1, HAP2/3/5 complex binding sites were substituted for the remaining two CREI binding sites in promoter pcbh1m2.
The linker design showed significant effects on the efficiency of heterologous expression of a bacterial cellulase gene in T. reesei
The modified promoter pcbh1m2, together with the signal sequence of cbh1, was used to transform T. reesei RC30-8 with the intact ORF or the catalytic domain of endocellulase E1 from the bacterium A. cellulolyticus. Unfortunately, no corresponding protein products were detected in 23 positive transformants via SDS-PAGE or western blotting (data not shown). Expressing and synthesizing bacterial cellulase genes directly in fungi requires overcoming several severe obstacles, such as compatibility of codon bias for correct transcription, stability of the bacterial mRNA, and misfolding or proteolysis after translation [19,20,30]. The e1 mRNAs were detected in those transformants by reverse transcription PCR (data not shown), demonstrating that e1 or its catalytic domain was transcribed by T. reesei. Presumably, E1 was misfolded and then proteolyzed by endoplasmic-reticulum-associated protein degradation (ERAD).
Fusion of a heterologous gene with a native gene has been reported to stabilize the recombinant mRNA, facilitate translocation of the foreign protein in the secretory pathway, and avoid protein degradation [20]. To be able to express e1 in T. reesei, two types of linkers, a flexible neutral polyglycine linker (GGGGS)4 and a rigid α-helix linker (EAAAR)4 [31], were used to fuse the complete coding region of CBHI and the E1 catalytic domain. The two constructs, i.e. tce1-fle (with the flexible neutral polyglycine linker) and tce1-rig (with the rigid α-helix linker), were under control of the novel strong promoter pcbh1m2 and contained a His-tag at the E1 catalytic domain (Figure 3). When transformed into T. reesei RC30-8, the corresponding fusion proteins and the cleaved E1 catalytic domain were detected by SDS-PAGE and Western blotting in the extracellular enzyme mixture of all positive transformants after growth on the inducing media containing wheat bran and cellulose (Figure 4). Two bands of approximately 97 and 40 kDa were detected on Western blots. The large band represented the complete fusion proteins TCE1-FLE or TCE1-RIG, while the smaller band represented the E1 catalytic domains. This indicates that a portion of the linkers was cleaved at their kexin cleavage site (Lys-Arg), so that E1 was separated from the fusion proteins during protein secretion and purification. This strategy proved to be an effective way to protect the bacterial endoglucanase E1 from protein degradation within the endoplasmic reticulum due to misfolding [31][32][33].
The purified enzymes of the transformants with the fusion genes tce1-fle and tce1-rig, which actually comprised the intact fusion protein together with E1 cleaved from part of the fusion protein (Figure 4), were tested for their activity against p-nitrophenyl-β-D-cellobioside (pNPCase or cellobiohydrolase activity), carboxymethylcellulose (CMCase or endoglucanase activity) and filter paper (FPase activity). The optimal temperatures were similar for both transformant sets: 70°C for pNPCase activity, 85°C for CMCase activity, and 60°C for FPase activity (Figure 5). However, at the same optimal temperatures, the two fusion proteins possessed different maximum values for the three cellulase activities (Figure 5A, B, C). In fact, compared to TCE1-FLE, TCE1-RIG had significantly higher pNPCase, CMCase and FPase activities over the tested temperature range (Figure 5A, B, C). For example, the FPase activity of TCE1-RIG (0.95 U/mg protein) increased by 70% compared to the activity of TCE1-FLE (0.55 U/mg protein) (Figure 5C). Moreover, TCE1-RIG showed better thermostability of its pNPCase (Figure 5H, E) and FPase activities (Figure 5I, F) at 60°C and 70°C, as revealed by tracing the activities for incubation times of up to 24 h at 60°C, 70°C and 85°C.
These results demonstrated that the rigid α-helix linker was more suitable for the activity and stability of the fusion proteins. A possible explanation could be that the rigid α-helix linker provides enough space or a specific physical adaptation between CBHI and E1, and therefore maintains the high activities of the fusion protein [31,[34][35][36]. For instance, the fusion protein with the rigid linker had a higher hydrophobicity than that with the flexible linker (TCE1-RIG had 258 AILFWV amino acid residues, while TCE1-FLE had 246 such residues). The potential for ionic interactions was also higher in the fusion protein with the rigid linker (TCE1-RIG had 134 DEKR amino acid residues, while TCE1-FLE had 126). Compared to the native CBHI, the advantage of TCE1-RIG in these physical properties was even more pronounced (20.73% and 11.70% more AILFWV and DEKR amino acid residues, respectively). Another possible explanation could be that the rigid α-helix linker was more stable. The pNPCase activity of the purified native CBHI was drastically decreased after 30 min of incubation at 60°C (data not shown). This indicates that both the activity and the thermostability of pNPCase of the purified proteins depended on the intact fusion proteins (CBHI and E1 interacting together). The higher pNPCase activity and thermostability of the purified enzymes with the rigid linker (Figure 5H, E) may imply that the rigid linker is more stable than the flexible one. The salt bridges of (EAAAR)4 present within the rigid linker were most likely involved in its higher stability [35].
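The residue counts quoted above can be reproduced for any pair of sequences with a few lines of code. The sketch below simply tallies hydrophobic (A, I, L, F, W, V) and charged (D, E, K, R) residues; the sequences shown are short synthetic placeholders, not the actual TCE1-RIG/TCE1-FLE constructs.

```python
def count_residue_classes(seq: str) -> dict:
    """Count hydrophobic (AILFWV) and charged (DEKR) residues in a protein sequence."""
    seq = seq.upper()
    return {
        "AILFWV": sum(seq.count(aa) for aa in "AILFWV"),
        "DEKR": sum(seq.count(aa) for aa in "DEKR"),
        "length": len(seq),
    }

# Placeholder sequences -- substitute the real fusion-protein sequences to reproduce
# the 258 vs 246 (AILFWV) and 134 vs 126 (DEKR) counts quoted in the text.
tce1_rig = "MKAQILVDEAAARWFT"   # synthetic example, not the real TCE1-RIG
tce1_fle = "MKAQILVDGGGGSWFT"   # synthetic example, not the real TCE1-FLE

for name, seq in [("TCE1-RIG", tce1_rig), ("TCE1-FLE", tce1_fle)]:
    print(name, count_residue_classes(seq))
```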
The heterologous expression of the fusion protein had a large impact on the secreted enzymes and their ability to hydrolyze PCS
FPase activities were measured in culture filtrates from 23 positive transformants with tce1-rig constructs. The FPase activities of all transformants increased significantly (P < 0.05), by 10-30%, compared to their parent strain RC30-8 at 60°C (data not shown). R1, R2 and R3 were selected from the tested transformants for further characterization based on their high FPase activities. The activities of FPase and CMCase are normally measured at 50°C; however, the activities of RC30-8 and the transformants had their optimum at 60°C. Moreover, the increase in CMCase and FPase activities between the transformants and their reference strain was much more significant at 60°C. The transformants R1, R2 and R3 had an average increase of 26% (P < 0.001) in CMCase activity and 36% (P < 0.01) in FPase activity compared to the parent strain at 60°C (Figure 6). The most efficient transformant, R1, exhibited a 30% increase in CMCase activity and a 39% increase in FPase activity.
The activities of the enzyme set within the culture filtrate from the transformant with an over-expressed CBHI (transformation control, indicated as TC) were essentially identical to those of the parent strain RC30-8 after growth on a mixture of wheat bran and cellulose. Apparently, the cellobiohydrolase activity is already saturated in the enzyme set of RC30-8 and, therefore, over-expression of CBHI did not result in an increase of the FPase activity. These results demonstrated that the successful expression of the fusion proteins containing the bacterial thermostable endoglucanase E1 contributed to an increased CMCase activity in the secreted enzyme mixture. As a result, the total cellulase activity (measured with filter paper) was also increased in the secreted enzyme mixture of the transformants.
To further analyze the contribution of the highly expressed fusion proteins, a saccharification experiment was performed by incubating PCS with the secreted enzyme set of T. reesei RC30-8 or its transformants at 50-75°C. The subsequent sugar analysis with HPLC detected only glucose and cellobiose in the hydrolysates from PCS (Figure 7). According to the total reducing sugar concentrations (the sum of glucose and cellobiose), the enzyme sets of T. reesei RC30-8 and the transformant TC showed a similar ability to hydrolyze PCS at the different temperatures. The enzyme sets of all strains showed a higher efficiency of PCS hydrolysis at 55°C than at 50°C. However, compared to T. reesei RC30-8, the enzyme set of transformant R1 showed a 40% increase in reducing sugar concentration in the PCS hydrolysate at 55°C (P < 0.001).
The cellobiose concentrations after hydrolyzing PCS at 60°C decreased sharply to near zero for the secreted enzyme set of all strains (Figure 7). This observation is likely explained by more efficient hydrolysis of cellobiose at 60°C, since the β-glucosidase enzymes of T. reesei have their temperature optimum at approximately 60°C [37][38][39]. Compared with 55°C or 50°C, the CMCase and FPase activities of the secreted enzyme mixture from RC30-8 and TC were higher at 60°C (Figure 6A, B); however, the total reducing sugar concentration of the PCS hydrolysate at 60°C decreased significantly (Figure 7). The lower reducing sugar concentrations in the PCS hydrolysate of the reference strains at 60°C were probably explained by the different incubation times used in the measurements of CMCase and FPase activity and in the saccharification experiment. The incubation times in the measurement of in vitro enzyme activities were between 30 and 60 min, while the saccharification experiment lasted 24 h. Therefore, the low thermostability of the native enzymes from T. reesei RC30-8 and TC might lower the saccharification of PCS at 60°C. Moreover, other methodological differences such as optimal pH and substrate accessibility most likely influenced the hydrolytic efficiency [40,41]. In contrast, compared to 50°C, the reducing sugar concentration (mainly the glucose concentration) in the PCS hydrolysate of transformant R1 increased significantly at 60°C. The reducing sugar concentration resulting from transformant R1 was almost three times that from the parent strain or TC at 60°C. At 70°C or 75°C, the enzyme set of RC30-8 or transformant TC almost completely lost the ability to hydrolyze PCS. The PCS-hydrolytic efficiency of R1 also decreased greatly at 70°C or 75°C, with the glucose concentration being about one-third of that at 60°C (Figure 7). However, the cellobiose concentration of transformant R1 increased slightly at 70°C or 75°C. This was likely due to a relatively high thermostability of the TCE1-RIG fusion protein and E1 mixture at 70°C. These results demonstrated that heterologous expression of the thermostable endoglucanase e1 in T. reesei improved the overall quality of its cellulase mixture due to increased enzyme activities and a far better thermostability.
Conclusions
The direct engineering of the cbh1 promoter of T. reesei, by replacing the three binding sites of the carbon catabolite repressor CREI with the binding sites of different transcription activators, proved to be a highly efficient strategy to improve the strength of a promoter. This study also demonstrated that gene fusion with a rigid linker is a successful approach for improving the heterologous expression efficiency of bacterial cellulases within fungi. The high activity and improved thermostability of the fusion protein in this case-study indicated that supplementing the cellulase complex of T. reesei with thermostable cellulases, especially in the form of fusion proteins, greatly improved the ability to release sugars from lignocellulosic biomass such as PCS.
Microbial strains, plasmids and primers
Escherichia coli DH5α served as the cloning host (Novagen, Gibbstown, NJ, USA). Agrobacterium tumefaciens AGL1 was used as a T-DNA donor to maintain the constructs and for fungal transformation [42]. The T. reesei strain RC30-8, which was screened from mutants of Rut-C30 and maintained in this laboratory, was used as the host for heterologous expression. A T-DNA binary vector, pDHt/sk, containing the hph gene coding for hygromycin B phosphotransferase (under control of the Aspergillus nidulans trpC promoter and terminator), was used to construct the transformation vectors. The gene encoding the thermostable endocellulase E1 was obtained from A. cellulolyticus 11B (ATCC® Number: 43068) using primers EIGF, EIHF, EISF and EIR. Different forward primers were used to generate 5'-ends corresponding to the respective linker peptide coding sequences for overlap PCR (Table 1).
The construction and transformation of expression vectors for heterologous genes (egfp, e1) and the fusion genes of cbh1 and e1
The cbh1 promoter and its mutated fragments were double digested with SpeI and XbaI, and ligated into the SpeI/XbaI site of plasmid pDHt/sk to obtain the vectors pDHt/sk-pcbh1, pDHt/sk-pcbh1m1 and pDHt/sk-pcbh1m2. The reporter gene egfp, amplified by PCR with primers GFPF and GFPR, was then introduced into the XbaI site of these vectors to construct the egfp expression vectors.
The catalytic domain fragment of e1 (GenBank: U33212.1) was fused with the intact cbh1 coding region via a flexible neutral polyglycine linker (GGGGS)4 or a rigid α-helix linker (EAAAR)4, using the overlapping primers cbh1GR and cbh1HR (Table 1). The generated fragments were named tce1-fle and tce1-rig. The catalytic domain of e1 was fused with the signal sequence of cbh1 using the primers cbh1F and E1R and named se1h. A 6 × His-tag coding region was also introduced at the 3'-end of all fragments. The original e1 fused with a His-tag was named e1h. After ligation into the XbaI site of plasmid pDHt/sk-pcbh1m2, the heterologous expression vectors harboring tce1-fle, tce1-rig, se1h and e1h were constructed. The cbh1 fragment was also ligated into the same vector as a transformation control. The construction of the fusion genes is shown in Figure 3.
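A conceptual sketch of the fusion-protein layout described here is given below; it simply concatenates placeholder sequences for the CBHI region, the linker, the E1 catalytic domain and the His-tag. Only the linker and His-tag strings are literal; all other components are placeholders, and the kexin (Lys-Arg) cleavage site mentioned in the Results is omitted for simplicity.

```python
# Illustrative protein-level assembly of the two fusion constructs (not the actual sequences).
FLEXIBLE_LINKER = "GGGGS" * 4        # (GGGGS)4, flexible neutral polyglycine linker
RIGID_LINKER    = "EAAAR" * 4        # (EAAAR)4, rigid alpha-helix linker
HIS_TAG         = "H" * 6            # 6 x His-tag appended at the C-terminus

cbh1_protein  = "<full-length CBHI sequence>"      # placeholder
e1_catalytic  = "<E1 catalytic domain sequence>"   # placeholder

tce1_fle = cbh1_protein + FLEXIBLE_LINKER + e1_catalytic + HIS_TAG
tce1_rig = cbh1_protein + RIGID_LINKER + e1_catalytic + HIS_TAG
print(len(tce1_fle), len(tce1_rig))
```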
All of the expression vectors for the heterologous genes were transformed into the recipient T. reesei RC30-8 using Agrobacterium-mediated fungal transformation [43].
Selection and culture of transformants and T. reesei RC30-8
Transformants were selected using hygromycin B (10 μg/ml) and cefotaxime (300 μM) on potato dextrose agar (PDA). Each positive transformant was used to create monoconidial cultures for genetic stability, and single-copy integration of egfp was confirmed using real-time PCR. All fungal strains, including T. reesei RC30-8, were spread on PDA plates, grown at 28°C for about 7 days and then stored at 4°C after conidia had formed. The conidia of the fungal transformants were collected from PDA plates, inoculated into 50 ml flasks containing 10 ml Sabouraud dextrose broth (SDB) and cultured for 2 days at 28°C and 200 rpm on a rotary shaker for protein expression. Subsequently, 1 ml of the culture was transferred to flasks with 10 ml minimal medium plus different carbon sources [3% cellulose powder (CF-11, Whatman, Maidstone, England) and 2% wheat bran (ground to less than 0.5 mm in diameter with a laboratory mill) as inducer, or 2% glucose as repressor] and incubated at 28°C and 200 rpm. The minimal medium contained 0.4% KH2PO4, 0.28% (NH4)2SO4, 0.06% MgSO4·7H2O, 0.05% CaCl2, 0.06% urea, 0.3% tryptone, 0.1% Tween 80, 0.5% CaCO3 and 0.001% FeSO4·7H2O [44], and was adjusted to pH 5.5.
Qualitative and quantitative evaluation of promoter strength
After being cultured for 2 days, mycelia of transformants were collected for fluorescence observation or for total RNA extraction, after rinsing with sterilized water three times. The FastPrep ® -24 (MP Biomedicals, Solon, HO, US) instrument in combination with TRIzol ® Reagent (Invitrogen, Carlsbad, CA, USA) was successfully used for total RNA extraction. Reverse transcription was carried out using the PrimeScript ® RT reagent Kit (Takara, Dalian, China). Relative expression levels of egfp were calculated in comparison with the expression of act encoding actin by RT-PCR with primers GFPrtF, GFPrtR, actF and actR.
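The exact quantification model for the relative expression values is not spelled out here; a common way to compute them from RT-PCR data is the 2^-ΔΔCt method, sketched below. The Ct values are invented for illustration (chosen to give a fold change of roughly the magnitude reported in the Results) and do not come from the study.

```python
def relative_expression(ct_target, ct_ref_gene, ct_target_cal, ct_ref_gene_cal):
    """2^-ΔΔCt fold change of a target gene (e.g. egfp) normalized to a reference gene
    (e.g. act) and expressed relative to a calibrator sample (e.g. M0 on repressing medium)."""
    delta_ct_sample = ct_target - ct_ref_gene
    delta_ct_calibrator = ct_target_cal - ct_ref_gene_cal
    return 2 ** -(delta_ct_sample - delta_ct_calibrator)

# Hypothetical Ct values for egfp/act in transformant M2 versus calibrator M0
fold_change_m2 = relative_expression(ct_target=21.0, ct_ref_gene=18.5,
                                     ct_target_cal=24.4, ct_ref_gene_cal=19.0)
print(f"egfp fold change (M2 vs M0): {fold_change_m2:.1f}")
```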
Purification of fused proteins from the T. reesei transformants
After the transformants had been cultured for 7 days, the culture filtrate was collected by centrifugation at 4°C and 8,000 × g for 10 min. The fusion proteins TCE1-FLE and TCE1-RIG in the collected culture filtrate were purified using Novagen Ni-NTA His•Bind® Resin (Merck, Darmstadt, Germany) with step-gradient elution. The 40 mM imidazole washouts were collected, and the buffer was exchanged with 20 mM NaH2PO4 (pH 7.4) using a Vivaspin™ ultrafilter (10 kDa cut-off; GE Healthcare, Piscataway, NJ, USA) to remove the imidazole. Fusion protein production was examined by SDS-PAGE and Western blotting using an anti-His antibody (Yeli, Shanghai, China). Proteins were quantified using the DC protein assay kit (Bio-Rad, Hercules, CA, USA), according to the manufacturer's instructions.
Enzyme activity assays
The crude secreted enzymes of the fungal strains (culture filtrates of T. reesei RC30-8 or its transformants, collected by centrifugation at 4°C and 8,000 × g for 10 min after 7 days of culture) or the purified TCE1-FLE and TCE1-RIG were used to examine substrate specificity and to characterize their properties. CMCase activity was assayed by measuring the amount of reducing sugar released from CMC (Sigma, St. Louis, MO, USA) using the DNS method [45]. The assay mixture contained a specific amount of diluted enzyme, 100 μl of 2% CMC and 100 μl of 50 mM saline sodium citrate buffer (SSC, pH 5.0). The mixture was incubated at 60°C for 10 min; 200 μl DNS was added to stop the reaction, followed by incubation for 5 min in boiling water. Photometric assays were analysed at OD540 using a Varioskan Flash microplate reader (ThermoScientific, Rockford, IL, USA).
The FPase activities of the crude enzymes secreted by T. reesei RC30-8 or its transformants, and of the purified fusion proteins, were measured using a modified IUPAC method [47]. The assay mixture was incubated under optimal conditions (60°C, pH 5.0) for 60 min, and the reaction was stopped with 120 μl DNS followed by an incubation in boiling water for 10 min. A unit of enzyme activity (U) was defined as the number of micromoles of reducing sugar or pNP released per minute per milligram protein or per milliliter fermented culture. A two-tailed Student's t-test was performed with Excel 2007 (Microsoft, WA, USA).
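The unit definition above translates directly into a small calculation. The sketch below converts a measured amount of reducing sugar into specific activity (U/mg) and runs a two-tailed Student's t-test, here implemented with SciPy rather than the Excel function used in the study; all numerical values are illustrative.

```python
from scipy import stats

def specific_activity(umol_reducing_sugar: float, minutes: float, mg_protein: float) -> float:
    """U/mg: micromoles of reducing sugar released per minute per milligram of protein."""
    return umol_reducing_sugar / minutes / mg_protein

# Illustrative example: 12 umol glucose equivalents released in 60 min by 0.2 mg protein
print(specific_activity(12.0, 60.0, 0.2))        # -> 1.0 U/mg

# Two-tailed t-test comparing FPase activities of a transformant vs. the parent strain
transformant = [0.93, 0.97, 0.95]                # invented replicate values (U/mg)
parent = [0.66, 0.70, 0.68]
t_stat, p_value = stats.ttest_ind(transformant, parent)
print(t_stat, p_value)
```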
The thermostability of the purified proteins or the secreted crude enzymes was assayed by the similar enzyme activity assay methods mentioned above. Samples were exposed to thermal stress in water baths at 60°C, 70°C and 85°C for up to 24 h. Three aliquots were pipetted out for enzyme activity assays at intervals of 15 min-12 h during the exposure. | 6,746.6 | 2012-02-08T00:00:00.000 | [
"Biology",
"Engineering"
] |
MIDAS2: Metagenomic Intra-species Diversity Analysis System
Abstract Summary The Metagenomic Intra-Species Diversity Analysis System (MIDAS) is a scalable metagenomic pipeline that identifies single nucleotide variants (SNVs) and gene copy number variants in microbial populations. Here, we present MIDAS2, which addresses the computational challenges presented by increasingly large reference genome databases, while adding functionality for building custom databases and leveraging paired-end reads to improve SNV accuracy. This fast and scalable reengineering of the MIDAS pipeline enables thousands of metagenomic samples to be efficiently genotyped. Availability and implementation The source code is available at https://github.com/czbiohub/MIDAS2. Supplementary information Supplementary data are available at Bioinformatics online.
Introduction
Metagenotyping, the identification of intraspecific genetic variants in metagenomic data, is a powerful approach to characterizing population genetic diversity in microbiomes. Most pipelines identify variants based on alignment of reads to reference databases of microbial genomes and/or gene sequences (Supplementary Fig. S1). While comprehensive reference databases can reveal strain-level relationships which would be otherwise overlooked (Beghini et al., 2021), alignment to large databases is computationally intensive. Furthermore, the divergence of reference genomes from strains in the metagenomic sample affects sensitivity and precision (Bush et al., 2020;Olm et al., 2021), and existing metagenotyping tools do not automatically adapt database files based on information in the metagenome. In this article, we introduce the Metagenomic Intra-Species Diversity Analysis System (MIDAS2) (Supplementary Fig. S2), a major update to MIDAS (Nayfach et al., 2016) (Supplementary Table S1) that addresses these challenges through (i) a new database infrastructure geared to run on AWS Batch and S3 that achieves elastic scaling for constructing database files from large collections of genomes; and (ii) a fast and scalable implementation of the single nucleotide variant (SNV) calling pipeline that enables metagenotyping in thousands of samples with improved accuracy achieved through utilization of paired-end reads and databases customized to the species present in the samples. As the only tool that integrates all steps of the metagenotyping process, from database customization to alignment and variant calling, MIDAS2 helps to promote reproducible research.
Implementation
We generated MIDAS Reference Databases (MIDAS DB), comprised of species pangenomes, marker genes and representative genomes, from two public microbial genome collections: UHGG v.1 (Almeida et al., 2021) (4644 species/286 997 genomes) and GTDB v202 (Parks et al., 2022) (47 893 species/258 405 genomes). This is a significant increase in database content compared to MIDAS DB v1.2 (5952 species/31 007 genomes) and other tools (Supplementary Table S2). We implemented a new infrastructure that dramatically simplifies building a new MIDAS DB for other genome collections by using a table-of-contents file assigning genomes to species and denoting the representative genome for each species ( Supplementary Fig. S3). MIDAS DBs can be built locally, which enables customized selection of representative genomes, a key component of accurate SNV calling.
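The table-of-contents concept can be illustrated with a short script that groups genomes by species and records each species' representative genome. The three-column layout assumed here (genome_id, species_id, is_representative) is a simplified stand-in for illustration, not necessarily the exact MIDAS2 file format.

```python
import csv
from collections import defaultdict

def load_toc(path: str):
    """Group genome IDs by species and record the representative genome per species.

    Assumes a tab-separated file with columns: genome_id, species_id, is_representative.
    This column layout is illustrative, not the exact MIDAS2 specification."""
    species_to_genomes = defaultdict(list)
    representatives = {}
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            species_to_genomes[row["species_id"]].append(row["genome_id"])
            if row["is_representative"] == "1":
                representatives[row["species_id"]] = row["genome_id"]
    return species_to_genomes, representatives

# Example usage (with a hypothetical file name):
# genomes, reps = load_toc("genomes_toc.tsv")
# print(len(genomes), "species;", sum(map(len, genomes.values())), "genomes")
```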
Metagenotyping SNVs across large numbers of samples is computationally intensive. First, alignment and pileup are applied to each species in each sample (single-sample step) without assuming a single strain per sample. Then these pileup results must be scanned for each genomic site to compute population SNVs (across-samples step). Previously published methods cap the number of processors (CPUs) that can be used, because they parallelize over the number of species being genotyped (Supplementary Note). The SNV module of MIDAS2 achieves better CPU utilization by splitting genomic sites into multiple chunks per species. We execute parallelization over chunks in a way that does not destroy cache coherence to the point where computation stalls on input or output (I/O; Supplementary Note).
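The chunking idea can be sketched as follows: the genomic sites of one species are split into fixed-size chunks and processed by a pool of workers, so that parallelism is no longer capped by the number of species. This is a schematic illustration of the strategy, not MIDAS2's actual implementation; the chunk size, worker count and worker body are placeholders.

```python
from multiprocessing import Pool

CHUNK_SIZE = 100_000   # number of genomic sites per work unit (illustrative)

def chunk_sites(n_sites: int, chunk_size: int = CHUNK_SIZE):
    """Yield (start, end) half-open ranges covering all genomic sites of a species."""
    for start in range(0, n_sites, chunk_size):
        yield start, min(start + chunk_size, n_sites)

def genotype_chunk(bounds):
    """Placeholder worker: scan pileups for sites in [start, end) and call population SNVs."""
    start, end = bounds
    # ... read per-sample pileup slices, accumulate allele counts, emit SNVs ...
    return end - start

if __name__ == "__main__":
    n_sites = 4_500_000                      # e.g. a ~4.5 Mb representative genome
    with Pool(processes=8) as pool:
        processed = pool.map(genotype_chunk, chunk_sites(n_sites))
    print(sum(processed), "sites processed")
```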
Results
We compared the running time and memory utilization of the single-sample and across-samples SNV modules of MIDAS and MIDAS2, using the same database (MIDAS DB v1.2) and 211 samples from an inflammatory bowel disease cohort (NCBI accession: PRJNA400072). The single-sample SNV module of MIDAS2 is slightly faster than MIDAS (Supplementary Fig. S4), with database customization and Bowtie2 alignment taking up to 75% of run time (Supplementary Fig. S5). The across-samples SNV module benefited more from parallelization, scaling linearly (Supplementary Fig. S4) and running 2.33 times faster in MIDAS2 with 48 CPUs (Fig. 1A). We also compared runtime with inStrain v1.6.3 (Olm et al., 2021; Supplementary Table S14).
MIDAS2, inStrain and metaSNV v2 were applied to three aliquots of a standardized bacterial community (Olm et al., 2021), and SNVs were compared between aliquots, which should have identical metagenotypes (Supplementary Note). metaSNV v2 has the fewest false positives because it only uses uniquely aligned reads, but it genotyped just five of the eight species in the community (Supplementary Table S5). inStrain and MIDAS2 correctly detected all eight species. When both are run with a genome database containing only the reference genomes of the strains in the community, MIDAS2 has fewer false positives (Fig. 1B). However, the false positive rate of MIDAS2 is higher when using the MIDAS DB v1.2, in which these species' reference genomes are divergent from the sample. Thus, high-quality reference genomes and post-alignment filters that balance false positives against false negatives are crucial for metagenotyping.
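One way to operationalize the aliquot comparison described here is to count sites where two aliquots of the same community disagree on the consensus allele. The sketch below expresses that logic; the per-site dictionary format is a simplification of what the tools actually emit, and the example values are invented.

```python
def count_false_positives(calls_a: dict, calls_b: dict) -> int:
    """Count shared sites where two aliquots of the same community disagree.

    `calls_a` / `calls_b` map (contig, position) -> consensus base; since the aliquots
    are genetically identical, each disagreement at a shared site is treated as a
    false-positive call by at least one of the two runs."""
    shared = calls_a.keys() & calls_b.keys()
    return sum(1 for site in shared if calls_a[site] != calls_b[site])

aliquot1 = {("contig1", 101): "A", ("contig1", 250): "G", ("contig2", 7): "T"}
aliquot2 = {("contig1", 101): "A", ("contig1", 250): "C", ("contig2", 7): "T"}
print(count_false_positives(aliquot1, aliquot2))   # -> 1
```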
Since metaSNV v2 was previously shown to be efficient enough to metagenotype thousands of samples, we assessed the scalability of MIDAS2 compared to metaSNV v2 on 1097 samples from the PREDICT study (NCBI accession: PRJEB39223), using MIDAS DB UHGG with both tools (Supplementary Note). Despite the same species selection criteria, MIDAS2 metagenotyped many more species (44 versus 14 for metaSNV v2) (Supplementary Note). MIDAS2 used more memory (21.21 GB versus 4 GB peak RAM utilization) and ran slightly longer (average 106 versus 84 min per species) to achieve this. We conclude that MIDAS2 can metagenotype thousands of samples with reasonable computational costs, providing a more sensitive alternative to metaSNV v2.
For each of the 44 species from PREDICT with MIDAS2 metagenotypes, we quantified evidence of a single dominant strain versus mixtures of multiple strains in each sample with an existing method (Garud et al., 2019). While most species showed evidence of distinct lineages across samples (Supplementary Fig. S8), single samples often had a single dominant strain (Fig. 1C). However, samples with strain mixtures were common for several species, including Bacteroides_B dorei (62%) and Faecalibacterium prausnitzii_G (49%) (Supplementary Figs S9 and S10). We also showed that MIDAS2 can detect simulated strain mixtures with high accuracy (Supplementary Table S15), lending credibility to this finding.
Figure 1. (A) The SNV module of MIDAS2 was re-engineered to parallelize within species, making it increasingly faster than MIDAS as we deploy more CPUs. This analysis was performed with 211 metagenomic samples (NCBI accession: PRJNA400072). (B) Metagenotype accuracy was benchmarked using identical aliquots of a standardized microbial community, for which all consensus SNVs are false positives. More errors are made with a large reference genome database compared to one with only the species in the community (MIDASDB v1.2 versus Zymo Genome). Post-alignment filters, including how paired-end reads are handled, differ between tools (run with default filters) and affect false positive rates. Despite a large database (Pangenomes2), metaSNV v2 has a low false positive rate due to using only uniquely aligned reads, but this comes with the cost of lower sensitivity. Supplementary Figure S6 shows how database and post-alignment filters affect errors in population SNVs; MIDAS2 and inStrain have similar error rates with Zymo Genomes. (C) Distribution of samples with evidence of a strain mixture versus one dominant strain for 44 species metagenotyped by MIDAS2 in 1097 samples from the PREDICT cohort (NCBI accession: PRJEB39223).
| 1,709.4 | 2022-06-19T00:00:00.000 | [
"Computer Science",
"Biology",
"Environmental Science"
] |
Parents’ Economic Status and Academic Performance in Public Primary Day Schools in Multinational Tea Estates Kericho County, Kenya
Dynamism in family finances, family type, and style of parenting has been associated with a child's well-being. Poor performance usually indicates that the causes may lie outside the school, since all schools in the Republic of Kenya are allocated funds, teachers, and other resources equally. The purpose of this study is to investigate the influence of parents' economic status on pupils' academic performance in public primary tea estate schools in Kericho County. The research used social learning theory to conceptualise the role of parents and family in enhancing academic performance. The study adopted a correlational research design. It targeted 336 standard seven and eight pupils, 55 parent association members, 5 deputy headteachers and 5 headteachers from 5 selected schools in the tea estates in Kericho County. A sample of 101 pupils, 55 parent association members, 5 parents, as well as 5 headteachers and 5 deputy headteachers, was obtained using a stratified random sampling technique. The sampled 101 pupils were given questionnaires, while the 55 parent association members, 5 headteachers, and 5 deputy headteachers were interviewed by the researcher. Both questionnaires and interview schedules were used to collect data from the field. Quantitative data were analysed using descriptive statistics (percentages and means), and the Pearson product-moment correlation coefficient was used for inferential statistics. Qualitative data were analysed through content analysis. Financial problems, driven by over-dependence on wages and salaries that barely sustain basic needs, were the issues most frequently affecting parents in the estates. The study therefore concluded that parents' economic status had a significant effect on pupils' academic performance, and recommended that the multinational tea estates consider assisting children in the estates financially through corporate social responsibility.
INTRODUCTION
Knowledge and education have become basic needs in the current world rather than secondary ones. Performance is therefore crucial, given that most education systems require learners to pass through some form of evaluation or test. Poor economic growth in Kenya has led to persistent poverty among Kenyan households. Children from poor family settings combine schooling with other activities such as household chores, farm work, work outside the home and family business (Moyi, 2011). Omar (2012), who conducted research in Kenya, also found that most parents are poor and many are unemployed, making it difficult for them to raise money for school fees.
In January 2003, the Kenyan government introduced Free Primary Education (FPE). Enrolment in public primary schools rose by 1.3 million children, from 5.9 million to 7.2 million in 2002/2003, and by a further 1.4 million to reach 8.2 million children by 2003/2010. In America, lower academic performance, completion of fewer years of school and lower career aspirations were associated with adolescents from lower socioeconomic backgrounds and ethnic minorities. Children of parents with high occupational status are known to model their parents' positive schooling experiences and higher occupations (Dubow, Boxer & Huesmann, 2009). On the contrary, children of low-income parents may model their parents' lower levels of educational attainment (Obonyo, 2018). Most investigations of educational performance have concentrated on school factors, yet home factors constitute the student's immediate environment and should also be investigated.
According to Gabriel et al. (2016), more than one million children were still out of school because of socio-economic status and cultural factors. The parents of children who were constantly absent were unemployed and of low economic status. The researchers identified the socio-economic challenges affecting these children as limited resources at school and in the household, including difficulty buying food and providing a cognitively enriched learning environment at home.
Even though enrolment has increased, some schools are registering marks far below the national average (Glennerster et al., 2011). Cross-sectional data from 2013 to 2018 for schools in Kericho County representing multinational tea companies' estate schools show a mean score of 244.90, below the national mean score of 271.29 over the same period. Studies done elsewhere indicate that school environments and home-based factors lead to poor academic performance, but little has been done in Kericho County. It was therefore necessary to carry out a study on the parents' economic status factors that influence academic performance in public primary schools in the multinational tea estates of Kericho County.
LITERATURE REVIEW
Several scholars have empirically discussed parents' economic status and its influence on student performance (Akeri, 2015; Chinyoka & Naidu, 2014). Most of these studies, however, focus on child labour, family welfare, family set-up and education level in the family, socio-economic status, professional qualification and home chores, among other factors. The present research concentrated on parents' income levels, parent education level, parenting styles and family type, which have received less attention. Akeri (2015) found that household income determines academic performance. Children from poor households are turned away because of failure to meet some costs, both direct and hidden. In some cases, parents withdraw their children from school because of increased demand for household income. Wali (2016) suggests that parents' occupation indirectly reflects their intellectual ability, which is inherited by their children. High-income parents in Kenya take their children to boarding schools where they get the best educational resources: enough study time while boarding, enough course and supplementary books, as well as teachers. Low-income parents take their children to public schools where truancy is common. The children may be sent home to replace old or buy new school uniforms, to bring money to pay teachers employed by parents, or to pay for monthly or end-of-term exams. Many children are thus kept out of classrooms, delaying syllabus coverage and hence leading to poor performance. Wali (2016) notes that children of parents from low socio-economic status households and communities develop academic skills more slowly than children from higher socio-economic groups. Juma (2016) found that parents from higher-income families take their children to school earlier, at the Early Childhood Development Education (ECDE) level, than their counterparts. Currently in Kenya, the Free Primary Education system, under which teachers, classroom construction, facilities such as desks and most other resources are provided to all public primary schools, does not include ECDE, which comprises three levels, namely baby, nursery and introductory classes. Juma (2016) observed that lower-income parents, on the other hand, prefer their children to start school later, from grade one onwards, where free primary education begins.
Usman, Mukhatar and Auwal (2016) investigated socio-economic status and academic performance in the Nigerian educational system. Nigeria has long been concerned about academic achievement, and in recent years parental socio-economic status has influenced achievement and the quality of education. Private schools, which demand high fees for students to join, are rising in number and performance, creating links between the socio-economic status of parents and the performance of students. Hence parental socio-economic status, parental involvement, family size, social stability, social stratification and interest in education are some of the home-based factors that should be considered in students' academic achievement. The study investigated the effect of parental socio-economic status on student achievement. A random sampling technique was used to select a sample of 80 students from four secondary schools in Kano State, Nigeria. Questionnaires were used to collect data, and correlation analysis was employed. The students were informed of the goals of the research, the confidentiality of their responses and how to respond to the questions or items on the questionnaire. The findings indicated a statistically significant relationship between parental socio-economic status and students' academic achievement (p < 0.05). The study recommended that the government sensitise the public on the need for gender equality in education, school fees regulation, parents' educational expenditure and incomes.
Odoh, Ugwuanyi and Chukwuani (2017) examined parents' economic status and the academic performance of accounting students in Nigerian universities. The study was motivated by a steady decline in the performance of some classes of accounting students in the country, which had raised concerns among well-meaning citizens. Its specific objective was to ascertain the extent to which parents' socio-economic status is related to the academic performance of students in Nigerian universities, and its scope was narrowed to students in the Department of Accountancy, University of Nigeria, Nsukka. A descriptive survey design was adopted for the study. The population was 150 final-year students in the Department of Accountancy at the University of Nigeria, from which a sample of 60 was selected using a non-probability purposive sampling technique. Data analysis was done with inferential statistics (Chi-square, χ²). The results indicate that parental socio-economic status was significantly related to the academic performance of accounting students in Nigeria, and that parental income level is positively and significantly related to students' academic performance in accounting studies. It was recommended, among other things, that the government establish policies for a better socio-economic climate for parents in Nigeria, which would help boost students' academic performance in the country. Gabriel et al. (2016) established the relationship between parental socio-economic factors and the academic achievement of students in Westlands District in Nairobi County. The study utilised the Classical Liberal Theory of Equal Opportunity and Social Darwinism. A descriptive survey design was used with a sample of 125 respondents comprising 91 students, 18 teachers and 16 parents. Questionnaires, interview schedules and focus group discussions were used to collect data. Both descriptive and inferential statistics were used in analysing the questionnaires, while the interview schedules were analysed thematically. The results indicated that physical and instructional resources were inadequate or in poor condition. There was a strong negative correlation between the occupation of parents and their ability to finance education, and a positive correlation between a good parent-teacher relationship and parental involvement in their children's academic achievement. The study concluded that parental occupation, involvement in learning activities and an effective parent-teacher relationship were facilitating factors, whereas parents' low ability to finance education, coupled with the poor state of physical and instructional resources, were inhibiting factors for students' academic achievement. The study recommended that the government strengthen partnerships between strategic education development partners to mobilise physical teaching and learning resources, and that scholarships and control of unemployment be enhanced to improve parents' socio-economic status. Gobena (2018) examined family socio-economic status and the academic achievement of students. A descriptive survey research design was employed with a sample of 172 students selected from the College of Education and Behavioural Sciences. The findings indicated that family income had no significant effect on students' academic performance, while there was a significant negative relationship between sex and the academic achievement of the student.
Family education level significantly affected the academic achievement of students. The study recommended that the government encourage families to support their children's access to education. The government has the prerogative to ensure equal education for both high and low economic status groups by harmonising the curriculum and giving public schools resources equal to those of private schools.
Abdu-Raheem (2015) researched the effect of parents' socio-economic status on secondary school students' academic performance in Ekiti State. The study adopted a descriptive survey research design targeting junior secondary school students in Ekiti State. Questionnaires were used to collect data from a sample of 960 students from 20 randomly selected secondary schools, and regression analysis was used to test the hypothesis. The results indicated that parents' socio-economic status had a significant effect on the academic performance of secondary school students. The study recommends that parents who are illiterate or have low literacy levels ensure that their children are provided with home lessons during holidays and weekends, and that the government embark on programmes or formulate policies that can bridge the academic gap between children of the rich and the poor.
The empirical literature on parents' economic status that focuses on household income and academic performance includes Akeri (2015), Wali (2016) and Juma (2016). Wali (2016) concentrates on parents' occupation and a child's education, while Juma (2016) concentrated on the ECDE level rather than primary. Usman, Mukhatar and Auwal (2016) concentrated on the socio-economic status of parents within the Nigerian educational system, and Odoh, Ugwuanyi and Chukwuani (2017) likewise worked in Nigeria but concentrated on university education. Gabriel et al. (2016) selected respondents from secondary schools rather than primary schools, while Gobena (2018) focused on family socio-economic status and students' academic achievement. Abdu-Raheem (2015) conducted their study in Ekiti State in Nigeria, where the study used purposive sampling of 48 students from 20 secondary schools. The current study was done in Kericho County, targeting public primary schools within the tea estates, and used parental income level to determine whether it affects the performance of public primary schools in the tea estates in Kenya.
RESEARCH METHODOLOGY
The correlational research design was deemed appropriate as it is identified with studies that yield data suitable for examining relationships among variables (Saunders, Lewis, & Thornhill, 2011). The research targeted 5 primary schools within the Chagaik zone representing the multinational tea estate within Kericho County, namely Mosobet, Kerenga, Tagabi, Kiptetan and Jamji primary schools. The target population was 336 standard seven and eight pupils, 55 members of the parents' association (PA), 5 headteachers and 5 deputy headteachers, making a total of 401 respondents. The sample size was 101 pupils, 55 parents' association members, 5 headteachers and 5 deputy headteachers. The pupils were selected using a simple random sampling technique based on the proportion of standard seven and eight pupils. A census was used to select the 55 members of the parents' association, 5 headteachers and 5 deputy headteachers from the 5 primary schools within the multinational tea company in Kericho. Questionnaires were administered to pupils, while two types of interview schedules were used for parents and headteachers. Means, percentages and standard deviations were used for descriptive statistics, and the Pearson correlation coefficient was used for inferential statistics, with the Statistical Package for the Social Sciences (SPSS) assisting in the analysis. The interview schedules were analysed using thematic analysis: the results from the interviews were grouped into themes and the content analysed.
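As a minimal sketch of the statistical workflow just described (the study itself used SPSS), the code below computes a mean, a sample standard deviation and a Pearson product-moment correlation; the Likert-style response vectors are hypothetical and do not reproduce the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert responses on parental economic status and
# corresponding pupil performance scores (illustrative values only).
economic_status = np.array([4, 3, 5, 2, 4, 3, 1, 5, 4, 3])
performance     = np.array([3, 3, 4, 2, 4, 2, 1, 5, 4, 3])

print("mean =", economic_status.mean())
print("sd   =", economic_status.std(ddof=1))   # sample standard deviation

# Pearson product-moment correlation coefficient with its p-value
r, p = stats.pearsonr(economic_status, performance)
print(f"r = {r:.3f}, p = {p:.3f}")
```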
RESULTS AND DISCUSSION
The questionnaires and the interviews with both parents and headteachers were analysed to examine the effect of parents' economic status on academic performance. The return rate was 90.1% (90 respondents) for the questionnaires and 100% for both interviews. The results from the three instruments were triangulated to investigate parents' economic status and academic performance. Descriptive statistics were obtained as means and standard deviations, which are discussed as per Table 1 below. The results showed that the majority of the parents worked in the tea estate and depended on income from the multinational tea companies. A mean of 3.2889 indicated that the majority did work in tea estates, where the multinational company formed the main source of finance, and the variation in this dependence was low, with a standard deviation of 1.47064.
According to the results, the income was moderately sufficient for school and home running expenses (mean of 3.6667). Based on the responses of the 90 respondents, the standard deviation of 1.16117 revealed relatively low variation. That the income was only moderately sufficient means it was not fully adequate for running home expenses and meeting school expectations.
In response to whether family expenses were manageable given family size and income level, the results showed that they were manageable only to a small extent (mean of 3.1556). The variation in the manageability of family expenses by size and income level was low (standard deviation of 1.26234). The payment from the work done is thus barely sufficient to take care of basic needs, and there is a need to empower people within the estate to ensure sufficient resources. According to the responses, families had financial problems most of the time to a small extent (mean of 3.0000). Financial issues associated with socio-economic problems directly affect the pupils. A standard deviation of 1.28954 showed that the variation in financial problems was low. Family financial problems should be addressed by the main source of finance, the employer, through a review of compensation packages.
The results also revealed that pupils were affected only to a small extent by their parents' income while studying in school (mean of 2.6333). The standard deviation of 1.40984 showed that the variation in this effect was low. Hence, parents' income does not have a direct effect on pupils' studying in school, although the lack of resources and basic needs affects pupils psychologically and thus their study patterns.
Correlation analysis of parental economic status with parent literacy, parenting structure and family type is presented in Table 2 below. The results in Table 2 indicated that parental economic status had a significant effect on academic performance (P = 0.005 < 0.05). The correlation between parental economic status and academic performance was positive (R = .594), implying that improvement in parental economic status leads to improvement in academic performance. Parental economic status also had a positive significant relationship with parent literacy (P = .017 < .05, R = .251); therefore, an increase in parental literacy improves the parent's economic ability and status.
The interviews given to the parents showed that families depended on the parents' income to a great extent (mean of 4.000), with low variation (standard deviation of 0.75593). This implies that the income generated by the parents was heavily relied upon for the daily running of family needs. The interviews also found that children were affected to a great extent by the amount of income (mean of 3.7500), and the variation of this effect was low (standard deviation of 0.70711). This shows that parents' income was associated with pupils' psychological well-being.
The interview given to the headteachers was unstructured, and the responses were coded per headteacher or deputy headteacher from 1 to 5. In response to the question 'Do most of the parents have problems paying fees (activity) for their children?', all the respondents agreed that parents have fee problems. Headteacher 1 indicated that "most of the family are over-reliant on money from tea estate, which makes it difficult to raise school fees." Deputy headteacher 3 pointed out that "the majority of children are always sent home for fees which have affected the pupil concentration in schools as well as psychological torment to them." All the respondents attributed the fee problem to over-dependence on salaries and wages from the multinational companies. The effect of parents' income has thus significantly affected pupils psychologically in their academics.
"Have there been reports of financial problems by pupils?"; all the headteachers and deputy headteachers reported financial problems. The problems were solved through support from the multinational company sponsorship, supporting through teachers' welfare kitty and others were allowed to continue with a promise of the parent to pay at a later date. Headteacher 2 claimed that, "There have been a number of pupils who have problems with fee as well as the inability to have basic needs. We have sorted the problems by recommending them for sponsorship for such pupils to the management of the multinational tea company".
Deputy headteacher 1 reiterated that "Yes, this issue sometime does not have a solution and we allow the pupils to continue with a promise that they were clear before the end of the term. We sometimes recommend the pupils to County Government and management of the multinational companies for support".
Usman, Mukhatar and Auwal (2016) concurred that parents' economic status affected students' academic performance in the Nigerian educational system. The current research finding was associated with low income earned from the multinational tea companies, which affected school fees and even necessities. Even though the findings of Usman et al. (2016) are not associated with employment in tea estates, the challenges were similar to those in the current research, where school fees and educational resources were major challenges.
Odoh, Ugwuanyi and Chukwuani (2017) were also in line with the current research in finding that parents' economic status significantly affects academic performance. Although their research was done on accounting students at a Nigerian university, similar problems were cited, namely that the socio-economic condition of the parents is transferred to the student. Gabriel et al. (2016) found that the ability to finance education had a positive effect on good parent-teacher relations as well as children's academic achievements. Their finding showed no direct link to parents' economic status but indicates that low income inhibits the ability of parents to finance education and provide the physical facilities and instructional resources necessary for academic performance.
This concurs with the current research that low parental income transfers the problem to school fees, the provision of basic needs and learning materials, significantly affecting pupils' academic performance.
On the contrary, Gobena (2018) found that family income did not have a significant effect on students' academic performance and recommended that students of poor economic status be accorded equal treatment with those of high economic status. This differed from Abdu-Raheem (2015), who found that socio-economic status significantly affected secondary students' performance. Therefore, parents' economic status is important, especially where it is directly linked with the provision of basic needs and school requirements that bear on student performance.
CONCLUSIONS AND RECOMMENDATIONS
The majority of parents worked in the tea estate and depended on the income they got from the multinational companies. The results showed that the tea estate has assisted in providing a source of finance through employment, especially in tea plucking. The income from employment was found to be sufficient for school and home running expenses only to some extent, and family expenses were manageable to a moderate extent given family size and income levels. Despite the finances being sufficient for family expenses and school fees to this extent, the families face financial challenges most of the time, implying that their salaries and wages are not sufficient for the family. The income of the parents did affect the studies of their children.
The interview results indicated that pupils depend on their parents' income from employment, and the amount of income affects the pupils' academic performance. Fee problems existed among the pupils, which affected their studies to some extent owing to over-dependence on salaries and wages from the multinational company. Sponsorship, the teachers' kitty and other support are among the few aids for parents who are not able to support their children, and pupils are sometimes locked out of education over school fees. Therefore, parents' economic status had a significant effect on pupils' academic performance.
The study concluded that parental economic status plays a significant role in academic performance, because the income of the parents controls the ability to meet home expenditure, facilitates school fee payment and ensures that the pupil's basic needs are met. Even where finances suffice to cover basic needs and pay school fees, the majority of the parents face financial challenges from emergencies and other problems. Therefore, parental economic status significantly affects pupils' academic performance.
The study recommends strategic partnerships between schools within the tea estates and the multinational tea companies so that students with fee problems can be assisted through the companies' corporate social responsibility. The tea companies should assist parents who have financial problems, working through the schools; this would not only help the family immediately but would also uplift its livelihood at a later stage. The findings revealed that parents' economic status has a significant effect on pupils' performance. The study also recommends that the County Government of Kericho intervene by increasing bursary allocations to children from the estates, given the poor salaries and wages that allow only hand-to-mouth consumption. Students of low economic status need to continue with both primary and secondary studies, and through such bursaries they can be assisted to better their lives. The study further advocates that the multinational companies consider the burden of school fees and review salaries according to the economic situation, which would allow pupils to obtain their basic needs as well as pay school fees. | 5,719 | 2020-06-12T00:00:00.000 | [
"Economics",
"Education",
"Sociology"
] |
Minocycline decreases CCR2-positive monocytes in the retina and ameliorates photoreceptor degeneration in a mouse model of retinitis pigmentosa
Retinal inflammation accelerates photoreceptor cell death caused by retinal degeneration. Minocycline, a semisynthetic broad-spectrum tetracycline antibiotic, has been previously reported to rescue photoreceptor cell death in retinal degeneration. We examined the effect of minocycline on retinal photoreceptor degeneration using c-mer proto-oncogene tyrosine kinase (Mertk)−/−Cx3cr1GFP/+Ccr2RFP/+ mice, which enabled the observation of CX3CR1-green fluorescent protein (GFP)- and CCR2-red fluorescent protein (RFP)-positive macrophages by fluorescence. Retinas of Mertk−/−Cx3cr1GFP/+Ccr2RFP/+ mice showed photoreceptor degeneration and accumulation of GFP- and RFP-positive macrophages in the outer retina and subretinal space at 6 weeks of age. Mertk−/−Cx3cr1GFP/+Ccr2RFP/+ mice were intraperitoneally administered minocycline. The number of CCR2-RFP positive cells significantly decreased after minocycline treatment. Furthermore, minocycline administration resulted in partial reversal of the thinning of the outer nuclear layer and decreased the number of apoptotic cells, as assessed by the TUNEL assay, in Mertk−/−Cx3cr1GFP/+Ccr2RFP/+ mice. In conclusion, we found that minocycline ameliorated photoreceptor cell death in an inherited photoreceptor degeneration model due to Mertk gene deficiency and has an inhibitory effect on CCR2 positive macrophages, which is likely to be a neuroprotective mechanism of minocycline.
Introduction
Inflammation in the central nervous system, as well as in the retina, is considered a complicating factor in degenerative diseases [1][2][3][4][5]. Moreover, retinal inflammation is considered to accelerate photoreceptor cell death (PCD) in retinal degeneration (RD), including age-related macular degeneration and retinitis pigmentosa [6]. Hence, the management of inflammation is pivotal and presumably beneficial for patients with RD, and the elucidation of inflammatory mechanisms for the management of RD is a major research focus. Minocycline, a semisynthetic broad-spectrum tetracycline antibiotic, has anti-inflammatory properties [7]. Several studies, including our own, have demonstrated that minocycline can ameliorate PCD in RD [8][9][10]. However, the mechanism of PCD rescue by minocycline remains largely unknown; two potential mechanisms have been suggested: an anti-apoptotic mechanism and an anti-inflammatory mechanism [11]. The innate immune system, which exerts a rapid non-specific response to an antigen, has been implicated in the development of RD, including human age-related macular degeneration and retinitis pigmentosa [12]. In healthy retinas, microglia, the guardians of the retina located in the outer and inner plexiform layers, maintain retinal homeostasis [12]. In RD, however, microglia are activated and migrate to the outer retina and subretinal space, the space between the outer segments of photoreceptors and the retinal pigment epithelium (RPE). Minocycline inhibits both microglial activation and migration; hence, microglial suppression by minocycline is considered the major mechanism of PCD rescue [11]. However, minocycline is not a specific drug, and other protagonists of retinal inflammation (i.e., bone marrow-derived macrophages) also invade the outer retina [10,13,14]. Therefore, delineating the cause of PCD and the mechanism by which minocycline rescues photoreceptor cells in the degenerative stage is important.
Recently, we generated c-mer proto-oncogene tyrosine kinase (Mertk) −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mice. This enabled the observation of CX3CR1-green fluorescent protein (GFP)- and CCR2-red fluorescent protein (RFP)-positive cells in inherited RD without light damage or the requirement of any non-physiological procedures such as doxycycline administration (widely used for tetracycline-controlled transcriptional activation) [15]. Before RD occurs, only Cx3cr1 expression is observed, which corresponds to resting microglia [16]. In progressive RD, Ccr2 expression is markedly increased [15].
In this study, we found that minocycline administration to Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mice not only ameliorated PCD but also reduced the number of CCR2-RFP-positive cells in the outer retina and subretinal space. These results indicate that Ccr2 suppression is one of the mechanisms of photoreceptor protection by minocycline.
Equal numbers of male and female mice were used in this study. All mice were housed in the animal facility at the Jikei University School of Medicine and were maintained under a
Minocycline administration
Minocycline was purchased commercially from Sigma-Aldrich (St. Louis, MO, USA) and was dissolved in phosphate-buffered saline. Thereafter, minocycline was intraperitoneally administered once daily to Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mice (age: 4-6 weeks). The dose of minocycline was either 50 or 100 mg/kg. Phosphate-buffered saline was administered to the control group.
Flat-mount retina and RPE preparation
All procedures for retinal and RPE flat-mounts were performed as previously described [10]. Images of flat-mounts were captured using a confocal microscope (LSM; Carl Zeiss, Thornwood, NY, USA). Regarding retinal flat-mounts, images of the entire retina were captured at 5 μm intervals and all images were projected in one slice. Regarding RPE flat-mounts, images of the entire visible RPE were captured at 3 μm intervals and projected in one slice.
Histological analysis
All retinal sections were prepared as previously described [10,17]. The numbers of CX3CR1-GFP- and CCR2-RFP-positive cells were counted using ImageJ (National Institutes of Health, Bethesda, MD, USA). To detect RFP, anti-RFP mouse antibody (MBL, M165-3) was used as the primary antibody, and the signals were visualized using a horseradish peroxidase-tagged anti-mouse IgG antibody (GE Healthcare, NA9310V) and a peroxidase-diaminobenzidine kit (Nacalai Tesque, Kyoto, Japan). Immunohistochemical images were captured using a confocal microscope (LSM 880; Carl Zeiss, Thornwood, NY, USA).
Apoptosis assays
Retinas were frozen, sectioned, and subjected to the TUNEL assay using an in situ apoptosis detection kit (MK500, Takara Bio, Shiga, Japan).
Flow cytometry analysis
The effect of minocycline on circulating monocytes was examined by flow cytometry analysis. Peripheral blood was collected from 16-week-old Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mice and was incubated with anti-Ly6C conjugated APC-Cy7 antibody (BioLegend, 128025) on ice for 30 min. Thereafter, cells were washed and stained with propidium iodide to exclude dead cells and analyzed using a BD FACSAria III Cell Sorter (BD Biosciences, San Jose, CA, USA). Data were analyzed using FlowJo software version 10.7.1 (Tree Star, Ashland, OR, USA).
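For illustration only, the sketch below mimics the gating logic described above (the actual analysis was done in FlowJo): dead, propidium-iodide-positive events are excluded and the Ly6C-positive fraction of the remaining cells is computed; the simulated intensities and the gate positions are hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)                        # arbitrary seed for reproducibility
pi   = rng.lognormal(mean=1.0, sigma=0.8, size=5000)  # simulated propidium iodide intensity
ly6c = rng.lognormal(mean=2.0, sigma=1.0, size=5000)  # simulated Ly6C (APC-Cy7) intensity

pi_gate, ly6c_gate = 10.0, 20.0                       # hypothetical gate positions
live = pi < pi_gate                                   # exclude PI-positive (dead) cells
ly6c_pos_fraction = np.mean(ly6c[live] > ly6c_gate)
print(f"Ly6C+ fraction of live cells: {ly6c_pos_fraction:.2%}")
```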
Data analysis
Continuous variables are presented as mean ± standard deviation. The Steel-Dwass test was performed for non-parametric multiple comparisons between the groups. All statistical analyses were performed using the statistical program R (version 4.0.3; R Foundation for Statistical Computing). Statistical significance was set at P < 0.05.
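As a rough stand-in for the Steel-Dwass test that the authors ran in R (not reproduced here), the sketch below performs pairwise Mann-Whitney U tests with a Bonferroni correction, which addresses the same non-parametric multiple-comparison problem; the group values are hypothetical.

```python
from itertools import combinations
from scipy import stats

# Hypothetical per-animal measurements for the three treatment groups
groups = {
    "control": [42, 38, 45, 40, 37],
    "mino50":  [55, 60, 52, 58, 49],
    "mino100": [61, 64, 57, 66, 59],
}

pairs = list(combinations(groups, 2))
for g1, g2 in pairs:
    u, p = stats.mannwhitneyu(groups[g1], groups[g2], alternative="two-sided")
    p_adj = min(p * len(pairs), 1.0)   # Bonferroni-adjusted p-value
    print(f"{g1} vs {g2}: U = {u:.1f}, adjusted p = {p_adj:.3f}")
```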
Characterization of the Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mouse retina
In Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mice, the descriptions "4-week-old" and "6-week-old" correspond to the non-degenerative retinal stage and the ongoing RD stage, respectively [15] (Fig 1C-1E). In 4-week-old mice, only CX3CR1-GFP single-positive cells were visible in the inner retina (Fig 1C); neither CX3CR1-GFP- nor CCR2-RFP-positive cells were observed in the outer retina and subretinal space. In 3-month-old mice, RD, represented by thinning of the outer nuclear layer (ONL), was observed (Fig 1D). The number of ONL nuclei decreased from approximately 12 at 4 weeks to 1-4 at 3 months. Abundant CX3CR1-GFP- and CCR2-RFP-positive cells were observed in the ONL and subretinal space (Fig 1D), and some cells were CX3CR1 and CCR2 double-positive. In 1.5-year-old mice, almost all nuclei in the ONL had disappeared, indicating severe RD (Fig 1E). The frequency of CX3CR1-GFP- and CCR2-RFP-positive cells was lower than during the ongoing degeneration stage (e.g., from 6 weeks to 3 months).
Minocycline administration reduced the number of CCR2-positive cells in neural retina
The numbers of CCR2-RFP-positive cells and of CX3CR1-GFP and CCR2-RFP double-positive cells were lower in the 50 and 100 mg/kg minocycline-treated (Mino50 and Mino100) groups than in the control group (Fig 2E and 2F). In retinal flat-mounts, retinal layer boundaries are difficult to observe; however, many CCR2-RFP-positive cells were observed in the outer plexiform layer and ONL in the 3D images of the control group (Fig 2B), whereas in the 3D image of the Mino100 group, CCR2-RFP was hardly detected (Fig 2C). In contrast, the number of CX3CR1-GFP-positive cells was not affected by minocycline administration (Fig 2D).
Minocycline administration reduced the number of CCR2 positive cells in the subretinal space
RPE flat-mounts were prepared to observe the apical side of the RPE, corresponding to the subretinal space (Fig 3) [10,15,18]. In the RPE flat-mounts from 4-week-old Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ and Mertk +/+ Cx3cr1 GFP/+ Ccr2 RFP/+ mice that did not show RD, neither CX3CR1-GFP positive cells nor CCR2-RFP positive cells were observed [15]. Abundant CX3CR1-GFP positive cells were observed in the subretinal space in the control, Mino50, and Mino100 groups (Fig 3A). The number of CCR2-RFP positive cells decreased in the Mino50 and Mino100 groups compared with the control group (Fig 3E and 3F); however, the number of CX3CR1-GFP positive cells did not change in all the groups (Fig 3D), indicating that minocycline administration probably did not affect the migration of CX3CR1-GFP positive cells to the subretinal space but suppressed the migration of CCR2-positive cells.
Thereafter, we used a colorimetric assay to confirm that the fluorescent signals in the subretinal region were not caused by auto-fluorescence. Immunohistochemistry was performed on frozen retinal sections from 3-month-old Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mice using the peroxidase-diaminobenzidine method. Diaminobenzidine signals were observed in the outer retina (Fig 4A), consistent with the fluorescence observations (Fig 1D). Diaminobenzidine staining revealed that minocycline treatment decreased the number of CCR2-RFP positive cells (Fig 4B and 4C).
Amelioration of PCD by minocycline administration
Finally, we evaluated the therapeutic effects of minocycline in Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mice. Previously, we reported PCD amelioration by minocycline administration in a light-induced acute RD mouse model (Abca4 −/− Rdh8 −/− mice) [10]; however, the therapeutic effect of minocycline in inherited RD due to Mertk gene deficiency was unknown. First, minocycline was administered to 4-week-old Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mice for 2 weeks. However, the severity of PCD did not differ between the minocycline-treated and control mice, because PCD is still relatively mild at the age of 6 weeks in Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mice. Thereafter, minocycline (50 mg/kg) or PBS was administered daily for 2 weeks from the age of 6 weeks to Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mice (Fig 5). Minocycline-treated Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mouse retinas showed fewer CCR2-RFP positive cells in the subretinal region compared with control mouse retinas (Fig 5A). The thickness of the ONL was retained in the central region but not in the peripheral region (Fig 5B and 5C), indicating amelioration of PCD by minocycline. In addition, minocycline significantly suppressed the proportion of TUNEL-positive photoreceptor cells (Fig 5D and 5E). The ONL thicknesses in 4-week-old Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ and wild-type (B6) mice are shown as negative controls in S1 Fig.
Long-term minocycline administration
To examine whether minocycline has protective effects against photoreceptor degeneration at a later stage in the Mertk knockout model, Mertk −/− Cx3cr1 GFP/+ Ccr2 RFP/+ mice were treated with minocycline from 8 to 16 weeks of age. In the control group, CCR2-RFP positive cells accumulated in the subretinal space (Fig 6A). In contrast, fewer CCR2-RFP positive cells were observed in the retinas of minocycline (50 and 100 mg/kg)-treated mice (Fig 6A and 6B). The ONL thickness in minocycline-treated retinas was retained, especially in the central region of the retina (Fig 6C), and the numbers of cells in the ONL of the Mino50 and Mino100 groups were significantly higher than in the control group (Fig 6D).
To examine whether minocycline treatment affects circulating monocytes, we examined Ly6C positive monocytes in the peripheral blood of control, Mino50, and Mino100 treated Mertk -/-Cx3cr1 GFP/+ Ccr2 RFP/+ mice (S2A Fig). We found no significant difference in the proportion of Ly6C positive cells among control and minocycline-treated mice (S1B Fig).
Discussion
Minocycline is considered an anti-inflammatory and neuroprotective drug candidate for RD [8,19]. In this study, we investigated the therapeutic effects of minocycline on RD induced by Mertk knockout. These mice carry GFP and RFP in the Cx3cr1 and Ccr2 loci, respectively, allowing us to monitor and evaluate microglia and macrophages during degeneration. We found that minocycline administration reduced the number of CCR2-RFP positive cells and partially protected against photoreceptor degeneration.
In Mertk knockout models, CCR2-RFP single-positive and CCR2-RFP and Cx3cr1-GFP double-positive cells accumulated in the outer retina and subretinal space, consistent with the findings of our study [15]. CCR2 is regarded as an essential chemokine receptor for macrophage recruitment to inflammation sites [20] and is detectable in monocyte-derived macrophages but not in resident microglia [21]. Therefore, the CCR2-RFP positive cells observed in the current study were considered macrophages. However, it should be noted that the localization of monocyte-derived macrophages in RD varies among degeneration models; the retinal ischemia-reperfusion model showed accumulation of macrophages mainly in the ganglion cell layer and inner plexiform layer [22], whereas light damage recruited only activated resident microglia, but not monocyte-derived macrophages, into the subretinal space [23]. These findings suggest that the effects of minocycline vary depending on the degeneration model.
Recently, minocycline was reported to act on the CCL2/CCR2 pathway and on macrophage infiltration. In primary microglia (in vitro) and neuropathic pain rat models (in vivo), minocycline suppressed the upregulation of CCL2 and CCR2 [24]. In a mouse model of cerebellar hemorrhage, infiltration of monocytes and macrophages into the cerebellum was decreased by minocycline treatment [25]. In the current study, a significant reduction in the number of CCR2-RFP positive cells in both the retina and the RPE was observed in flat-mounts. Nevertheless, whether inhibition of the CCL2/CCR2 pathway in macrophages and of monocyte infiltration suppresses RD remains unclear. The anti-inflammatory and neuroprotective effects of CCL2/CCR2 suppression in RD have been reported [15,26]; however, one study reported that elimination of the CCL2/CCR2 pathway inhibited monocyte infiltration but did not block neurodegeneration in a light injury mouse model [13]. This issue should be resolved to elucidate the neuroprotective effects of minocycline.
We found that minocycline had a dose-dependent neuroprotective effect on photoreceptor cells (Figs 5 and 6). However, it should be noted that no significant difference was observed in the peripheral ONL thickness between control and minocycline-treated retinas. These findings suggest that minocycline was not completely protective against photoreceptor cell degeneration, as indicated by the 20%-30% of photoreceptor cells that remained TUNEL-positive after minocycline treatment (Fig 5D and 5E).
Because macrophages infiltrating the retina are thought to subsequently reduce Ccr2 expression [21], it is possible that our count of macrophages was confounded by the altered expression level of Ccr2. To accurately verify the extent to which minocycline inhibits monocyte infiltration, an additional technique that specifically labels all retinal macrophages should be used to distinguish them from resident microglia.
In the current study, we investigated the neuroprotective effects of minocycline using a Mertk knockout mouse model. Consequently, minocycline ameliorated PCD in inherited RD due to Mertk gene deficiency and reduced the total number of CCR2-RFP positive monocyte-derived macrophages accumulating in the outer retina and subretinal space. The suppression of the chemokine receptor CCR2 in retinal macrophages might be one of the neuroprotective mechanisms of minocycline. Further studies are warranted to examine whether minocycline decreases the number of infiltrating monocytes or suppresses Ccr2 expression in monocytes. | 3,442.6 | 2021-04-22T00:00:00.000 | [
"Medicine",
"Biology"
] |
Multiparameter quantum metrology and mode entanglement with spatially split nonclassical spin states
We identify the multiparameter sensitivity of split nonclassical spin states, such as spin-squeezed and Dicke states spatially distributed into several addressable modes. Analytical expressions for the spin-squeezing matrix of a family of states that are accessible by current atomic experiments reveal the quantum gain in multiparameter metrology, as well as the optimal strategies to maximize the sensitivity. We further study the mode entanglement of these states by deriving a witness for genuine $k$-partite mode entanglement from the spin-squeezing matrix. Our results highlight the advantage of mode entanglement for distributed sensing, and outline optimal protocols for multiparameter estimation with nonclassical spatially-distributed spin ensembles.
I. INTRODUCTION
Quantum metrology makes use of non-classical quantum states to enhance measurement precision [1][2][3][4][5][6]. The estimation of a single parameter, e.g., a phase shift in an atomic clock or interferometer, can be made more precise if the atomic spins are prepared in entangled superposition states that have lower quantum fluctuations than classical states. Recently, these ideas have been extended to the problem of multiparameter estimation, where a collective quantum enhancement from a simultaneous estimation of several parameters can be achieved [7][8][9][10][11][12][13][14]. While the sensitivity limits for general multiparameter scenarios are hard to determine due to the non-commutativity of the observables that provide maximal information on different parameters, this problem can be avoided when all parameters are encoded locally (i.e., the parameter-encoding Hamiltonians commute with each other) [15,16]. In this case, sometimes also called "distributed sensing", the collective quantum enhancement can be traced back to the entanglement between the modes where the parameters are encoded [10]. Entanglement in addressable modes can be generated by distributing an ensemble of atomic spins into M spatial modes. This technique has been studied recently both experimentally [17][18][19] and theoretically [20][21][22] for the case of split spin-squeezed ensembles that can be generated by a nonlinear (one-axis twisting) evolution [23].
FIG. 1. Multiparameter estimation with a spatially distributed nonclassical spin ensemble. Each localized spin ensemble occupies a different spatial mode $k = 1, \ldots, M$ (a) and is subject to a different local electromagnetic field strength (b). The spins therefore experience a different phase shift $\theta_k$ in each mode (c). Strategies to improve the collective measurement sensitivity consist in particle entanglement (d), i.e., entanglement among spins confined to the same mode $k$, and mode entanglement (e), i.e., spin entanglement that is shared between spins in different modes $k \neq l$.
For single-parameter estimation, the sensitivity gain and the spin entanglement of spin-squeezed states is efficiently captured by the Wineland spin-squeezing parameter [24]. The generalization of this concept to a spin-squeezing matrix quantifies the metrologically relevant quantum fluctuations in the context of multiparameter quantum metrology [25].
In this article, we identify the multiparameter squeezing matrix of nonclassical spin states split into multiple addressable modes, which are routinely prepared in existing platforms with atomic ensembles, such as, e.g., Bose-Einstein condensates (BECs). We provide exact analytical expressions for the spin-squeezing matrix of spin-squeezed states that are distributed over multiple spatial modes. We distinguish between deterministic and beam-splitter-like distributions of atoms that differ in their partition noise. Furthermore, we introduce a metrological witness for entanglement depth and use it to identify the number of entangled modes from the spin-squeezing matrix. To gauge the ability of the squeezing matrix to describe the full multiparameter sensitivity, we compare it to the quantum Fisher matrix. Finally, we discuss possible paths towards a generalization of the spin-squeezing matrix to measurements of nonlinear spin observables and apply it to split Dicke states, whose quantum fluctuations cannot be described by the squeezing of linear spin observables.
II. MULTIPARAMETER SENSITIVITY AND SPIN SQUEEZING MATRIX
Assume that a set of $M$ parameters $\boldsymbol\theta = (\theta_1, \ldots, \theta_M)^T$, with $k = 1, \ldots, M$, is encoded into $M$ spatially separated modes by local rotations. These parameters could, for instance, represent an electromagnetic field at different positions, see Fig. 1. Each rotation is expressed in terms of local collective spin operators $\hat J_{\alpha,k} = \sum_{i=1}^{N_k} \hat\sigma^{(i)}_{\alpha,k}/2$, where $\hat\sigma^{(i)}_{\alpha,k}$ are the Pauli matrices $\alpha = x, y, z$ for the $i$th atom, and $N_k$ is the number of two-level atoms in mode $k$, such that $N = \sum_k N_k$. We consider a parameter-imprinting evolution $\hat U(\boldsymbol\theta)$ transforming an initial quantum state $\hat\rho$ into $\hat\rho(\boldsymbol\theta) = \hat U(\boldsymbol\theta)\,\hat\rho\,\hat U(\boldsymbol\theta)^\dagger$, where the local generators are $\hat J_{r_k,k} = \mathbf r_k^T \hat{\mathbf J}_k$, with $\mathbf r_k = (r_{x,k}, r_{y,k}, r_{z,k})^T$ and $\hat{\mathbf J}_k = (\hat J_{x,k}, \hat J_{y,k}, \hat J_{z,k})^T$ for $k = 1, \ldots, M$.
In order to estimate the parameters $\theta_k$, we consider the simultaneous measurement of a vector of local observables $\hat{\mathbf J}_s = (\hat J_{s_1,1}, \ldots, \hat J_{s_M,M})^T$. A straightforward way to construct estimators $\theta_{\mathrm{est},k}$ for all parameters $\theta_k$ is to compare the sample average of repeated measurements of $\hat{\mathbf J}_s$ with its mean value, which is known from calibration. In the central limit, i.e., after $\eta \gg 1$ repetitions, we obtain a multiparameter estimation error of [25] $\Sigma = (\eta\,\mathcal M[\hat\rho, \hat{\mathbf J}_r, \hat{\mathbf J}_s])^{-1}$, where $\Sigma_{kl} = \mathrm{Cov}(\theta_{\mathrm{est},k}, \theta_{\mathrm{est},l})$ is the estimator covariance matrix and $\mathcal M[\hat\rho, \hat{\mathbf J}_r, \hat{\mathbf J}_s]$ is the moment matrix. The latter contains the inverse of the covariance matrix $\Gamma[\hat\rho, \hat{\mathbf J}_s]_{kl} = \tfrac{1}{2}\big(\langle \hat J_{s_k,k}\hat J_{s_l,l}\rangle_{\hat\rho} + \langle \hat J_{s_l,l}\hat J_{s_k,k}\rangle_{\hat\rho}\big) - \langle \hat J_{s_k,k}\rangle_{\hat\rho}\langle \hat J_{s_l,l}\rangle_{\hat\rho}$, and the commutator matrix. Throughout this article, we define our reference frame for each mode $k$ such that $\mathbf r_k$ and $\mathbf s_k$ are orthogonal vectors in the $yz$ plane, while the mean-spin direction defines the $x$ direction.
The matrix $\Sigma$ contains information about the estimation error for arbitrary linear combinations $\mathbf n^T\boldsymbol\theta$ of the parameters, since $(\Delta(\mathbf n^T\boldsymbol\theta_{\rm est}))^2 = \mathbf n^T \Sigma\, \mathbf n$. Therefore, the essential information about multiparameter sensitivity is contained in the moment matrix $\mathcal M$.
A. Spin-squeezing matrix
In order to motivate the construction of the spin-squeezing matrix, let us first briefly recall the Wineland et al. spin-squeezing parameter that expresses the sensitivity gain of single-parameter measurements. For $M = 1$, the expression (2) reduces to $(\Delta\theta_{\rm est})^2 = (\Delta\hat J_s)^2_{\hat\rho}/(\mu\langle\hat J_x\rangle^2_{\hat\rho})$. An optimal classical strategy, i.e., in the absence of quantum entanglement, is given by a coherent spin state [6] and achieves an estimation error $(\Delta\theta_{\rm est})^2_{\rm SN} = (\mu N)^{-1}$ at the so-called shot-noise limit. The entanglement-induced quantum enhancement beyond this classical limit is quantified by the Wineland et al. spin-squeezing parameter [24], $\xi^2[\hat\rho,\hat J_r,\hat J_s] = N(\Delta\hat J_s)^2_{\hat\rho}/\langle\hat J_x\rangle^2_{\hat\rho}$ (5). Any violation of the shot-noise condition $\xi^2[\hat\rho,\hat J_r,\hat J_s] \ge 1$ witnesses entanglement among the spins [26,27] and indicates a quantum gain for estimations of the unknown phase parameter $\theta$, generated by $\hat J_r$, from the measurement observable $\hat J_s$. A generalization of this idea leads to the spin-squeezing matrix [25]. In the considered scenario, the multiparameter shot-noise limit [10] is given by $\Sigma_{\rm SN} = (\eta F_{\rm SN})^{-1}$, where $F_{\rm SN} = \mathrm{diag}(N_1, \ldots, N_M)$. The estimation error (2) is therefore at or above the shot-noise limit, i.e., $\Sigma \ge \Sigma_{\rm SN}$, when $\mathcal M[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s] \le F_{\rm SN}$ (7). For square matrices $A$ and $B$, the condition $A \ge B$ expresses that $A - B$ is a positive semi-definite matrix. We write the condition (7) equivalently as [25] $\xi^2[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s] \ge \mathbb 1$ (8), where the elements of the $M \times M$ spin-squeezing matrix read $\xi^2[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s]_{kl} = \sqrt{N_k N_l}\,\big(\mathcal M[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s]^{-1}\big)_{kl}$ (9). The single-parameter spin-squeezing coefficient (5) is recovered for $M = 1$.
In multimode settings, it is possible not only to entangle particles in the same mode (particle entanglement), but also to introduce delocalized entanglement among particles that are distributed into different modes (mode entanglement) [10,22,28,29]. It has been realized that mode entanglement is a useful resource for achieving collective quantum enhancements for the estimation of linear combinations of parameters that are distributed over multiple modes [9,10].
Since the shot-noise limit can only be overcome by particle-entangled states [10], a violation of the condition (8) implies particle entanglement among the spins, but does not reveal the distribution of entanglement across the modes. A variety of entanglement witnesses suitable for the detection of mode entanglement are available [21,22,[30][31][32][33][34][35][36][37][38][39][40][41]. However, the spin-squeezing matrix also contains information about the correlations between modes in its off-diagonal entries [25]. Below, in Sec. II B, we show how a small modification to the spin-squeezing matrix can transform it into a quantitative witness for genuine multimode entanglement that is able to identify lower bounds on the number of entangled modes.
The spin-squeezing matrix (9) expresses the multiparameter sensitivity obtained by measurements of the angular momentum observables $\hat{\mathbf J}_s$. To gauge the ability of this measurement to extract the full metrological features of the quantum state $\hat\rho$ under consideration, we compare it to the quantum Fisher matrix $F_Q[\hat\rho,\hat{\mathbf J}_r]$, which represents an upper bound on the multiparameter sensitivity for any measurement strategy. Here, this upper bound can be saturated for a pure probe state, since all generators $\hat J_{r_k,k}$ commute with each other [15,16]. We obtain from the multiparameter quantum Cramér-Rao bound that the estimation error from an optimal measurement is above shot noise if $F_Q[\hat\rho,\hat{\mathbf J}_r] \le F_{\rm SN}$, or equivalently $\chi^{-2}[\hat\rho,\hat{\mathbf J}_r] \le \mathbb 1$, where $\chi^{-2}[\hat\rho,\hat{\mathbf J}_r] = F_{\rm SN}^{-1/2} F_Q[\hat\rho,\hat{\mathbf J}_r] F_{\rm SN}^{-1/2}$ and $F_Q[\hat\rho,\hat{\mathbf J}_r]$ is the quantum Fisher matrix. The moment-based approach gives rise to a lower bound on the sensitivity of an optimal measurement, i.e., $\mathcal M[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s] \le F_Q[\hat\rho,\hat{\mathbf J}_r]$. We hence obtain the hierarchy of conditions $\xi^2[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s] \ge \chi^2[\hat\rho,\hat{\mathbf J}_r] \ge \mathbb 1$ (12), where the first inequality holds for arbitrary states $\hat\rho$, and the second inequality is valid for shot-noise-limited multiparameter measurements, i.e., particle-separable states $\hat\rho$. The strongest condition to check these matrix inequalities is obtained by comparing the respective minimal eigenvalues, i.e., $\lambda_{\min}(\xi^2[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s]) \ge \lambda_{\max}(\chi^{-2}[\hat\rho,\hat{\mathbf J}_r])^{-1} \ge 1$ (13), where we used $\lambda_{\min}(\chi^2[\hat\rho,\hat{\mathbf J}_r]) = \lambda_{\max}(\chi^{-2}[\hat\rho,\hat{\mathbf J}_r])^{-1}$. We refer to $\lambda_{\min}(\xi^2[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s])$ as the collective squeezing, as it corresponds to the squeezing that can be achieved by the state $\hat\rho$ for the estimation of an optimal linear combination of parameters, which in turn is identified by the associated eigenvector [recall Eq. (4)]. The hierarchy (12) provides us with two pieces of information about multiparameter squeezing. First, a violation of the condition $\lambda_{\min}(\xi^2[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s]) \ge 1$ identifies a quantum sensitivity enhancement achieved by squeezing, and larger violations imply stronger quantum gains. Second, the difference between $\lambda_{\min}(\xi^2[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s])$ and $\lambda_{\max}(\chi^{-2}[\hat\rho,\hat{\mathbf J}_r])^{-1}$ quantifies the metrological quality of the chosen measurement observables $\hat{\mathbf J}_s$, i.e., their ability to extract the full sensitivity from the given quantum state. For pure states $\hat\Psi = |\Psi\rangle\langle\Psi|$ we can use $F_Q[\hat\Psi,\hat{\mathbf J}_r] = 4\Gamma[\hat\Psi,\hat{\mathbf J}_r]$ to obtain an explicit expression for $\chi^{-2}$ in terms of the covariance matrix of the generators.
B. Spin-squeezing matrix for mode entanglement
To derive a criterion for mode separability, we compare the multiparameter sensitivity to the limit achievable by mode-separable states, $F_{\rm MS}[\hat\rho,\hat{\mathbf J}_r]$ [10]. Following the procedure of the preceding Section, this condition for mode separability can be expressed equivalently in terms of a modified spin-squeezing matrix for mode separability, $\xi^2_{\rm MS}[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s]$ (17).
As we demonstrate in Appendix A, this construction can be generalized even further to reveal genuine multipartite entanglement among groups of at least $k$ modes. A pure state is called $k$-producible if it can be written as $|\Psi_{k\text{-prod}}\rangle = \bigotimes_{\alpha=1}^{b} |\psi_\alpha\rangle$, where each $|\psi_\alpha\rangle$ is an arbitrary quantum state of not more than $k$ parties. A density matrix is $k$-producible if it can be written as a convex linear combination of arbitrary $k$-producible pure states. It is possible to prove (see Appendix A) that any mode $k$-producible state must satisfy $\xi^2_{\rm MS}[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s] \ge \mathbb 1/k$. This inequality is violated if and only if the smallest eigenvalue of the matrix $\xi^2_{\rm MS}[\hat\rho,\hat{\mathbf J}_r,\hat{\mathbf J}_s]$ is smaller than $1/k$.
As before, we may compare this criterion to an analogous construction based on the quantum Fisher matrix to gauge the quality of the Gaussian characterization (17) of the state's entanglement properties. States that are k-producible satisfy F_Q[ρ̂_{k-prod}, Ĵ_r] ≤ k F_MS[ρ̂_{k-prod}, Ĵ_r]. Following the steps of Eqs. (10)-(12) analogously, we obtain the hierarchy (19) for any mode k-producible state, where χ^{-2}_MS[ρ̂, Ĵ_r] replaces χ^{-2}[ρ̂, Ĵ_r]; for a pure state we obtain the explicit expression given in Eq. (20).
III. SPLIT SQUEEZED STATES FROM ONE-AXIS-TWISTING
Squeezing represents the leading strategy to achieve quantum enhancements in quantum metrology experiments, from gravitational wave detectors [42] to atomic clocks [6]. In recent experiments, atomic squeezed spin states were distributed coherently into several addressable modes [17,18]. In this Section, we study the potential of this approach for multiparameter measurements, as well as the measurable signatures of mode entanglement, by determining the corresponding spin-squeezing matrices (9) and (17) analytically.
Generally, we distinguish between two different experimental procedures to achieve spatially distributed squeezed states. The first procedure was followed in the experiments [17][18][19] and consists of preparing a squeezed atomic state in a single spatial mode and then dividing this mode coherently into two or more modes via an operation that can be described as a beam splitter on spatial modes. This leads to a probabilistic distribution of atoms in the modes described by a multinomial distribution. As a consequence, partition noise will be present in the spin statistics. Alternatively, we also consider a second procedure, where the atoms are distributed deterministically over the spatial modes. The squeezed state may then be generated, e.g., by a collective interaction with a cavity [43] that affects all atoms in the same way, independently of their spatial mode. This procedure gives rise to a similar split spin-squeezed state, which, however, is free of partition noise.
A. Split squeezed states with partition noise
Consider an ensemble of N spin-1/2 particles, initially prepared in a coherent spin state polarized along the x direction, i.e., |N/2⟩_x with Ĵ_x|N/2⟩_x = (N/2)|N/2⟩_x. An evolution of this state generated by the one-axis twisting (OAT) Hamiltonian Ĥ = χĴ_z² for a time t = µ/(2χ) generates squeezing of the collective spin observables and introduces particle entanglement among the individual spins [6,23] in the state |Ψ(µ)⟩ = e^{−iĤµ/(2χ)}|N/2⟩_x. Note that the resulting dynamics is cyclic with period 2π, and therefore we limit our attention to the interval 0 ≤ µ < 2π. For small nonzero µ, the state |Ψ(µ)⟩ shows, along a direction s in the yz-plane, a smaller variance than the spin-coherent state, originating from the entanglement created by the nonlinear evolution, while remaining polarized along the x axis.
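As a concrete illustration, the following minimal NumPy sketch builds the collective spin operators in the symmetric subspace, applies the OAT evolution to |N/2⟩_x, and evaluates a Wineland-type squeezing parameter ξ² = N min Var(Ĵ_⊥)/⟨Ĵ_x⟩². The normalization is our assumption for the single-parameter coefficient of Eq. (5) and should be checked against the paper's definition.

```python
import numpy as np

def collective_spin(N):
    """Collective spin matrices Jx, Jy, Jz in the symmetric subspace (j = N/2),
    in the Jz eigenbasis ordered as m = j, j-1, ..., -j."""
    j = N / 2.0
    m = np.arange(j, -j - 1, -1)
    d = len(m)
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros((d, d), dtype=complex)          # <j, m+1| J+ |j, m> = sqrt(j(j+1) - m(m+1))
    Jp[np.arange(d - 1), np.arange(1, d)] = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    Jx = (Jp + Jp.conj().T) / 2
    Jy = (Jp - Jp.conj().T) / (2 * 1j)
    return Jx, Jy, Jz, m

def oat_squeezing(N, mu):
    """Squeezing parameter of exp(-i mu Jz^2 / 2)|N/2>_x, using the (assumed) Wineland-type
    normalization xi^2 = N * min Var(J_perp) / <Jx>^2."""
    Jx, Jy, Jz, m = collective_spin(N)
    vals, vecs = np.linalg.eigh(Jx)
    psi = vecs[:, np.argmax(vals)]                # coherent state polarized along +x
    psi = np.exp(-1j * mu * m**2 / 2) * psi       # OAT evolution (Jz is diagonal)
    ev = lambda A: float(np.real(psi.conj() @ (A @ psi)))
    # <Jy> = <Jz> = 0 for this state, so second moments are the variances
    c = ev((Jy @ Jz + Jz @ Jy) / 2)
    cov = np.array([[ev(Jy @ Jy), c], [c, ev(Jz @ Jz)]])
    var_min = np.linalg.eigvalsh(cov)[0]          # minimal variance in the yz-plane
    return N * var_min / ev(Jx) ** 2

if __name__ == "__main__":
    N = 100
    mus = np.linspace(1e-3, 0.5, 200)
    xi2 = [oat_squeezing(N, mu) for mu in mus]
    print(f"optimal mu ~ {mus[int(np.argmin(xi2))]:.3f}, min xi^2 ~ {min(xi2):.4f}")
```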
In this squeezed spin state, all particles are localized in space and occupy the same external (spatial) mode. By applying a beam-splitter transformation to the external mode, the correlated spins can be distributed into M addressable modes with a ratio determined by the probability distribution p 1 , . . . , p M , so that on average N k = p k N particles are localized in mode k. We denote the resulting M -mode state by |Ψ PN (µ) and use the no-tationΨ PN (µ) = |Ψ PN (µ) Ψ PN (µ)|, where the subscript PN indicates the presence of partition noise. The bipartite (M = 2) version of this scenario has been analyzed theoretically in Ref. [21] and experimentally with a BEC in Ref. [17]. In these works the focus has been the detection of (mode) entanglement and EPR steering between the two partitions, while here our goal is to characterize their potential for applications in multiparameter quantum metrology and to identify entanglement from the metrological properties.
To obtain the metrological properties for multiparameter sensing of this state, we determine all first and second moments of spin observables in each mode for the state Ψ̂_PN(µ). The local directions for the measurement s_k and the rotation r_k are chosen as the squeezed and anti-squeezed directions, respectively, corresponding to minimal and maximal eigenvectors of the local 2 × 2 covariance matrices in the yz-plane of each mode. The full expressions for first and second moments along arbitrary directions are provided in Appendix B, together with the angle specifying the directions s_k and r_k [see Eq. (B4)], which turn out to be independent of k. We obtain the moments collected in Eq. (21), where we defined the functions f⁻_N(µ) and f⁺_N(µ); it is easy to check that f⁻_N(µ) ≤ 0 and f⁺_N(µ) ≥ 0.
Spin-squeezing matrix
We first note that inserting Eq. (21) into Eq. (9) leads to an explicit expression for the spin-squeezing matrix, in which v = (√p₁, . . . , √p_M)^T is a unit vector and we have introduced the short-hand notation c_N(µ) = cos^{2N−2}(µ/2). The eigenvalues of this matrix can be easily identified: λ_min is non-degenerate for µ > 0 [recall that f⁻_N(µ) ≤ 0] with eigenvector v, and λ_max is (M − 1)-fold degenerate and corresponds to the eigenspace orthogonal to v. It is easy to verify that the collective squeezing coincides with the single-parameter spin squeezing (5) of the spin ensemble before the splitting, Eq. (24). The strongest suppression of quantum noise, i.e., the optimal quantum enhancement, is achieved for the estimation of a linear combination of parameters v^T θ, determined by the minimal eigenvector v. It is important to note that this vector can be manipulated, allowing us to tailor optimal states that are maximally sensitive for any fixed linear combination of parameters. To see this, first note that the absolute weight of each parameter is determined by the splitting ratio p_k. Second, the sign can be modified by applying local rotations: a π rotation around the x axis changes the sign of the k-th row and k-th column of the covariance matrix and thereby of the spin-squeezing matrix (9). Hence, such a rotation, which can be realized with high fidelity in atomic systems with external light fields, introduces a minus sign in the k-th component of the vector v. This allows us to engineer a split-squeezed state that maximizes the quantum gain for an arbitrary linear combination of parameters, as illustrated in the sketch below. Notice that this linear combination is not necessarily the same one that reaches the highest sensitivity, since the quantum gain in each parameter is normalized by the shot-noise limit, which depends on the local number of particles N_k. When this number is high, the sensitivity is high even if squeezing is only moderate. In order to directly optimize the sensitivity, we must focus on the moment matrix Eq. (3), which relates to multiparameter sensitivity via Eqs. (2) and (4).
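The sign-flip mechanism can be checked with a few lines of linear algebra. The toy matrix below is purely illustrative (its numbers are not taken from the text); the point is only that conjugating a symmetric matrix with S = diag(1, ..., −1, ..., 1) leaves the spectrum invariant while flipping the k-th component of every eigenvector.

```python
import numpy as np

def flip_mode_sign(xi2, k):
    """Congruence S @ xi2 @ S with S = diag(1, ..., -1, ..., 1): the effect of a local
    pi rotation about x in mode k on the (symmetric) spin-squeezing matrix. The spectrum
    is unchanged, and every eigenvector acquires a minus sign in its k-th component."""
    S = np.eye(xi2.shape[0])
    S[k, k] = -1.0
    return S @ xi2 @ S

# toy 3-mode matrix with uniform off-diagonal correlations (hypothetical numbers)
xi2 = np.full((3, 3), -0.2) + 1.2 * np.eye(3)
w1, V1 = np.linalg.eigh(xi2)
w2, V2 = np.linalg.eigh(flip_mode_sign(xi2, 1))
print(np.allclose(w1, w2))      # True: eigenvalues (hence the collective squeezing) unchanged
print(V1[:, 0], V2[:, 0])       # minimal eigenvectors agree up to a sign flip of component 1
                                # (and an irrelevant overall sign)
```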
Our analysis based on the squeezing matrix contains only Gaussian properties of the state, i.e., first and second moments of collective spin observables. We may gauge the ability of these expressions to efficiently capture the properties of these states by comparison with more general functions based on the quantum Fisher matrix, see Eqs. (12) and (19). Inserting Eq. (21c) into Eq. (13), we find the matrix (26). The matrix (26) has the (M − 1)-fold degenerate eigenvalue 1 and the non-degenerate eigenvalue (27). Note that (27) coincides with F_Q/N = 4(ΔĴ_r)²/N for one-axis twisting of a single mode with N particles after time µ [see Eq. (21c)]. We thus recover a multiparameter version of the well-known result that spin squeezing efficiently captures the metrological features of states that can be considered to a good approximation as Gaussian [44,45], corresponding to the early time scales of the OAT evolution.
Mode-entanglement spin-squeezing matrix
To analyze the mode entanglement using the modified squeezing matrix (17), we make use of the analytical expression for the anti-squeezed variances of split spin-squeezed ensembles, given in Eq. (21c) for k = l. For arbitrary {p_k}_{k=1}^{M}, we obtain an explicit matrix whose entries are determined by the elements w_k of the vector w = (w₁, . . . , w_M)^T and D_k of the diagonal matrix D. Strategies to analytically compute the eigenvalues of matrices of this form exist [46], but they are in general cumbersome. For simplicity, we focus on the case of an equal splitting ratio, i.e., p_k = 1/M for all k = 1, . . . , M. In this case, w_k and D_k no longer depend on k and D is proportional to the identity matrix. We find the non-degenerate minimal eigenvalue (31) with eigenvector e = (1, . . . , 1)^T/√M. Note that in the limit M → ∞, we recover Eq. (24). Intuitively, in this limit, each mode is populated by not more than a single particle, and thus the particle entanglement, which is detected by (24), becomes equivalent to the mode entanglement, detected by (31).
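Numerically, the spectrum of such a matrix is trivial to obtain even when the analytical route is cumbersome. The sketch below assumes a diagonal-plus-rank-one structure D ± w wᵀ, which is our reading of the form described above; the concrete entries of D and w (and the sign of the rank-one term) have to be taken from Eq. (17) evaluated for the state and are treated here as inputs.

```python
import numpy as np

def diag_plus_rank1_spectrum(d, w, sign=+1):
    """Spectrum of D + sign * w w^T, the diagonal-plus-rank-one structure suggested for the
    mode-separability squeezing matrix of split spin-squeezed states. The concrete entries
    of D and w (and the sign of the rank-one term) are assumptions taken as inputs here."""
    A = np.diag(np.asarray(d, dtype=float)) + sign * np.outer(w, w)
    return np.linalg.eigvalsh(A)

# equal splitting ratio: D proportional to the identity and w uniform, so one eigenvalue
# separates from an (M-1)-fold degenerate level, as found analytically in the text
M = 4
eigs = diag_plus_rank1_spectrum(np.full(M, 0.9), np.full(M, 0.25), sign=-1)  # hypothetical numbers
print(eigs)   # [0.65, 0.9, 0.9, 0.9]: non-degenerate eigenvalue with eigenvector (1,...,1)/sqrt(M)
```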
The mode entanglement criterion (18) is shown in Fig. 2. We compare the minimal eigenvalue (31) to the k-separable limit (18). To observe the strongest possible violation of the separability condition, we optimize the time evolution parameter µ such that (31) takes on its smallest possible value. The optimal squeezing time µ MS is generally shorter than the time µ opt that optimizes the quantum gain over the shot-noise limit, i.e., the minimal eigenvalue of (36), whereas both coincide in the limit M → ∞.
Again, we may gauge the quality of our Gaussian spin measurements by comparison with the quantum Fisher matrix via the hierarchy (19). From Eq. (21c) we can easily obtain the matrix defined in Eq. (20) in the most general case. In the case of an equal splitting ratio, p_k = 1/M, we obtain an explicit expression and find the non-degenerate maximal eigenvalue (33). We observe that, in the limit M → ∞, this eigenvalue approaches the maximum eigenvalue (27) of the matrix (26); hence, in this limit, we recover the single-mode result. The eigenvalues (31) and (33) are plotted in Fig. 2 as thick and semi-transparent dashed lines, respectively. We visually observe the hierarchy (19), and as the squeezing time µ increases, we are able to identify genuine multipartite entanglement among larger groups of at least k modes.
B. Split squeezed states without partition noise
Let us now turn to split squeezed states with a fixed number of particles in each mode. An OAT evolution that acts on all spins collectively, regardless of their spatial mode, generates a split-squeezed state Ψ̂_nPN(µ) that is free of partition noise. The analytical expressions for the spin expectation values of interest are listed in Appendix C. As in the previous case, we focus on the spin moments for the optimal directions for spin rotations r_k and measurements s_k, which correspond to the local anti-squeezed and squeezed spin directions, respectively. These directions are independent of k and coincide with those found previously in the presence of partition noise, since the mode splitting has no impact on the spin state. We obtain a spin-squeezing matrix of the same structure as before, with v = (√(N₁/N), . . . , √(N_M/N))^T. The eigenvalues are given in Eq. (37). Remarkably, the collective squeezing (37) coincides with that of (24), indicating that the presence of partition noise does not affect the quantum sensitivity advantage if the squeezing is exploited in an optimal way, i.e., for the linear combination v^T θ of parameters yielding the largest quantum gain. For comparison, from Eq. (13), we obtain the corresponding Fisher-matrix expression, whose non-degenerate maximal eigenvalue λ_max(χ⁻²[Ψ_nPN(µ), Ĵ_r]) = (N − 1)f⁺_N(µ) + 1 coincides with the maximum eigenvalue of (26).
Mode-entanglement spin squeezing matrix
For the analysis of mode entanglement using the modified squeezing matrix (17), we combine our previous results with the expression (35c) for the anti-squeezed variances. For arbitrary choices of the {N k } M k=1 , we find where A N (µ) = N + f + N (µ) l N l (N l − 1), and the elements of w = (w 1 , . . . , w M ) T and the diagonal matrix D are given as respectively.
For the special case of equal splitting, i.e., N k = N/M for all k, we obtain the non-degenerate minimal eigenvalue Comparison with Eq. (31) reveals that the presence of partition noise has an effect on the detection of mode entanglement from the spin-squeezing matrix (17). A splitsqueezed state without partition noise shows a slightly smaller minimal eigenvalue and thus reveals more entanglement at the same nonlinear evolution time µ according to the witness (18). A graphical comparison is given in Fig. 2, where Eq. (43) is displayed as the thick solid lines. From Eq. (20), we obtain for the criterion based on the Fisher information matrix for a uniform splitting ratio From this we get Comparison with (33) confirms that the influence of partition noise on the mode separability witness remains present when we consider an optimal measurement. The eigenvalues (45) are plotted in Fig. 2 as semi-transparent solid lines.
C. Sensitivity advantage offered by mode entanglement
Let us now compare local (mode-separable, Ms) and nonlocal (mode-entangled, Me) strategies for the estimation of an arbitrary linear combination of parameters n^T θ. From the results found above, we conclude that, independently of the presence of partition noise, an optimally designed nonlocal strategy can lead to a quantum gain that coincides with the single-parameter spin-squeezing coefficient of the initial spin ensemble before splitting, i.e., Eq. (46). For a given linear combination characterized by the coefficients n, this sensitivity is achieved by preparing the optimal nonlocal state ρ̂_Me,opt by splitting the maximally squeezed (i.e., the state minimizing ξ²[ρ̂, Ĵ_r, Ĵ_s]) initial spin ensemble ρ̂ with a splitting ratio p_k = n_k² and then applying local π-rotations in all modes with negative n_k. To identify the potential advantage of mode entanglement, we compare Eq. (46) to the quantum gain of an optimal mode-local squeezing strategy with the same average number of particles in each mode. In this case, the spin-squeezing matrix is diagonal, and the multiparameter quantum gain is given by the average of local quantum gains. The optimal local strategy consists of maximally squeezing each local spin ensemble, i.e., down to the minimum of the local squeezing coefficient ξ²[ρ̂_k, Ĵ_{r_k,k}, Ĵ_{s_k,k}], respectively. An advantage of mode entanglement for the estimation of n^T θ is indicated when the ratio of the respective optimized quantum gains is larger than one, i.e., when condition (48) holds. For a large number of particles N, the scaling of this figure of merit can be determined analytically. The single-parameter spin-squeezing coefficient for N particles at the optimal squeezing time behaves asymptotically as ξ² ≃ (3^{2/3}/2) N^{−2/3} [23,47].
[Figure 3. For the preparation of the optimal nonlocal probe state, the BEC is split equally into M modes after a squeezing evolution up to maximum squeezing. The local strategy consists of optimal local squeezing evolutions of individual BECs whose particle number N/M coincides with the average particle number in each mode of the nonlocal state. We plot N = 100 (blue), N = 10⁴ (orange), and N = 10⁶ (green). Bottom panel: the same ratios as a function of the total atom number N, for splitting into M = 2 (blue), M = 3 (orange), and M = 4 (green) modes. The red dashed line represents the analytical prediction (50) for N → ∞.]
Since the optimal mode-entangled strategy allows us to make use of the collective squeezing of all particles, we obtain ξ²_Me,opt ≃ (3^{2/3}/2) N^{−2/3}, whereas in each local mode we only have p_k N particles. We now focus on the case of the estimation of an equally weighted linear combination of parameters, i.e., |n_k| = 1/√M. The optimal splitting ratio for the nonlocal strategy in this case is also an equally weighted distribution of N/M atoms among all modes. Thus each local spin-squeezing parameter yields ξ²_Ms,opt ≃ (3^{2/3}/2)(N/M)^{−2/3}, so that the additional gain provided by mode entanglement is given by Eq. (50), i.e., it grows as M^{2/3} in this limit. The behavior of the quantum gain at numerically determined optimal squeezing times is compared to the analytical prediction Eq. (50) in Fig. 3. Condition (48) is fulfilled for arbitrary values of N and M, demonstrating the increased quantum gain that is offered by mode-entangled strategies. We further observe how the asymptotic prediction (50), which is shown as a red dashed line in both panels, is approached with increasing N.
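A two-line numerical check of this scaling, using the asymptotic form reconstructed above (and therefore only indicative at finite N), is given below; exact finite-N curves as in Fig. 3 would instead require optimizing the full OAT evolution, e.g., with the oat_squeezing routine sketched in Sec. III A.

```python
import numpy as np

def xi2_opt_asymptotic(n):
    """Asymptotic optimal OAT squeezing for n particles, (3**(2/3)/2) * n**(-2/3)."""
    return (3.0 ** (2.0 / 3.0) / 2.0) * n ** (-2.0 / 3.0)

def mode_entanglement_gain(N, M):
    """Ratio of optimal local (mode-separable) to nonlocal (mode-entangled) squeezing
    coefficients for an equally weighted combination of M parameters."""
    return xi2_opt_asymptotic(N / M) / xi2_opt_asymptotic(N)

for M in (2, 3, 4):
    print(M, mode_entanglement_gain(1e6, M), M ** (2.0 / 3.0))   # the last two numbers coincide
```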
IV. SPLIT DICKE STATES
In the previous Section we focused on applications with squeezed spin states that are well characterized by averages and variances of collective spin observables. This formalism is, however, no longer suitable for non-Gaussian spin states, such as Dicke states (see Fig. 4) that can also be generated experimentally in BECs [48,49]. For single-parameter measurements, the Wineland spin-squeezing coefficient has been generalized also to nonlinear measurements to account for the fluctuations of non-Gaussian states [45,50]. In Sec. IV A, we show how generalized squeezing matrices can be constructed from more general local measurement observables, beyond collective spin components. Then, in Sec. IV B 2, we apply this concept to split Dicke states. We observe that, in contrast to the case of Gaussian squeezed states, local measurements (even of nonlinear operators) are no longer able to capture the state's full multiparameter sensitivity due to the nonlinearity of the optimal observables.
A. Spin-squeezing matrices from nonlinear measurements
In order to generalize the construction of the spinsqueezing matrix and its variants, we consider the measurement of a vector of local observablesX s = (X s1,1 , . . . ,X s M ,M ) T . Here, the observablesX s k ,k may contain higher-order moments of the local collective angular moment observables in the mode k. The value of the phases θ, imprinted as before by a set of local collective spin operatorsĴ r , is estimated from the average results using the method of moments [25]. We obtain in the central limit (η 1 repeated measurements) a multiparameter sensitivity of where the moment matrix for such a nonlinear measurement is described as Since the separability limits are derived from generally valid upper sensitivity limits that depend only on the generators but not on the measurement observables, we can define the squeezing matrix, in direct analogy to the approach presented in Sec. II A, as and all particle-separable states must satisfy [25] which is equivalent to shot-noise-limited multiparameter sensitivities.
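A possible numerical realization of this moment matrix is sketched below. It assumes the common form M = Cᵀ Γ⁻¹ C, built from the covariance matrix Γ of the measured observables and the commutator matrix C with the generators; whether this matches the paper's Eq. (52) exactly (normalizations, index ordering) is an assumption on our part.

```python
import numpy as np

def moment_matrix(psi, X_ops, H_ops):
    """Moment matrix of the method of moments for a pure state psi, measurement observables
    X_ops and local phase generators H_ops. Assumed form: M = C^T Gamma^{-1} C, with
    Gamma_ab = Cov(X_a, X_b) and C_ak = -i <[X_a, H_k]>."""
    def ev(A):
        return complex(psi.conj() @ (A @ psi))
    Gamma = np.array([[(0.5 * ev(Xa @ Xb + Xb @ Xa) - ev(Xa) * ev(Xb)).real
                       for Xb in X_ops] for Xa in X_ops])
    C = np.array([[(-1j * ev(Xa @ Hk - Hk @ Xa)).real
                   for Hk in H_ops] for Xa in X_ops])
    return C.T @ np.linalg.solve(Gamma, C)   # = C^T Gamma^{-1} C (Gamma assumed invertible)
```

For purely linear observables X̂ = Ĵ_s this construction should reduce to the spin-squeezing matrix approach of Sec. II A.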
Following an analogous procedure as in Sec. II B, we define the mode-separability squeezing matrix as i.e., any mode k-producible state must satisfy These definitions hold for arbitrary choices of the local measurement observablesX s . Notice also that the definitions (13) and (20) based on the quantum Fisher matrix are unaffected by this generalization, since they are already independent of the chosen measurement observables by virtue of a systematic optimization.
B. Split Dicke states
The highly sensitive features of Dicke states [48] can be efficiently captured by a nonlinear spin measurement up to second order. In the following Sec. IV B 1 we identify the optimal second-order observable for arbitrary singlemode Dicke states. In Sec. IV B 2 we explore the potential of local measurements of this observable for multiparameter metrology with a split Dicke state and identify the limitations of local measurement strategies for multiparameter quantum metrology with non-Gaussian states that contain mode entanglement.
Single-mode Dicke states
To identify an optimal second-order measurement observable, we first focus on the estimation of a single parameter using a single-mode Dicke state. Generally, for any set of accessible observablesÂ, the maximally achievable sensitivity for estimations of an angle imprinted by the generatorĤ r = r ·Ĥ using the method of moments is given by (57) and the optimal linear combination within this operator family achieving this sensitivity is determined asX s = s ·X with [45] and α ∈ R is an arbitrary constant.
To capture the nonlinear features of a Dicke state in mode k, we add to the set of 3 linear measurement ob-servablesĴ k all symmetrized operators of second order, i.e., {Ĵ α,k ,Ĵ β,k }/2 with α, β ∈ {x, y, z}. We obtain a family of 9 operators that can be used to express arbitrary spin observables of second order. We note that symmetrized second-order operators can be extracted by measuring expectation values of (Ĵ x,k +Ĵ z,k ) 2 ,Ĵ 2 x,k and For the Dicke state |j, m withĴ z,k |j, m = m|j, m , considering the family of 9 observables up to second order A k and 3 first-order generatorsĴ k , it is straightforward to verify that the commutator matrix C[ρ,Ĵ k ,X k ] is zero everywhere except for This means that we can limit our attention to the family of measurement observableŝ X k = (Ĵ x,k ,Ĵ y,k , 1 2 {Ĵ x,k ,Ĵ z,k }, 1 2 {Ĵ y,k ,Ĵ z,k }) T . The symmetry of the Dicke states around the z axis further allow us to focus only on rotations generated byĴ x,k andĴ y,k . Restricting to the setX k furthermore removes the singularity of the full 9 × 9 covariance matrix Γ[|j, m , k ], and we obtain (see Appendix E for details) Due to the symmetry of Dicke states (see Fig. 4), the sensitivity 2(j(j + 1) − m 2 ) is independent of the rotation axis r k = (r x,k , r y,k , 0) T in the xy-plane. This sensitivity indeed coincides with the quantum Fisher information matrix of Dicke states thus demonstrating the optimality of the considered measurements. The optimal observable, however, depends on r k and readŝ
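The quoted sensitivity 2(j(j + 1) − m²) is easy to verify numerically: for the pure Dicke state |j, m⟩ it equals 4 Var(Ĵ_x), i.e., the quantum Fisher information for rotations about x. A minimal check:

```python
import numpy as np

def dicke_qfi_x(N, m):
    """4 * Var(Jx) in the Dicke state |j, m> with j = N/2; for a pure state this equals the
    quantum Fisher information for rotations generated by Jx. Compare with 2(j(j+1) - m^2)."""
    j = N / 2.0
    ms = np.arange(j, -j - 1, -1)                  # basis ordered m = j, j-1, ..., -j
    d = len(ms)
    Jp = np.zeros((d, d))
    Jp[np.arange(d - 1), np.arange(1, d)] = np.sqrt(j * (j + 1) - ms[1:] * (ms[1:] + 1))
    Jx = (Jp + Jp.T) / 2.0
    psi = np.zeros(d)
    psi[int(round(j - m))] = 1.0                   # the Dicke state is a single basis vector
    mean = psi @ Jx @ psi
    return 4.0 * (psi @ (Jx @ Jx) @ psi - mean ** 2)

N, m = 20, 3
print(dicke_qfi_x(N, m), 2 * (N / 2 * (N / 2 + 1) - m ** 2))   # both print 202.0
```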
Split Dicke states
We now try to extend these ideas to a multiparameter sensing protocol based on split multimode Dicke states, where in each mode k, an optimal local observable is measured, in analogy to the strategy discussed above for split squeezed states. We therefore suppose that each local parameter θ k is estimated from the measurement results of the observableX s k ,k = s k ·X k with s k = (−mr y,k , −mr x,k , r y,k , r x,k ) T chosen to match the optimal local measurement observable (62). The rotations are locally generated byĴ r k ,k around the axis r k = (r x,k , r y,k , 0) T .
In the following we focus on the relevant case of split Dicke states |j, m in the presence of partition noise [19,41], i.e., splitting is created by a beam splitter operation on the spatial modes, leading to the stateΨ j,m,PN . The full analytical expressions for the elements of the relevant covariances and commutators are given in the Appendix F. These allow for a straightforward construction of the spin-squeezing matrices (53) and (55), whose full expressions are rather lengthy and we therefore omit them here. In Fig. 5, the minimal eigenvalue of the squeezing matrix (53) is plotted for two-mode split Dicke states as a function of the splitting ratio p : 1 − p for different values of m.
To compare with the sensitivity that is accessible by an optimal measurement strategy, we employ, as before, the full optimized expression (13). We obtain with j = N/2 and v = { √ p 1 , √ p 2 , ...}. We obtain which indeed coincides with the quantum Fisher information of the Dicke state before splitting for arbitrary rotations in the xy-plane (61), normalized by the shotnoise level N = 2j. The resulting sensitivity is shown for comparison in Fig. 5 as dashed lines. Similarly, we may analyze the mode entanglement using the matrix (55) and its optimized version (20). The latter can be compactly expressed as where u is a vector and F a diagonal matrix with entries We obtain in the case of uniform splitting ratio, i.e., p k = 1/M for all k that In the limit of an infinite number of modes, we obtain again that which is given in Eq. (64). The mode entanglement detected by the criterion (19) from the quantum Fisher matrix is shown in Fig. 6. However, for the chosen local measurement observables, the spin-squeezing matrix (55) is unable to reveal mode entanglement of split Dicke states.
Summarizing the findings of this Section, we note that if optimal measurements are available, the highly sensitive Dicke states can be converted into an equally sensitive resource for multiparameter estimation through splitting into several spatial modes. Moreover, the splitting generates entanglement among large numbers of modes, which can be detected using metrological entanglement criteria.
Implementing an optimal measurement for spatially distributed non-Gaussian entangled states is, however, more challenging than in the case of Gaussian states. The reason is that the sum of local observables does not correspond to the global optimal observable unless it is linear. Hence, the squeezing matrix of split Dicke states obtained from local, nonlinear measurements describes a multiparameter sensitivity that remains considerably below the ultimate quantum limit. Yet, since the state is pure and the parameters are encoded locally with commuting generators, there exists another measurement strategy that attains the sensitivity described by the quantum Fisher matrix [15,16].
V. APPLICATION: NONLOCAL SENSING OF A MAGNETIC FIELD GRADIENT
An application of practical interest is the estimation of magnetic field gradients [51,52]. Here, we use our results to analyze the sensitivity that can be achieved for this task using split BECs in nonclassical spin states. In particular, we consider the case of a spin-squeezed BEC split into two modes [17] for the estimation of the difference of the magnetic field strength in two spatial positions. In each mode, the local magnetic field leads to a rotation of the spin state due to the Zeeman effect, yielding a parameter-imprinting evolution described by (1), where θ k depends on the local magnetic field strength and where the direction r k can be manipulated by suitable local rotations of the spin state. In the following, we assume that the state is oriented such that the effective rotation axis r k corresponds to the local axes of maximal sensitivity that were discussed in Sections III and IV.
We focus on an estimation of the parameter difference θ A − θ B , which contains information about the magnetic field difference and therefore its gradient. In order to assess the role of the mode entanglement for achieving this measurement sensitivity, we compare our protocol to a local strategy consisting of using the same local states without correlations between the modes. We note that, for the sake of experimental feasibility, we consider a realistic, finite, and fixed amount of squeezing, in contrast to our theoretical analysis of Sec. III C, where the squeezing of global and local strategies was independently optimized to determine the ultimate limits of each strategy.
As a concrete example, we consider a 87 Rb BEC of N = 1000 atoms that through OAT dynamics is prepared in a ξ 2 = −10 dB spin-squeezed state of the two hyperfine states |F = 1, m F = −1 and |F = 2, m F = 1 , Fig. 7a,b. By controlling the external trapping potential it is possible to distribute the particles into spatially separated modes [53], Fig. 7c. During this operation the state can be oriented horizontally (Fig. 7c), so that the squeezed quadrature is less affected by phase noise [54]. To make a quantitative prediction for the sensitivity, we assume an equal splitting of the atoms into two modes separated by d = 50 µm, which is at least a factor 10 larger than the BEC wavefunction size for typical trapping frequencies [17,53]. The advantage of using BECs for sensing is in fact that they are extremely localized ensembles, allowing to probe small volumes of space.
The interferometric (Ramsey) protocol begins with orienting the states vertically, Fig. 7d, to maximize the sensitivity to local phase imprinting. In Sec. III A we have seen that, in order to prepare an optimal state for the measurement of the phase difference, it is now convenient to rotate system B's local spin state by 180° around the x-axis (the mean-spin direction), in order to reverse the sign of the covariance Cov(Ĵ_{s_A,A}, Ĵ_{s_B,B})_ρ̂ of the local measurement observables between the two modes. The consequence of this rotation for the spin-squeezing matrix (9) is that the off-diagonal elements acquire a minus sign, while the remaining elements are unchanged. This maps the linear combination of maximal sensitivity from (θ_A + θ_B)/√2 to (θ_A − θ_B)/√2, which is of interest here. In the presence of a field gradient, the two local states will acquire a different rotation angle depending on their position, see Fig. 7e. The interferometric protocol is terminated with a π/2-pulse around the x axis, Fig. 7f, which allows us to access the local phases by measuring the local population imbalances.
[Figure 7. Experimental protocol for sensing a gradient with a split spin-squeezed state. Nonclassical correlations are created by exposing a coherent spin state (a) to a nonlinear evolution, leading after a short time to a squeezed spin state (b). Splitting the external degree of freedom into two modes creates a split squeezed state. For the splitting, the state's fluctuations are aligned along the z axis by suitable rotations in order to minimize phase noise (c). To prepare for the Ramsey protocol, the states are rotated such that a subsequent phase rotation around the z axis displaces the state along its squeezed spin component (d). Moreover, for the estimation of a gradient the second system is rotated 180 degrees around its mean-spin direction x. In the presence of a gradient, the two local spin states experience different rotation angles (e). A final π/2-pulse around x closes the Ramsey sequence and allows us to estimate the phases from measurements of the relative populations (i.e., the spin z-components) in each mode (f).]
This protocol makes optimal use of the mode entanglement and leads to a sensitivity enhancement that coincides with the squeezing of the atomic ensemble before the splitting (see Sec. III), assuming that the splitting process does not introduce additional sources of noise. Since the spin-squeezing matrix quantifies the quantum gain over the shot-noise limit, we obtain the absolute sensitivity by appropriate multiplication with the shot-noise sensitivity, see Sec. II. For the specific case discussed here, we obtain an uncertainty for the phase difference of ∆((θ A − θ B )/ √ 2) = ξ/ √ N 3.2 mrad. The contribution of the mode entanglement can be revealed by treating the two BECs as independent ensembles for comparison. To this end, we study the properties of a reference state ρ A ⊗ρ B that has been prepared as the product of the two reduced states of modes A and B, respectively. Each subsystem consists of N A = N B = 500 atoms, and the local Wineland spin-squeezing coefficient 56 dB is limited by partition noise and coincides for both modes. The squeezing matrix reads The degeneracy of this matrix implies that the sensitivity gain is the same for arbitrary normalized linear combinations n T θ = n A θ A + n B θ B of the two local phases θ A and θ B for this local state and reads n T ξ 2 [ρ A ⊗ρ B ,Ĵ r ,Ĵ s ]n = ξ 2 A , whenever n 2 A +n 2 B = 1 (the gradient estimation considered here corresponds to n A = −n B = 1/ √ 2). Renormalizing the sensitivity gain, as before, with respect to the shot-noise limit, we obtain a sensitivity of ∆((θ
VI. CONCLUSIONS
The squeezing matrix represents a practical approach for quantifying multiparameter quantum gain of split squeezed states, and relates the quantum sensitivity advantage to the squeezing of a family of local observables. We have provided exact analytical expressions for the spin-squeezing matrices of nonclassical spin states that are relevant in current experiments with cold and ultracold atomic ensembles. Our analysis reveals practical and optimal state preparation and measurement strategies that maximize the multiparameter sensitivity for any linear combination of spatially distributed phase parameters.
For split squeezed states, the collective squeezing in multiparameter measurements coincides with the total squeezing of the spin ensemble before the splitting -independently of the presence of partition noise in the split-ting process. Comparison with the quantum Fisher matrix reveals the optimality of the chosen local measurement strategy as long as the state is Gaussian.
Our framework is applicable to arbitrary pure and mixed quantum states and allows us to include more general, nonlinear measurement observables. An analysis of nonlinear observables on split Dicke states points out the limitations of local measurements for non-Gaussian spin states.
Moreover, we have introduced a way to detect and put quantitative bounds on multimode entanglement directly from information about multiparameter squeezing. This experimentally practical method efficiently detects genuine multimode entanglement of split squeezed states.
Finally, we have studied the performance of these states for gradient sensing with realistic experimental parameters, and illustrated the metrological advantage provided by mode entanglement.
Our results outline concrete strategies for harnessing the nonclassical features of spatially split squeezed states for quantum-enhanced multiparameter measurements in an optimal way. These results provide relevant guidance for ongoing experiments with Bose-Einstein condensates.
In future works, it would be interesting to investigate how the spin-squeezing matrix could give a quantification of entanglement through a connection with entanglement monotones [40,55], and the metrological advantage provided by correlations stronger than entanglement [56][57][58][59]. Finally, a k-producible stateρ k−prod is by definition a mixture of such product statesρ (j) , each of which has entangled blocks of size no greater than k but may have different partition structures. Convexity of the quantum Fisher information and concavity of the variance then implies From this we can derive the limit on the mode-separability spin-squeezing matrix (17) following analogous steps as for the derivation of Eqs. (8) and (16). We finally obtain the result (18). | 10,746.4 | 2022-01-26T00:00:00.000 | [
"Physics"
] |
Measuring $CP$ violation and mixing in charm with inclusive self-conjugate multibody decay modes
Time-dependent studies of inclusive charm decays to multibody self-conjugate final states can be used to determine the indirect $CP$-violating observable $A_\Gamma$ and the mixing observable $y_{CP}$, provided that the fractional $CP$-even content of the final state, $F_+$, is known. This approach can yield significantly improved sensitivity compared with the conventional method that relies on decays to $CP$ eigenstates. In particular, $D \to \pi^+\pi^-\pi^0$ appears to be an especially powerful channel, given its relatively large branching fraction and the high value of $F_+$ that has recently been measured at charm threshold.
It is of great interest to search for effects of indirect CP violation in time-dependent studies of neutral charmmeson decays. In the Standard Model indirect CP violation is expected to be well below the current level of experimental precision [1], but many models of New Physics predict enhancements [2]. A very important CPviolating observable is A Γ , which is measured from the difference in lifetimes of the decays of D 0 and D 0 mesons to a CP eigenstate. In this Letter it is shown how inclusive self-conjugate multibody decays that are not CP eigenstates can also be harnessed for the measurement of A Γ , provided that their fractional CP -even content, F + , is known. This new approach has the potential to improve significantly the knowledge of A Γ and has become possible thanks to measurements of F + that have recently begun to emerge from analyses of coherent charm-meson pairs produced at the ψ(3770) resonance [3]. Furthermore, it is explained how exploiting these decays can also provide a corresponding improvement in the precision on y CP , which is an important observable that describes D 0 D 0 oscillations. For the purpose of concreteness the discussion is presented for the example decay D → π + π − π 0 , although the results are valid for all selfconjugate multibody modes. Here and throughout the discussion D indicates a neutral charm meson; this notation is used when it is either unnecessary or not meaningful to specify a flavour eigenstate.
Measurements with CP eigenstates
In the D-meson system the mass eigenstates, D_{1,2}, are related to the flavour eigenstates D⁰ and D̄⁰ as |D_{1,2}⟩ = p|D⁰⟩ ± q|D̄⁰⟩, where the coefficients satisfy |p|² + |q|² = 1 and q/p = r_CP e^{iφ_CP}. The phase convention CP|D⁰⟩ = |D̄⁰⟩ is adopted. Indirect CP violation occurs if r_CP ≠ 1 and/or φ_CP ≠ 0.
Charm mixing is conventionally parameterised by the quantities x and y, defined as x ≡ (M₁ − M₂)/Γ and y ≡ (Γ₁ − Γ₂)/(2Γ), where M_{1,2} and Γ_{1,2} are the masses and widths of the two neutral-meson mass eigenstates, and Γ is the mean decay width of the mass eigenstates. In the chosen convention D₁ is almost CP even. The average of currently available measurements gives x = (0.41 +0.14 −0.15)% and y = (0.63 +0.07 −0.08)% [4]. Consider an environment where charm mesons are produced incoherently, such as the LHC, or in the cc̄ continuum or from a b-decay at an e⁺e⁻ B-factory, and are observed through their decay into a CP eigenstate of eigenvalue η_CP. Time-dependent measurements allow the decay widths of mesons produced in the D⁰ and D̄⁰ flavour states, denoted Γ̂ and Γ̄ respectively, to be determined. From these quantities the CP-violating observable A_Γ and the mixing observable y_CP may be constructed. Assuming x, y, (r_CP − 1/r_CP) and φ_CP to be small, and assuming direct CP violation to be negligible, it can be shown [10] that these observables have the dependence on the underlying physics parameters given in Eqs. (5) and (6). Expressions that also allow for the contribution of direct CP violation can be found in Ref. [11]. Thus in the limit of CP conservation A_Γ vanishes and y_CP → y. The average of currently available measurements, dominated by studies based on the CP-even eigenstates K⁺K⁻ and π⁺π⁻, yields A_Γ = (−0.058 ± 0.040)% and y_CP = (0.866 ± 0.155)% [4]. (Here the A_Γ average includes new measurements from the LHCb [5] and CDF [6] collaborations, in addition to the older set of results from LHCb [7], BaBar [8] and Belle [9] that are considered in Ref. [4].)
Introducing self-conjugate multibody decays and the CP-even fraction F₊
The CP content of an inclusive self-conjugate multibody decay, for example D → π⁺π⁻π⁰, can be measured with a sample of coherently produced DD̄ pairs at the ψ(3770) resonance, such as that collected by the CLEO-c and BESIII experiments. A double-tag technique is employed in which one D meson is reconstructed in the signal decay of interest, and the other in its decay to a CP eigenstate. In such an event, and neglecting any CP violation, the quantum numbers of the ψ(3770) meson mean that the CP eigenvalue of the signal decay is fixed. The CP-even fraction of the signal decay is given by F₊ = N⁺/(N⁺ + N⁻), where N⁺ (N⁻) designates the number of decays tagged as CP-even (-odd), after correction for detector inefficiencies and the specific branching fractions of the CP-eigenstate tags employed. In this manner F₊ has been measured for the decay D → π⁺π⁻π⁰ and found to be 0.968 ± 0.017 ± 0.006, indicating the mode to be almost fully CP even [3].
Although CP violation is neglected in the currently available measurements of F + this assumption introduces negligible bias in the result. Both the Standard Model and theories of New Physics expect direct CP violation in charm decays to be ≤ 10 −3 [12], a prediction which is compatible with existing experimental results [13]. Any effects will therefore be small alongside the measurement precision attainable with the CLEO-c and current BE-SIII data sets. Furthermore, the double-tag analyses performed at these experiments have no sensitivity to indirect CP violation at leading order in (x, y), as the DD system is produced at rest. For the specific case of D → π + π − π 0 , a recent time-integrated high precision analysis by LHCb has revealed no evidence of any direct CP -violating effects [14].
There is a simple relationship between F + and the parameters that describe the intensity and strong-phase variation over the phase space of the decay. The amplitude of a multibody decay such as D → π + π − π 0 is dependent on the final-state kinematics, which can be uniquely defined by the Dalitz plot coordinates s 12 = m 2 (π + π 0 ) and s 13 = m 2 (π − π 0 ). The amplitude of a D 0 decay to a specific final state is given by A D 0 (s 12 , s 13 ) = a 12,13 e iδ12,13 , where the integral of |A D 0 (s 12 , s 13 )| 2 over the full Dalitz plot is normalised to unity. Consider the situation where the Dalitz plot is divided into two bins by the line s 12 = s 13 . The bin for which s 12 > s 13 is labelled −1 and the opposite bin is labelled +1. The parameter K i (K i ) is the flavour-tagged fractional intensity, being the proportion of decays to fall in bin i in the case that the mother particle is known to be a D 0 (D 0 ) meson: The parameter c i is the cosine of the strong-phase difference between D 0 and D 0 decays averaged in bin i and weighted by the absolute decay rate: 13ā12,13 cos(δ 12,13 −δ 12,13 ) A parameter s i is defined in an analogous manner for the sine of the strong-phase difference.
The CP -tagged populations of these bins, N ± i , normalised by the corresponding single CP -tag yields, is given by [15] Here h D is a normalisation factor independent of bin number and CP tag. When there is no direct CP violation in the decay A D 0 (s 12 , s 13 ) =ā 12,13 e iδ12,13 ≡ a 13,12 e iδ13,12 and so Under this assumption, and the identities N ± = i N ± i , and i K i = 1, it follows that in the two-bin case Measurements with inclusive self-conjugate multibody decays Now consider, for an incoherently produced D meson, the time-dependence of a self-conjugate multibody decay. The time evolution of the D 0 to the point (s 12 , s 13 ) is given by A D 0 (t, s 12 , s 13 ) = a 12,13 e iδ12,13 g + (t) + r CP e iφCP a 13,12 e iδ13,12 g − (t), (12) where Ignoring terms of O(x 2 , y 2 , xy) or higher, the rate of decay to that point is proportional to |A D 0 (t, s 12 , s 13 )| 2 = e −Γt a 2 12,13 − a 12,13 a 13,12 r CP Γt × y cos(δ 12,13 − δ 13,12 − φ CP ) + x sin(δ 12,13 − δ 13,12 − φ CP ) .
Integrating this over the two bins of the full Dalitz plot leads to the time-dependent decay probability P(D⁰(t)) = ∫₊₁ |A_{D⁰}(t, s₁₂, s₁₃)|² ds₁₂ ds₁₃ + ∫₋₁ |A_{D⁰}(t, s₁₂, s₁₃)|² ds₁₂ ds₁₃ (14), where use is made of the definitions of c_i, s_i and the relations given in Eqs. (10) and (11). The time evolution for the D̄⁰ decay to the point (s₁₂, s₁₃) is given by A_{D̄⁰}(t, s₁₂, s₁₃) = (1/r_CP) e^{−iφ_CP} a_{12,13} e^{iδ_{12,13}} g₋(t) + a_{13,12} e^{iδ_{13,12}} g₊(t), and hence the time-dependent decay probability for the D̄⁰ decay can be shown to take an analogous form. From these decay probabilities the effective observables A_Γ^eff and y_CP^eff follow, Eqs. (18) and (19). These expressions contain an additional dilution factor of (2F₊ − 1) in comparison to the CP-eigenstate relations of Eqs. (5) and (6), and are identical in the case when F₊ = 0 or 1. In the limit F₊ → 0.5, both observables vanish. It is interesting to note that a similar relationship between the two classes of D decays was found in Ref. [3] when considering the determination of the unitarity triangle angle γ using B^± → DK^± decays.
Expressions (18) and (19) may be modified to allow for the possible contribution of direct CP violation. In this case the relations in Eq. (10) no longer apply. Direct CP violation adds an additional magnitude and weak phase difference when considering the relations between the amplitude of the D 0 and D 0 decay, and this additional magnitude and phase varies as a function of position in phase space.
With the inclusion of direct CP violation the expression for A eff Γ becomes where r CP and φ CP are unchanged in their meaning and relate only to indirect CP violation, (2F ′ Hence the effect of the additional amplitudes due to direct CP violation are contained within the terms F ′ + and ∆. In the limit of no direct CP violation ∆ → 0, and F ′ + → F + . Since ∆ must be small the third term in Eq. 20 is negligible in comparison to the others.
The equivalent expression for y eff CP becomes
Discussion and conclusions
Measurements of A eff Γ and y eff CP performed with any self-conjugate multibody decay can be used to determine A Γ and y CP , respectively, provided that the CP content of the decay is known. The mode D → π + π − π 0 is a very promising candidate for this purpose since the dilution effects arising from the factor (2F + − 1) in Eqs. (18) and (19) are < 10%, and it possesses a branching ratio that is around 3.5 times higher than that of D → K + K − , the most common CP -eigenstate mode used for these measurements. Therefore this channel offers an opportunity to improve the knowledge of A Γ and y CP significantly, particularly at e + e − experiments such as Belle-II, where the π 0 reconstruction efficiency is good. The relatively abundant four-body decay D → π + π − π + π − also has the potential to be a high impact channel, although this cannot be confirmed until its CP content is measured. The same remarks apply to D → K 0 S π + π − π 0 , which has a branching fraction of over 5% and comprises the CP -odd eigenstates K 0 S η and K 0 S ω as sub-modes. This channel also has the feature of being Cabibbo favoured, which means that it is extremely robust against any pollution from direct CP violation. The extensively-studied decay D → K 0 S π + π − is not suitable for an inclusive treatment, since it has a CP content of F + ∼ 0.5, as is evident from examining the relative proportion of CP -even and CP -odd double-tagged events reported in a CLEO analysis performed to measure the c i and s i parameters [16].
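A rough sense of the statistical potential of the channel follows from combining the dilution with the relative yield. The sketch below assumes that the precision simply scales as (2F₊ − 1)·√(relative branching fraction); this back-of-the-envelope scaling ignores efficiencies, backgrounds, and the π⁰ reconstruction, and is only meant to illustrate the comparison with D → K⁺K⁻.

```python
import math

def naive_precision_gain(f_plus, br_ratio):
    """Back-of-the-envelope statistical figure of merit relative to a CP eigenstate:
    dilution (2F+ - 1) times sqrt(relative branching fraction). Reconstruction efficiencies
    (in particular for the pi0) and backgrounds are deliberately ignored."""
    return (2.0 * f_plus - 1.0) * math.sqrt(br_ratio)

f_plus = 0.968    # measured CP-even fraction of D -> pi+ pi- pi0 [3]
br_ratio = 3.5    # branching fraction relative to D -> K+ K-, as quoted in the text
print(f"dilution 2F+ - 1 = {2 * f_plus - 1:.3f}")
print(f"naive gain over D -> K+ K- ~ {naive_precision_gain(f_plus, br_ratio):.2f}")
```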
The Belle collaboration has reported a model dependent analysis of the mode D → K 0 S K + K − that measures y CP through comparing the CP -odd and CP -even regions of the Dalitz plot [17]. Studies also exist that fit time-dependent amplitude models to the Dalitz plots of the decays D → K 0 S π + π − and D → K 0 S K + K − in order to determine the mixing and CP violation parameters [18][19][20]. Furthermore, proposals have been made of how to perform model-independent analyses of selfconjugate decays binned in phase space [21,22]. The method advocated in this Letter is novel because it is inclusive, model-independent and suitable for those decays which are dominated by a single CP eigenstate, such as D → π + π − π 0 . Inclusive analyses are experimentally more straightforward since there is no need to account for the position in phase space of each decay, provided that the acceptance is relatively uniform.
As explained in Ref. [3], self-conjugate multibody modes can also be used to measure the unitarity triangle angle γ with B ± → DK ± decays as long as F + is known for the mode under consideration. In cases where no measurement of F + exists from the charm threshold it is possible to obtain this information from a comparison of a measurement of y eff CP and the value of y CP obtained from CP eigenstates, or indeed that of y itself, assuming negligible CP violation in the charm system. This strategy of using charm-mixing observables to help provide input for the γ determination is similar to that already proposed for quasi-flavour specific states [23].
In summary, inclusive measurements of the time evolution of multibody self-conjugate charm decays offer the possibility to obtain significantly improved sensitivity to CP violation and mixing in the D⁰D̄⁰ system. The observables A_Γ^eff and y_CP^eff are simply related to those of the CP-eigenstate case, A_Γ and y_CP, by a dilution factor (2F₊ − 1), where F₊ is the fractional CP-even content of the decay. This parameter may be measured in coherently produced DD̄ decays at the ψ(3770). One mode for which F₊ is known, D → π⁺π⁻π⁰, has the potential to yield a more precise determination of A_Γ and y_CP than is possible with CP-eigenstate decays. Several other promising channels exist with relatively high branching fractions and should also be exploited, provided that analyses at the ψ(3770) show them to be dominated by a single CP eigenstate. Alternatively, measurements of y_CP^eff using these latter channels will allow their CP content to be determined, which is valuable input for the programme to measure the unitarity angle γ. First results using this class of decays are eagerly awaited. | 3,837.4 | 2015-02-16T00:00:00.000 | [
"Physics"
] |
Predictions of Dynamic Behavior Under Pressure for Two Scenarios to Explain Water Anomalies
Using Monte Carlo simulations and mean field calculations for a cell model of water we find a dynamic crossover in the orientational correlation time $\tau$ from non-Arrhenius behavior at high temperatures to Arrhenius behavior at low temperatures. This dynamic crossover is independent of whether water at very low temperature is characterized by a ``liquid-liquid critical point'' or by the ``singularity free'' scenario. We relate $\tau$ to fluctuations of the hydrogen bond network and show that the crossover found for $\tau$ for both scenarios is a consequence of the sharp change in the average number of hydrogen bonds at the temperature of the specific heat maximum. We find that the effect of pressure on the dynamics is strikingly different in the two scenarios, offering a means to distinguish between them.
• The singularity-free (SF) scenario hypothesizes the presence of a line of temperatures of maximum density T MD (P ) with negative slope in the (T, P ) plane. As a consequence, K T and |α P | have maxima that increase upon increasing P , as shown using a cell model of water. The maxima in C P do not increase with P , suggesting that there is no singularity [4] [ Fig. 2(b)].
Above the homogeneous nucleation line T_H(P), where data are available, the two scenarios predict roughly the same equilibrium phase diagram. Here we show that dynamic measurements should reveal a striking difference between the two scenarios. Specifically, the low-T dynamics depends on local structural changes, quantified by the variation of the number of hydrogen bonds, that are affected by pressure differently in each scenario. We find this result by studying, using Monte Carlo (MC) simulations and mean field calculations, a cell model which has the property that, by tuning a parameter, its predictions conform to those of either the LLCP or the SF scenario. This cell model is based on the experimental observations that, on decreasing P at constant T or on decreasing T at constant P, (i) water displays an increasing local tetrahedrality [5], (ii) the volume per molecule increases at sufficiently low P or T, and (iii) the O-O-O angular correlation increases [6].
The entire system is divided into cells i ∈ [1, . . . , N], each containing a molecule with volume v ≡ V/N, where V ≥ Nv_hc is the total volume of the system and v_hc is the hard-core volume of one molecule. The cell volume v is a continuous variable that gives, in d dimensions, the mean distance r ≡ v^{1/d} between molecules. The van der Waals interaction is represented by a potential with attractive energy ǫ > 0 between nearest-neighbor (n.n.) molecules and a hard-core repulsion at shorter distances. For a regular square lattice, each molecule i has four bond indices σ_ij ∈ [1, . . . , q], corresponding to the four n.n. cells j, giving rise to q⁴ different molecular orientations. Bonding and intramolecular (IM) interactions are accounted for by the two Hamiltonian terms H_B = −J Σ_{⟨i,j⟩} δ_{σ_ij, σ_ji} (1) and H_IM = −J_σ Σ_i Σ_{(k,ℓ)_i} δ_{σ_ik, σ_iℓ} (2), where the first sum is over n.n. cells, 0 < J < ǫ is the bond energy, δ_{a,b} = 1 if a = b and δ_{a,b} = 0 otherwise, (k,ℓ)_i denotes the sum over the IM bond indices (k, ℓ) of the molecule i, and J_σ > 0 is the IM interaction energy, with J_σ < J, which models the angular correlation between the bonds on the same molecule. The total energy of the system is the sum of the van der Waals interaction and Eqs. (1) and (2).
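A compact way to evaluate these interaction terms for a given configuration of bond indices is sketched below, using the forms of Eqs. (1) and (2) as written above; the lattice layout and the ordering of the four indices in the code are implementation choices, not taken from the paper.

```python
import numpy as np
from itertools import combinations

def cell_model_energy(sigma, J, J_sigma):
    """Bond and intramolecular energies of the cell model on an L x L periodic lattice.
    sigma has shape (L, L, 4) with bond indices ordered (right, up, left, down), each in
    1..q. Assumed forms: H_B = -J sum_<i,j> delta(sigma_ij, sigma_ji) and
    H_IM = -J_sigma sum_i sum_(k,l) delta(sigma_ik, sigma_il)."""
    right = np.roll(sigma, -1, axis=1)      # neighbour cell to the right
    up = np.roll(sigma, -1, axis=0)         # neighbour cell above
    n_bonds = np.sum(sigma[..., 0] == right[..., 2]) + np.sum(sigma[..., 1] == up[..., 3])
    h_im = 0.0
    for k, l in combinations(range(4), 2):  # the 6 intramolecular index pairs per molecule
        h_im += np.sum(sigma[..., k] == sigma[..., l])
    return -J * n_bonds, -J_sigma * h_im, int(n_bonds)

# small random configuration as a usage example (q = 6, J/eps = 0.5, J_sigma/eps = 0.05)
rng = np.random.default_rng(0)
sigma = rng.integers(1, 7, size=(30, 30, 4))
print(cell_model_energy(sigma, J=0.5, J_sigma=0.05))
```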
At constant P , the density of water decreases for T < T MD (P ) which the model takes into account by increasing the total volume by an amount v B > 0 for each bond formed.
Hence the total molar volume of the system is v = v_free + (N_B/N)v_B = v_free + 2p_B v_B, where v_free is the molar volume without taking into account the bonds, p_B = N_B/(2N) is the fraction of bonds formed, and N_B is the number of bonds [4,7].
We perform simulations in the NP T ensemble [7] for q = 6, v B /v hc = 0.5, J/ǫ = 0.5, and for two different values of J σ /ǫ: (i) J σ /ǫ = 0.05, which gives rise to a phase diagram with a LLCP [ Fig. 1(a)], and (ii) J σ = 0, which leads to the SF scenario [4]. We study two square lattices with 900 and 3600 cells, and find no appreciable size effects. We collect statistics over 10 6 MC steps after equilibrating the system for all P and T .
For J σ /ǫ = 0.05, |α P | for P < P C ′ displays a maximum, α max P [ Fig. 1(b)]. As P increases, α max P increases and shifts to lower T , converging toward T W (P ) [ Fig. 1(a)]. We find that the number of bonds, N B , increases on decreasing T , and at constant T decreases for increasing P , and is almost constant at T W (P ) [8]. This is consistent with trends seen both in experiments [5] and in simulations [9], suggesting that for T > T W (P ) the liquid is less structured and more HDL-like, while for T < T W (P ) it is more structured and more LDL-like.
We find that |dp B /dT | shows a clear maximum for all P < P C ′ which shifts to lower T upon increasing P [ Fig. 1(c)]. Remarkably, we also find that the locus of |dp B /dT | max coincides with the Widom line T W (P ) [ Fig. 1(a)] and that the value of |dp B /dT | max increases on approaching P C ′ . This is the same qualitative behavior as |α P (T )| max and C P (T ) max , which are used to locate T W (P ) [Figs. 1(b) and 2(a)]. The relation of |dp B /dT | with the fluctuations is revealed by its proportionality to |α P (T )| and to the fluctuation of the number of bonds where k B is the Boltzmann constant.
For J σ = 0 (SF scenario) we observe no difference for the behavior of N B and |dp B /dT |.
We further verify the prediction of the SF scenario [4] that C_P^max remains constant upon increasing P. Next, we study how this different behavior affects the dynamics. Previous simulations [10] found a crossover from non-Arrhenius to Arrhenius dynamics for the diffusion constant of models that display a LLCP, and showed that the temperature of this crossover coincided with T_W(P). We calculate, for both scenarios, the relaxation time τ of S_i ≡ Σ_j σ_ij/4, which quantifies the degree of total bond ordering for site i. Specifically, we identify τ with the characteristic decay time of the correlation function of S_i. For both scenarios we find a dynamic crossover (Fig. 3). At high T, we fit τ with the Vogel-Fulcher-Tammann (VFT) function τ = τ₀^VFT exp[T₁/(T − T₀)], where τ₀^VFT, T₁, and T₀ are three fitting parameters. We find that τ has an Arrhenius behavior at low T, τ = τ₀ exp[E_A/(k_B T)], where τ₀ is the relaxation time in the high-T limit, and E_A is a T-independent activation energy. We find that for J_σ/ǫ = 0.05 the crossover occurs at T_W(P) for P < P_C′ [Fig. 3(a)], and that for J_σ = 0 the crossover is at T(C_P^max) [Fig. 3(b)]. We note that for both scenarios the crossover is isochronic, i.e., the value of the crossover time τ_C is approximately independent of pressure.
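In practice, the crossover is located by fitting the two functional forms in the two temperature regimes. The sketch below uses synthetic data in place of the actual MC relaxation times and fits ln τ with scipy; the VFT and Arrhenius forms are the standard ones quoted above (k_B = 1 units), and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# fit ln(tau) to keep things well conditioned; k_B = 1 units, all numbers illustrative
def log_vft(T, ln_tau0, T1, T0):
    return ln_tau0 + T1 / (T - T0)          # VFT: tau = tau0 * exp[T1 / (T - T0)]

def log_arrhenius(T, ln_tau0, E_A):
    return ln_tau0 + E_A / T                # Arrhenius: tau = tau0 * exp[E_A / (k_B T)]

# synthetic relaxation times with a crossover near T ~ 1 (stand-in for the MC data)
rng = np.random.default_rng(1)
T_hi = np.linspace(1.05, 2.0, 40)
T_lo = np.linspace(0.5, 0.95, 40)
ln_tau_hi = log_vft(T_hi, 0.0, 1.5, 0.4) + 0.02 * rng.normal(size=T_hi.size)
ln_tau_lo = log_arrhenius(T_lo, -1.0, 4.0) + 0.02 * rng.normal(size=T_lo.size)

p_vft, _ = curve_fit(log_vft, T_hi, ln_tau_hi, p0=(0.0, 1.0, 0.3))
p_arr, _ = curve_fit(log_arrhenius, T_lo, ln_tau_lo, p0=(0.0, 1.0))
print("VFT (ln_tau0, T1, T0):", p_vft)
print("Arrhenius (ln_tau0, E_A):", p_arr, " E_A/(k_B T) at T=0.5:", p_arr[1] / 0.5)
```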
We next calculate the Arrhenius activation energy E A (P ) from the low-T slope of log τ vs. 1/T [ Fig. 4(a)]. We extrapolate the temperature T A (P ) at which τ reaches a fixed macroscopic time τ A ≥ τ C . We choose τ A = 10 14 MC steps > 100 sec [11] [ Fig. 4(b)]. We find that E A (P ) and T A (P ) decrease upon increasing P in both scenarios, providing no distinction between the two interpretations. Instead, we find a dramatic difference in the P dependence of the quantity E A /(k B T A ) in the two scenarios, increasing for the LLCP scenario and approximately constant for the SF scenario [ Fig. 4(c)].
We can better understand our findings by developing an expression for τ in terms of thermodynamic quantities, which will then allow us to explicitly calculate E_A/(k_B T_A) for both scenarios. For any activated process, in which the relaxation from an initial state to a final state passes through an excited transition state, ln(τ/τ₀) = ∆(U + PV − TS)/(k_B T), where ∆(U + PV − TS) is the difference in free energy between the transition state and the initial state. Consistent with results from simulations and experiments [12,13], we propose that at low T the mechanism to relax from a less structured state (lower tetrahedral order) to a more structured state (higher tetrahedral order) corresponds to the breaking of a bond and the simultaneous molecular reorientation for the formation of a new bond. The transition state is represented by the molecule with a broken bond and more tetrahedral IM order. Hence we obtain an explicit expression for the activation barrier in which p_B and p_IM, the probability of a satisfied IM interaction, can be directly calculated. To estimate ∆S, the increase of entropy due to the breaking of a bond, we use the mean field approximation.
To estimate ∆S, the increase of entropy due to the breaking of a bond, we use the mean We next test that the expression of ln(τ /τ 0 ), in terms of ∆S and Eq. (6), describes the simulations well, with minor corrections at high T . Here τ 0 ≡ τ 0 (P ) is a free fitting parameter equal to the relaxation time for T → ∞. From Eq. (7) we find that the ratio E A /(k B T A ) calculated at low T increases with P for J σ /ǫ = 0.05, while it is constant for J σ = 0, as from our simulations [ Fig. 4(d)].
In summary, we have seen that both the LLCP and SF scenarios exhibit a dynamic crossover at a temperature close to T(C_P^max), which decreases for increasing P. We interpret the dynamic crossover as a consequence of a local breaking and reorientation of the bonds for the formation of new and more tetrahedrally oriented bonds. Above T(C_P^max), when T decreases, the number of hydrogen bonds increases, giving rise to an increasing activation energy E_A and to a non-Arrhenius dynamics. As T decreases, the entropy must decrease. A major contributor to the entropy is the orientational disorder, which is a function of p_B, as described by the mean field expression for ΔS. We find that, as T decreases, p_B (and hence the orientational order) increases. We find that the rate of increase has a maximum at T(C_P^max), and as T continues to decrease this rate drops rapidly to zero, meaning that for T < T(C_P^max) the local orientational order rapidly becomes temperature-independent and, through Eq. (6), the activation energy E_A also becomes approximately temperature-independent. Correspondingly, the dynamics becomes approximately Arrhenius.
We find that the crossover is approximately isochronic (independent of the pressure), consistent with our calculation of an almost constant number of bonds at T(C_P^max). In both scenarios, E_A and T_A decrease upon increasing P, but the P dependence of the quantity E_A/(k_B T_A) distinguishes the two scenarios; the same behavior is found using the mean field approximation. In all the panels, where not shown, the error bars are smaller than the symbol sizes. | 2,786 | 2007-02-05T00:00:00.000 | [ "Chemistry", "Physics" ] |
Use of Homotopy Perturbation Method for Solving Multi-point Boundary Value Problems
The homotopy perturbation method is used to solve multi-point boundary value problems. The approximate solution is found in the form of a rapidly convergent series. Several numerical examples are considered to illustrate the efficiency and implementation of the method, and the results are compared with other methods in the literature.
Introduction
Multi-point boundary value problems arise in applied mathematics and physics. For example, the vibrations of a guy wire of uniform cross-section composed of N parts of different densities can be formulated as a multi-point boundary value problem (Moshinsky, 1950). Hajji (2009) considered multi-point boundary value problems that occur in many areas of engineering applications, such as modelling the flow of fluids such as water, oil, and gas through ground layers, where each layer constitutes a subdomain. In (Timoshenko, 1961), many problems in the theory of elastic stability are handled as multi-point problems. In (Geng and Cui, 2010), large bridges are sometimes designed with multi-point supports, which correspond to a multi-point boundary condition. Many authors have studied the existence and multiplicity of solutions of multi-point boundary value problems (Eloe and Henderson, 2007), (Feng and Webb, 2007), (Graef and Webb, 2009), (Henderson and Kunkel, 2008), (Liu, 2003). Some research is also available on the numerical analysis of multi-point boundary value problems; numerical solutions have been studied by (Geng, 2009), (Lin and Lin, 2010), (Tatari and Dehghan, 2006), (Wu and Li, 2011). Siddiqi and Akram (2006a, 2006b) presented the solutions of fifth- and sixth-order boundary value problems using a nonpolynomial spline technique. Recently, Akram and Hamood (2013a) used the reproducing kernel space method to solve eighth-order boundary value problems, and in (Akram and Hamood, 2013b) found the solution of a class of sixth-order boundary value problems using the same method. Siddiqi and Iftikhar (2013) presented the solution of higher-order boundary value problems using the homotopy analysis method. He (1999, 2003, 2004, 2005) developed the homotopy perturbation method for solving nonlinear initial and boundary value problems by combining the standard homotopy in topology with the perturbation technique. By this method, a rapidly convergent series solution can be obtained in most cases, and usually a few terms of the series solution suffice for numerical calculations. Chun and Sakthivel (2010) implemented the homotopy perturbation method for solving linear and nonlinear two-point boundary value problems. The convergence of the homotopy perturbation method was discussed in (Biazar and Ghazvini, 2009), (He, 1999), (Hussein, 2011), (Turkyilmazoglu, 2011). The method has been successfully applied to ordinary differential equations, partial differential equations, and other fields (Belendez, 2007), (Dehghan and Shakeri, 2008), (He, 1999, 2003, 2004, 2005), (Rana, 2007), (Yusufoglu, 2007).
In this paper, the application of the homotopy perturbation method for finding an approximate solution for multi-point boundary value problems has been investigated.
The organization of the rest of the paper is as follows: In section 2, the homotopy perturbation method is applied to some ordinary differential equations with given multi-point boundary conditions. In section 3, the homotopy perturbation method is used to solve several examples. Finally, in section 4, the conclusion is presented.
Analysis of the Homotopy Perturbation Method (He, 1999)
Consider the nonlinear differential equation L(u) + N(u) − f(r) = 0, r ∈ Ω, with boundary conditions B(u, ∂u/∂n) = 0, r ∈ Γ, where L is a linear operator, N is a nonlinear operator, f(r) is a known analytic function, B is a boundary operator, and Γ is the boundary of the domain Ω. By He's homotopy perturbation technique (He, 1999), define a homotopy v(r, p): Ω × [0, 1] → R which satisfies H(v, p) = (1 − p)[L(v) − L(u_0)] + p[L(v) + N(v) − f(r)] = 0, where r ∈ Ω, p ∈ [0, 1] is an embedding parameter, and u_0 is an initial approximation of Eq. (2.1) which satisfies the boundary conditions. Clearly, H(v, 0) = L(v) − L(u_0) = 0 and H(v, 1) = L(v) + N(v) − f(r) = 0. As p changes from 0 to 1, v(r, p) changes from u_0(r) to u(r); this is called a deformation, and L(v) − L(u_0) and L(v) + N(v) − f(r) are said to be homotopic. Writing the solution as a power series in p, v = v_0 + p v_1 + p² v_2 + ⋯, the approximate solution is obtained by setting p = 1, so that u = v_0 + v_1 + v_2 + ⋯. The series in Eq. (2.8) is convergent in most cases, and the convergence rate of the series depends on the nonlinear operator; see (Biazar and Ghazvini, 2009), (He, 1999). Moreover, the following conditions are stated by He (1999, 2006): (i) the second-order derivative of N(v) with respect to v must be small, because the embedding parameter may be relatively large (i.e., p → 1); and (ii) the norm of L⁻¹ ∂N/∂v must be smaller than one, so that the series converges.
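A minimal symbolic sketch of this construction is given below for the simple second-order equation u'' + u = x with illustrative multi-point conditions u(0) = 0 and u(1/2) + u(1) = 1 (the operator splitting and these conditions are assumptions for demonstration only, not one of the examples of Section 3). The unknown constants A and B carried by the zeroth-order term are fixed at the end by the boundary conditions, exactly as described for the examples below.

```python
import sympy as sp

x, A, B = sp.symbols('x A B')

# Split u'' + u = x as L(u) = u'', N(u) = u, f = x, with initial guess u0 = A + B*x.
f = x
n_terms = 8

# p^0 term: v0'' = 0, so v0 = A + B*x (constants determined later by the BCs)
v = [A + B * x]

# p^k term (k >= 1): v_k'' = -(v_{k-1} - f) for k = 1, and v_k'' = -v_{k-1} otherwise;
# each term is integrated twice with zero integration constants.
for k in range(1, n_terms):
    rhs = -(v[k - 1] - (f if k == 1 else 0))
    v.append(sp.integrate(sp.integrate(rhs, x), x))

u_approx = sp.expand(sum(v))           # truncated series solution, p -> 1

# Impose the illustrative multi-point conditions u(0) = 0 and u(1/2) + u(1) = 1
bc1 = u_approx.subs(x, 0)
bc2 = u_approx.subs(x, sp.Rational(1, 2)) + u_approx.subs(x, 1) - 1
constants = sp.solve([bc1, bc2], [A, B])

u_final = sp.simplify(u_approx.subs(constants))
print(u_final)
```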
To implement the method, several numerical examples are considered in the following section.
Numerical Examples
Example 3.1 Consider the following third-order linear differential equation with three-point boundary conditions, system (3.1). The exact solution of Example 3.1 is given in closed form in terms of the constants k = 5 and a = 1 (Akram et al., 2013), (Ali et al., 2010), (Saadatmandi and Dehghan, 2012), (Tirmizi et al., 2005). Using the homotopy perturbation method, a homotopy for the system (3.1) is constructed, which gives a set of differential equations whose solutions contain two unknown constants A and B to be determined. The corresponding solutions of this system are summed to give the series solution. Using the 11-term approximation and imposing the boundary conditions of the system (3.1) determines these constants. The comparison of the approximate series solution of problem (3.1) with the results of the methods in (Akram et al., 2013), (Ali et al., 2010), (Saadatmandi and Dehghan, 2012), (Tirmizi et al., 2005) is given in Table 1, which shows that the method is quite efficient. Figure 1 shows that the method is in excellent agreement with (Tatari and Dehghan, 2007).
Example 3.2 Consider the linear fourth-order nonlocal boundary value problem
studied in (Lin and Lin, 2010) and (Wu and Li, 2011). Using the homotopy perturbation method, a homotopy for the system (3.5) is constructed, in which p is the embedding parameter. Assume that the solution of problem (3.5) is u = u_0 + p u_1 + p² u_2 + ⋯ (3.7). Substituting Eq. (3.7) in Eq. (3.6) and equating the coefficients of like powers of p gives a set of differential equations for the components u_0, u_1, u_2, …. Using only the 6-term approximation, the comparison with the methods of (Lin and Lin, 2010), (Wu and Li, 2011) is given in Table 2, which shows that the method is quite efficient. Absolute errors |U_exact − u| are plotted in Figure 2.
Example 3.3
The following fourth-order nonlinear boundary value problem is considered. The exact solution of problem (3.3) is u(x) = e^x. Using the homotopy perturbation method, a homotopy for the system (3.10) is constructed, in which the nonlinear term is expanded using He's polynomials (Ghorbani, 2009). Substituting Eq. (3.12) and Eq. (3.13) in Eq. (3.11) and equating the coefficients of like powers of p gives a set of differential equations. In Table 3, the comparison of the exact solution with the series solution of problem (3.3) is given, which shows that the method is quite efficient. Absolute errors |U_exact − u| are plotted in Figure 3.
Example 3.4
The following fifth-order nonlinear three-point boundary value problem is considered; its exact solution is known in closed form. Using the homotopy perturbation method, a homotopy for the system (3.14) is constructed, in which the nonlinear term is expanded using He's polynomials (Ghorbani, 2009). Substituting Eq. (3.16) and Eq. (3.17) in Eq. (3.15) and equating the coefficients of like powers of p gives a set of differential equations. In Table 4, the comparison of the exact solution with the series solution of problem (3.4) is given, which shows that the method is quite efficient. In Figure 4, the absolute errors are plotted.
Example 3.5 The following sixth-order nonlinear boundary value problem is considered; its exact solution is known in closed form. The comparison of the exact solution with the series solution of problem (3.5) is given in Table 5, which shows that the method is quite accurate.
Example 3.6 The following seventh-order nonlinear boundary value problem is considered; its exact solution is known in closed form. The comparison of the exact solution with the series solution of problem (3.6) is given in Table 6, which shows that the method is quite accurate.
Example 3.7 The following seventh-order nonlinear boundary value problem is considered on the interval 0 ≤ x ≤ 1; its exact solution is known in closed form. The comparison of the exact solution with the series solution of problem (3.7) is given in Table 7, which shows that the method is quite accurate.
Conclusion
In this paper, the homotopy perturbation method has been applied to solve multi-point boundary value problems. It is clearly seen that the homotopy perturbation method is a powerful and accurate method for finding solutions of multi-point boundary value problems in the form of analytical expressions, and it presents rapid convergence of the solutions. The numerical results showed that the homotopy perturbation method can solve these problems effectively, and the comparison shows that the present method is in good agreement with the existing results in the literature. (Tables 1 and 2 list absolute errors compared with (Tirmizi et al., 2005), (Ali et al., 2010), (Akram et al., 2013), (Lin and Lin, 2010), and (Wu and Li, 2011).) | 2,329.6 | 2013-10-10T00:00:00.000 | [ "Mathematics" ] |
Statistical Simulation, a Tool for the Process Optimization of Oily Wastewater by Crossflow Ultrafiltration
This work aims to determine the optimized ultrafiltration conditions for treating industrial wastewater loaded with oil and heavy metals generated by an electroplating industry, for water reuse in the industrial process. A ceramic multitubular membrane was used; it provided almost total retention of oil and turbidity as well as high removal of heavy metals such as Pb, Zn, and Cu (>95%). The interactive effects of the initial oil concentration (19-117 g/L), feed temperature (20-60 °C), and applied transmembrane pressure (2-5 bar) on the chemical oxygen demand removal (RCOD) and permeate flux (Jw) were investigated. A Box-Behnken experimental design (BBD) for response surface methodology (RSM) was used for the statistical analysis, modelling, and optimization of operating conditions. The analysis of variance (ANOVA) results showed that the models for COD removal and permeate flux were significant, with good correlation coefficients of 0.985 and 0.901, respectively. Mathematical modelling revealed that the best conditions were an initial oil concentration of 117 g/L and a feed temperature of 60 °C, under a transmembrane pressure of 3.5 bar. In addition, the effect of the concentration under the optimized conditions was studied. It was found that the maximum volume concentrating factor (VCF) value was equal to five and that the pollutant retention was independent of the VCF. The fouling mechanism was estimated by applying Hermia's model. The results indicated that the membrane fouling, given by the decline in the permeate flux over time, could be described by the cake filtration model. Finally, the efficiency of the membrane regeneration was proved by determining the water permeability after the chemical cleaning process.
Introduction
Oily wastewater produced by the electroplating industry, consisting of a mixture of organic materials and heavy metals, is a major global pollutant that affects the environment and human health [1][2][3][4]. Therefore, it needs to be treated before being discharged into the receiving environment or reused [5]. Removing oil and heavy metals is necessary because they are toxic substances that can cause extensive pollution of water and soil and inhibit the growth of plants and animals. Their effects on human beings are also very dangerous due to the carcinogenic and mutagenic risks that they can produce [6,7].
Oil can be present in wastewater in three forms (according to droplet size): free-floating oil (more than 150 µm), unstable dispersed oil (between 20 and 150 µm), and stable emulsified oil (droplets smaller than 20 µm).
• Variations in the initial oil concentration (C oil), feed temperature (T), and transmembrane pressure (∆P) were investigated.
• COD and stabilized permeate flux were determined to obtain the optimal separation conditions.
• Statistical analysis of the data was carried out to obtain a suitable mathematical model of the process.
• Finally, it was found that the model fitted the experimental results well; the influence of the different factors on the COD retention and the permeate flux was discussed.
Oily Wastewater Collection
Oily wastewater contaminated with heavy metals was collected from an oil separator installed in an electroplating business in Sfax, Tunisia. The characteristics of three different effluents collected over three months are summarized in Table 1. At first, wastewater was pre-filtered using a porous filter paper of 60 µm to remove free-floating oil and solid particles that could clog the membranes.
Ultrafiltration Process
The crossflow ultrafiltration experiments were performed using a semi-pilot-scale unit (Figure 1). The installation was equipped with automated systems to control the feed flow rate and temperature. The membrane module contained a tubular UF ceramic multichannel (7-channel) membrane made from titania, purchased from NovaSep (Miribel, France), with a surface area of 0.155 m² and a 150 kDa separation cut-off. The membrane water permeability was 230 L/h·m²·bar. All tests were performed at transmembrane pressures from 2 to 5 bar and temperatures from 20 to 60 °C. The permeate flux was calculated according to the following equation [49]: J_w = V/(S × t), where J_w is the permeate flux (L/m² h), V is the volume of permeate (L), S is the membrane surface area (m²), and t is the duration of ultrafiltration (h).
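As a small worked example of the flux definition above (with assumed, illustrative values for the permeate volume and filtration time):

```python
# Assumed, illustrative values (not measured data)
V = 3.2          # permeate volume collected, L
S = 0.155        # membrane surface area, m^2 (value given in the text)
t = 0.5          # duration of ultrafiltration, h

J_w = V / (S * t)                # permeate flux, L/(m^2 h)
print(f"J_w = {J_w:.1f} L/(m^2 h)")
```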
The membrane was regenerated by rinsing it with distilled water and then applying an acid-base treatment with an alternating circulation of 2% solutions of NaOH at 80 °C and HNO3 at 60 °C for 30 min. Finally, the membrane was washed with distilled water until a neutral pH was obtained. The efficacy of the cleaning protocol was checked by measuring the initial water permeability after the cleaning cycle.
Analytical Methods
Conductivity and pH were measured with a conductivity meter (EC-400L, Istek, Seoul, Korea) and a pH meter (pH-220L, Istek). Turbidity was measured with a turbidity meter (model 2100A, Hach) in accordance with standard method 2130B. The COD was determined by a colorimetric technique (COD 10119, Fisher Bioblock Scientific, Illkirch, France). The oil and heavy metal retentions were determined by measuring the feed and permeate concentrations using a UV spectrophotometer (UV-9200, Beijing, China) at a wavelength of 363 nm and atomic absorption spectroscopy (AAS, PerkinElmer, Waltham, MA, USA), respectively.
For the evaluation of UF rejection, the rejection of the different parameters (COD, turbidity, oil, and heavy metals) was determined by Equation (2) [50,51]: R (%) = (1 − C_p/C_f) × 100, where C_f and C_p represent the concentrations of pollutants in the feed and in the permeate, respectively.
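A short example of the rejection calculation, assuming the standard form R(%) = (1 − C_p/C_f) × 100 given above; the feed and permeate concentrations used here are placeholders, not measured data:

```python
# Placeholder feed/permeate concentrations (not measured data)
feed     = {"COD": 35.0, "Pb": 1.8, "Zn": 2.5, "Cu": 0.9}
permeate = {"COD": 1.1, "Pb": 0.05, "Zn": 0.08, "Cu": 0.03}

for species in feed:
    R = (1.0 - permeate[species] / feed[species]) * 100.0   # Equation (2)
    print(f"{species}: R = {R:.1f} %")
```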
Experimental Design Methodology
The response surface methodology model (RSM) was applied to evaluate the effects of ultrafiltration parameters and to optimize various conditions for different responses. Table 2 summarizes the studied variables: initial oil concentration (X 1 ), temperature (X 2 ), and transmembrane pressure (X 3 ). A Box-Behnken experimental design (BBD) with three numeric factors over three levels was studied [51]. The BBD included 13 randomized runs with one replicate at the central point. The matrix, experimental range, and responses are presented in Table 3.
RSM is a statistical method for the multifactorial analysis of experimental data that provides a better understanding of the process than standard methods of experimentation, owing to its ability to predict how inputs affect outputs in a complex process where different factors can interact among themselves. All the polynomial equation coefficients were tested for significance with an analysis of variance (ANOVA) [52]. For the responses obtained from the experiments (R COD and permeate flux), a second-degree polynomial model was established to evaluate and quantify the influence of the variables as follows: Y = b_0 + Σ b_i X_i + Σ b_ij X_i X_j + Σ b_ii X_i² + ε, where Y is the predicted response, X_i and X_j are the coded levels of the factors, b_0 is the constant coefficient (the mean of the responses obtained), b_i is the linear (main-effect) coefficient of factor i, b_ij is the interaction coefficient between factors i and j, b_ii is the quadratic coefficient, and ε represents the error in the response. The sufficiency of the model was determined by the coefficient of determination (R²) and the p-value. The statistical analysis was evaluated using Design-Expert 12 software. Response surface plots were generated for two factors at a time, with the third factor set to its medium value.
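The following sketch shows how such a second-order model can be fitted by ordinary least squares to coded Box-Behnken factors; it is not the Design-Expert analysis used in this work, and the 13 response values are placeholders rather than the data of Table 3.

```python
import numpy as np

# Coded factor settings (X1, X2, X3) of a 3-factor Box-Behnken design plus a centre point
X = np.array([
    [-1, -1,  0], [ 1, -1,  0], [-1,  1,  0], [ 1,  1,  0],
    [-1,  0, -1], [ 1,  0, -1], [-1,  0,  1], [ 1,  0,  1],
    [ 0, -1, -1], [ 0,  1, -1], [ 0, -1,  1], [ 0,  1,  1],
    [ 0,  0,  0],
], dtype=float)
Y = np.array([62, 75, 70, 88, 60, 78, 66, 84, 65, 72, 69, 80, 74], dtype=float)  # placeholder response

def design_matrix(X):
    x1, x2, x3 = X.T
    cols = [np.ones(len(X)), x1, x2, x3,       # intercept + linear terms
            x1 * x2, x1 * x3, x2 * x3,         # two-factor interactions
            x1 ** 2, x2 ** 2, x3 ** 2]         # quadratic terms
    return np.column_stack(cols)

D = design_matrix(X)
coef, *_ = np.linalg.lstsq(D, Y, rcond=None)   # least-squares estimate of b0, bi, bij, bii
Y_hat = D @ coef
R2 = 1.0 - np.sum((Y - Y_hat) ** 2) / np.sum((Y - Y.mean()) ** 2)

print("coefficients:", np.round(coef, 3))
print(f"R^2 = {R2:.3f}")
```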
Investigation of the Fouling Mechanism
To determine the fouling mechanism that occurred during the UF of the oily wastewaters, a mathematical model established by Hermia [53] was applied. This model is based on conventional constant-pressure dead-end filtration equations; it has been widely evaluated in crossflow filtration studies [54] and has been used to predict decreases in flux during the MF and UF of oil-in-water emulsions [55][56][57][58]. The model is expressed by Equation (5) [53] as d²t/dV² = K (dt/dV)^n, where V is the permeation volume, t is the filtration time, K is a constant, and n is a value characterizing the different fouling mechanisms (Table 4). The Hermia model is based on four empirical approaches: complete pore blocking, standard pore blocking, intermediate pore blocking, and cake filtration.
In a complete blocking model, each pollutant particle blocks a pore of the membrane without overlapping on top of any other. In the standard blocking model, the size of the particle is smaller than the pore diameter; consequently, the foulant particles can enter the pores and form a deposit on the pore walls, which reduces the pore volume.
In the intermediate blocking model, some pollutant particles are in direct contact with the pores, but a number of them are on top of others. In the cake filtration model, many foulant particles accumulate on the membrane surface and create a cake layer, forming an additional resistance to the permeate flux [7].
The correlation of the experimental permeate flux decline data with the above fouling mechanisms was studied by comparing the correlation coefficient R² values obtained from the linear regression analysis using Equations (6)-(9) (Table 4). The equation with the higher R² correlation coefficient corresponds to the dominant membrane fouling mechanism.
Table 4. Fouling mechanisms and the corresponding values of n: complete pore blocking (n = 2), standard pore blocking (n = 1.5), intermediate pore blocking (n = 1), and cake filtration (n = 0).
UF Experiments
The efficiency of the UF of the industrial oily wastewater contaminated with heavy metals using a ceramic membrane (150 kDa) was evaluated not only on the basis of the observed stabilized permeate flux but also in terms of the retention of different parameters (oil, turbidity, COD, and heavy metals). It is worth noting that an almost total retention of oil and turbidity and a high elimination of heavy metals such as Pb, Zn, and Cu (>95%) were achieved by the UF process regardless of the initial pollutant values and the treatment conditions. The COD removal and permeate flux results show that they were affected by different parameters such as the initial oil concentration, the feed temperature, and the applied transmembrane pressure. Table 5 illustrates the regression coefficients obtained by the ANOVA of a quadratic model for COD removal and of the modified quadratic model for permeate flux. The p-value determined the significance of the input factors and their interactions in the studied model. A factor affects the response if the p-value is less than the chosen probability level; significance was judged at probability levels less than 0.05 [59]. Table 5 shows the mathematical model that explains the relationship between the responses and the dependent and independent variables represented by oil concentration (X1), temperature (X2), and transmembrane pressure (X3), and the significance level of the linear and quadratic models.
COD Removal Response
In line with Joglekar et al. [60], who showed that the model fit is good when R² > 0.80, the R² coefficient value of 0.985 confirmed the agreement of the mathematical model with the experimental data and showed that the model fit was significant.
Furthermore, R² evaluates the discrepancy or variance in the observed values that can be explained by the independent variables and their interactions rather than by the design of specific factors. In fact, R² = 0.985 shows that the model could describe 98.5% of the total response variation and that only 1.5% of it cannot be explained by the empirical model. As a result, the model equation represents the COD removal well with respect to the three independent variables. The comparison of the experimental results (actual values) and the values predicted by the model is presented in Figure 2. The theoretical and empirical values were very close for the COD removal; this proximity reflects the robustness of the statistical models used.
In Figure 3, the experimental results prove that the removal of COD was strongly affected by the three independent variables represented by initial oil concentration, temperature, and transmembrane pressure. Furthermore, almost total oil retention was observed whatever the conditions of the UF treatment were.
Permeate Flux Response
The effects of the input factors on the permeate flux values were analyzed. The modified quadratic model proved that the linear terms of initial oil concentration (X1) and temperature (X2), as well as the quadratic term X1², were significant (p-value < 0.05). The optimized model showed that the permeate flux was only affected by the initial oil concentration and the temperature, as the applied transmembrane pressure did not affect the permeate flux. This estimated result correlated with the experimental results. The relatively high R² (0.901) value confirms that the model fit the data well. Additionally, this coefficient measures the variability in the observed response values that can be described by the independent factors and their interactions over the range of the corresponding factors; it indicated that the model could describe 90.1% of the total variation, and only 9.9% of it was not described. Figure 5 suggests that the experimental results for the permeate flux were not close enough to the predicted values.
Optimization of COD Removal and Permeate Flux
The optimizations by RSM were performed by maximizing the COD removal and permeate flux. In Figures 6 and 7, the responses can be observed from the three-dimensional surfaces obtained with the proposed quadratic model. The interactions of the independent variables with the treatment of the oily wastewater were investigated. The initial oil concentration (19-117 g/L), the feed temperature (20-60 °C), and the transmembrane pressure (2-5 bar) were evaluated. According to the results illustrated in Table 3 and Figures 6 and 7, it is clear that the maximum COD removal (97%) and the highest permeate flux (232 L/h·m²) were obtained at the optimal conditions of C oil = 117 g/L, T = 60 °C, and ∆P = 3.5 bar by applying the RSM model. From Figures 6 and 7, it can be observed that the model is highly desirable, since the predicted values for the COD removal and permeate flux were 96.57% and 226.26 L/h·m², respectively.
Based on Table 6, different methods for the optimization of UF processes, such as the Box-Behnken experimental design (BBD), central composite design (CCD), central composite rotatable design (CCRD), and the Taguchi method, have been applied in many previous works. The optimized responses obtained in this study by the BBD method were close to other responses reported in the literature that were determined by using BBD or CCD methods [61,62]. Our results confirm that the BBD model achieved higher response values in terms of COD removal and permeate flux compared to results reported in the literature using other models [63][64][65][66]. Table 6. Comparison of the UF membrane, optimization method, optimal factors, and responses.
Effect of Concentration
The UF experiments were carried out by recycling the retentate and recovering the permeate at the optimized treatment conditions: C oil = 117 g/L, T = 60 °C, and ∆P = 3.5 bar. Figure 8 represents the variation of the permeate flux as a function of the volume concentrating factor (VCF). In concentration mode (without recirculation of the permeate), the mass balance is determined using the classical equation V_i C_i = V_p C_p + V_r C_r (Eq. (10)), where V_i, V_p, and V_r are the initial, permeate, and retentate volumes, respectively, and C_i, C_p, and C_r are the initial oil concentration, the oil concentration in the permeate, and the oil concentration in the retentate, respectively. On the other hand, the volume balance is given by Equation (11): V_i = V_p + V_r. Considering the oil retention R (Eq. (12)) together with the concentration factor, CF = C_r/C_i (Eq. (13)), and the volume concentration factor, VCF = V_i/V_r (Eq. (14)), Equations (12)-(14) can be combined with the mass and volume balances to relate CF, VCF, and R. For R = 100%, as is the case here, total retention of the oil is obtained, i.e., C_p = 0; consequently, CF = VCF.
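A small numerical check of these balances (with assumed volumes and concentrations, and the retention taken here relative to the initial feed) illustrates that CF = VCF when the permeate is oil-free:

```python
# Assumed, illustrative values (not measured data)
V_i, V_r = 50.0, 10.0            # initial and retentate volumes, L
C_i, C_p = 117.0, 0.0            # initial and permeate oil concentrations, g/L

V_p = V_i - V_r                  # volume balance: V_i = V_p + V_r
C_r = (V_i * C_i - V_p * C_p) / V_r   # mass balance: V_i C_i = V_p C_p + V_r C_r
R = 1.0 - C_p / C_i              # oil retention (relative to the initial feed)
VCF = V_i / V_r                  # volume concentration factor
CF = C_r / C_i                   # concentration factor

print(f"VCF = {VCF:.1f}, CF = {CF:.1f}, R = {R:.0%}")
print("CF equals VCF at total retention:", abs(CF - VCF) < 1e-9)
```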
The maximum VCF value observed in this case was equal to five. Indeed, the permeate flux decreased slightly from 232 L/h·m² at VCF = 1 to 212 L/h·m² at VCF = 5, and then decreased quickly to 171 L/h·m² at a VCF of 6.2. A negligible flux reduction of around 8.6% was observed between a VCF of 1 and a VCF of 5. At a VCF of 6, the decrease in the flux was significant (up to 26%) and associated with membrane fouling, mainly due to the concentration of pollutants near the membrane surface [67]. Figure 9 shows a high retention of contaminants in terms of COD, oil, and heavy metals of up to 94%, whatever the VCF value in the range from 1 to 6.
Application of the Hermia Model
The accumulation of oil and suspended matter at the membrane surface causes a rapid decrease in the permeate flux. The determination of the flux decline during fouling is critical for ultrafiltration processes. Four filtration models, including complete pore blocking, standard pore blocking, intermediate pore blocking, and cake filtration, were used to evaluate the flux decline mechanism [65]. Figure 10a-d illustrate the different pore blocking models for the UF of the oily industrial wastewater by the ceramic TiO2 membrane at the optimal treatment conditions: C oil = 117 g/L, T = 60 °C, and ∆P = 3.5 bar. According to the R² values, the cake layer formation model resulted in slightly higher R² values in comparison to the other fouling mechanisms; therefore, it can be chosen as the best model to describe the fouling mechanism. As a result, it can be expected that the majority of the particles in the feed solutions were bigger than the membrane pores. Consequently, molecules accumulated on the membrane surface increased the resistance to the permeate flux [68][69][70].
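The comparison of linearized blocking laws can be sketched as below. The flux data are synthetic, and the four linearized forms are the ones commonly used for Hermia's model in crossflow UF; Eqs. (6)-(9) of this work are assumed to take the same form.

```python
import numpy as np

t = np.linspace(0.1, 2.0, 20)                       # filtration time, h
J = 230.0 / (1.0 + 0.4 * t) ** 0.5                  # placeholder permeate flux, L/(m^2 h)

# Standard linearised Hermia forms: each quantity should be linear in t
# if the corresponding mechanism dominates.
transforms = {
    "complete pore blocking (n = 2)":     np.log(J),        # ln J        vs t
    "standard pore blocking (n = 1.5)":   1.0 / np.sqrt(J), # J^-1/2      vs t
    "intermediate pore blocking (n = 1)": 1.0 / J,          # 1/J         vs t
    "cake filtration (n = 0)":            1.0 / J ** 2,     # 1/J^2       vs t
}

best = None
for name, y in transforms.items():
    slope, intercept = np.polyfit(t, y, 1)
    y_hat = slope * t + intercept
    R2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"{name}: R^2 = {R2:.4f}")
    if best is None or R2 > best[1]:
        best = (name, R2)

print("dominant mechanism (highest R^2):", best[0])
```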
Cleaning Study
After concentration tests at optimized conditions, the results confirmed intensive membrane fouling (>26%). For this reason, to recover the initial membrane performance, an acid-base cleaning procedure was required [71]. The efficiency of the membrane regeneration was determined by checking the water permeability. Figure 11 presents the evolution of the water permeate flux with the transmembrane pressure for the virgin and the regenerated membranes. The results demonstrated that the water permeability values were very close, confirming the efficiency of the cleaning process used.
Conclusions
The objective of this study was to determine the best conditions for the treatment of industrial wastewater contaminated with oil and heavy metals, using the response surface methodology. The obtained results revealed that the BBD for the RSM model was effectively useful for this application. The UF process achieved the almost total retention of oil and turbidity and a high removal of heavy metals such as Pb, Zn, and Cu (>95%), independently of the initial values and treatment conditions. However, the COD removal and permeate flux were mainly affected by the initial oil concentration, feed temperature, and applied transmembrane pressure. The optimized conditions were 117 g/L, 60 °C, and 3.5 bar. Under these conditions, 97% COD removal and 232 L/h·m² permeate flux were achieved experimentally, and a maximum volume concentrating factor (VCF) of five was obtained. The results also revealed that the different pollutant retention values were independent of the VCF. Moreover, Hermia's model was applied to assess the membrane fouling mechanism. The data were in agreement with the cake layer model. The chemical cleaning process allowed the complete restoration of the initial water membrane permeability.
This study shows that the UF process is an efficient method for the simultaneous elimination of oil and heavy metals from industrial wastewater. Furthermore, the response surface methodology is very useful for modeling and optimizing membrane treatments.
Conflicts of Interest:
The authors declare no conflicts of interest.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 7,616 | 2022-06-30T00:00:00.000 | [ "Engineering" ] |
ADP-ribosylation Factor-like GTPase ARFRP1 Is Required for Trans-Golgi to Plasma Membrane Trafficking of E-cadherin*
ADP-ribosylation factor-related protein 1 (ARFRP1) plays a specific role in Golgi function controlling recruitment of GRIP domain proteins and ARL1 to the trans-Golgi. Deletion of the mouse Arfrp1 gene causes embryonic lethality during early gastrulation, because epiblast cells detach from the ectodermal cell layer and do not differentiate to mesodermal tissue. Here we show that in Arfrp1-/- embryos E-cadherin is mistargeted to intracellular compartments, whereas in control embryos it is present at the cell surface of trophectodermal and ectodermal cells. In enterocytes of intestine-specific Arfrp1 null mutants (Arfrp1vil-/-), E-cadherin is associated with intracellular membranes, partially colocalizing with the cis-Golgi marker GM130 or with punctae close to the cell surface. In contrast, in control enterocytes E-cadherin is exclusively located in the lateral membranes. In addition, ARL1 is dislocated from Golgi membranes to the cytosol of Arfrp1vil-/- enterocytes. Depletion of endogenous ARFRP1 by RNA interference leads to a dislocation of E-cadherin from the cell surface in HeLa cells and to a reduced cell aggregation in Ltk-Ecad cells. ARFRP1 was coimmunoprecipitated in a complex with E-cadherin, α-catenin, β-catenin, γ-catenin, and p120ctn from lysates of Madin-Darby canine kidney cells stably expressing myc-ARFRP1. These data indicate that knock-out of Arfrp1 disrupts the trafficking of E-cadherin through the Golgi and suggest an essential role of the GTPase in trans-Golgi network function.
GTPases of the ADP-ribosylation factor (ARF) family operate as molecular switches in the regulation of vesicular trafficking and organelle structure (1,2). The ARF family includes three different groups of proteins, the ARFs, the ARLs (ARF-like proteins), and the secretion-associated Ras-related proteins. ARF-related protein 1 (ARFRP1) is a 25-kDa GTPase and member of the ARL family (2,3). In contrast to other ARFs and ARLs, ARFRP1 can hydrolyze GTP in the absence of a GTPase-activating protein and lacks the N-myristoylation site (glycine 2), which is required for membrane association (3). For the closest relative of ARFRP1, the yeast Arl3p protein, it was shown recently that membrane association is mediated via acetylation of the N-terminal methionine residue (4,5). ARFRP1 interacts with the Sec7 domain of the ARF-specific guanine nucleotide exchange factor cytohesin 1 in a GTP-dependent manner. This interaction resulted in the inhibition of the ARF/Sec7-dependent activation of phospholipase D in vitro and in vivo (6).
We and others have recently shown that ARFRP1 as well as its yeast ortholog Arl3p specifically control targeting of ARL1 and its effector Golgin-245 to the trans-Golgi (7)(8)(9)(10). GTP-bound ARFRP1 (ARFRP1-Q79L mutant) was associated with Golgi membranes and colocalized with ARL1. In contrast, the guanine nucleotide exchange defective ARFRP1 mutant (ARFRP1-T31N) clustered within the cytosol. Expression of ARFRP1-T31N or depletion of endogenous ARFRP1 by RNA interference disrupted the Golgi association of ARL1 and the GRIP domain protein Golgin-245 and altered the distribution of a trans-Golgi network (TGN) marker, syntaxin 6, indicating that ARFRP1 plays an important role for TGN structure and function (10).
Deletion of Arfrp1 in mice resulted in embryonic lethality (11). Arfrp1−/− blastocysts implanted in vivo and formed egg cylinder-stage embryos that appeared normal until day 5. During early gastrulation (at day 6-6.5), Arfrp1−/− embryos exhibited profound alterations of the distal part of the egg cylinder. Rounded pyknotic cells within this area were only loosely attached to the ectodermal cell layer, and some apoptotic cells were found in the proamniotic cavity. This observation suggested that ARFRP1 plays a critical role in processes during early gastrulation such as adhesion-dependent morphogenesis, cytoskeletal reorganization, and/or development of cell polarity (11).
Specific contacts of cells to the extracellular matrix and to neighboring cells are fundamental for embryogenesis, survival, and wound repair. Cadherins represent a large family of cell-cell adhesion proteins that play crucial roles in tissue patterning, cellular growth control, and in the regulation of cell shape and migration (12)(13)(14). Changes in cadherin expression are associated with numerous developmental events such as epithelial-mesenchymal transitions; during gastrulation, for example, each member of the family exhibits a specific spatial and temporal expression pattern (15). E-cadherin, the prototypical member of the classic cadherin family, is a major component of epithelial adherens junctions, where it mediates cell-cell adhesion through calcium-dependent, homophilic binding between molecules on adjacent cells (13,16,17). At the adherens junction, E-cadherin is bound to catenins, with β-catenin attached to the cytoplasmic domain of E-cadherin and α-catenin associated with β-catenin. In contrast to previous models, α-catenin does not directly link the cadherin-catenin complex to the actin cytoskeleton (18,19). Recently, EPLIN/Lima-1 was identified as a missing link between the cadherin-catenin complex and the actin cytoskeleton (20). p120ctn binds to a juxtamembrane site in the cytoplasmic tail of E-cadherin (21), and several roles of p120ctn in modulating cadherin function have been discussed in the literature (22). p120ctn is implicated to be involved in exocytosis, endocytosis, and turnover of cadherins (23)(24)(25)(26).
In this study, we tested the hypothesis that ARFRP1 modulates cadherin-mediated adhesion processes. We find ARFRP1 in a complex with E-cadherin, β-catenin, α-catenin, γ-catenin, and p120ctn. Additional data suggest that ARFRP1 is required for cell surface localization of E-cadherin because in the absence of ARFRP1, E-cadherin is dislocated from the plasma membrane, and cell adhesion is markedly reduced in vivo and in vitro.
Cell Culture and Transient Transfection-HeLa cells were cultured in minimum essential medium with Earle's salts plus 10% (v/v) FCS. For aggregation assays, Ltk− Ecad cells were grown in Dulbecco's modified Eagle's medium (DMEM) high glucose in the presence of 10% (v/v) FCS, 100 units/ml penicillin, and 100 µg/ml streptomycin at 5% CO2. Transient transfections of cells were performed with Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocols.
MDCK T23 Cells and Stable Transfection of Myc-Arfrp1-The MDCK T23 cell line, which stably expresses the tetracycline-repressible transactivator, was described earlier (31) and was kindly provided by Prof. Keith E. Mostov (Department of Anatomy, University of California, San Francisco). MDCK T23 cells were maintained in DMEM with 10% FCS supplemented with the necessary antibiotics and cultured under continuous presence of 40 ng/ml doxycycline. The medium was renewed every 48 h. An N-terminal Myc tag was fused to the mouse Arfrp1 open reading frame by PCR and cloned into the pTRE2hyg vector (Clontech). Transfection of MDCK T23 cells was performed in 6-well plates with Lipofectamine 2000 according to the manufacturer's instructions. Twenty-four hours after transfection, cells were reseeded into 10-cm dishes, and selection of transfected cells was achieved with 40 ng/ml doxycycline and 300 µg/ml hygromycin in DMEM. After selection for 12 days, surviving colonies were isolated with the use of cloning rings and expanded in 48 wells. At confluency, cells from each of the surviving clones were split and maintained in the presence or absence of doxycycline. ARFRP1 expression was assessed by immunofluorescence microscopy 48 h after removal of doxycycline. Clones positive for ARFRP1 expression were expanded, and inducible expression was confirmed by Western blot analysis.
Immunocytochemistry and Indirect Immunofluorescence Microscopy-At the indicated time points, cells were washed with PBS and fixed with methanol (−20°C for 10 min). Cells were washed with PBS, blocked with PBS, 0.1% (v/v) Tween 20, 5% (v/v) normal goat serum for 20 min at room temperature, and incubated with primary antibodies in antibody diluent (Dako, Glostrup, Denmark) for 1 h at room temperature. After extensive washing with PBS, 0.1% (v/v) Tween 20, cells were incubated with Alexa Fluor 488- or Alexa Fluor 546-conjugated secondary antibodies in antibody diluent at room temperature for 30 min. After washing with PBS, 0.1% (v/v) Tween 20, cells were mounted in fluorescent mounting medium (Dako) and analyzed with a Leica TCS SP2 Laser Scan inverted microscope. We scanned the cells sequentially with an argon-krypton laser (488 nm) to excite the Alexa 488 dye, and with a helium-neon laser (543 nm) to excite the Alexa 546 dye. The spectral detector recorded light emission at 510-560 and 580-660 nm, respectively. We processed images of 1024 × 1024 pixels with Adobe Photoshop CS (version 8.0.1).
Generation of Intestine-specific Arfrp1 Null Mutants (Arfrp1vil−/− Mice)-For tissue-specific disruption of Arfrp1, we used the Cre/loxP recombination system and generated Arfrp1flox/flox mice in which exons 2 and 4 of Arfrp1 were flanked with loxP sites. The targeting vector also contained a pGKneo/HSVtk cassette (Neo/tk) with a third loxP site that was introduced between exons 4 and 5. It was electroporated into embryonic stem (ES) cells that were screened for homologous recombination. A homologously recombined ES cell clone containing the targeted allele was retransfected with pIC-Cre to generate ES cell clones carrying the floxed Arfrp1 allele. One ES cell clone was injected into blastocysts, which were subsequently transferred into a day 2.5 pseudopregnant female C57BL/6 mouse. Male chimeric mice were mated with C57BL/6 females to generate Arfrp1flox/+ mice. Arfrp1flox/+ mice were backcrossed with C57BL/6 three times. Intestine-specific Arfrp1 null mutants (Arfrp1vil−/−) were generated by intercrossing Arfrp1flox/flox with transgenic mice that express Cre recombinase under the control of the villin promoter/enhancer (villin-Cre) (32). The animals were housed in a controlled environment (20 ± 2°C, 12:12-h light/dark cycle) and had free access to water and a standard chow diet. All animal experiments were approved by the Ethics Committee of the Ministry of Agriculture, Nutrition, and Forestry (State of Brandenburg, Germany).
RNA Preparation and First Strand cDNA Synthesis-Total RNA from different tissues of the mice was extracted, and cDNA synthesis was performed as described previously (33).
Quantitative RT-PCR-Quantitative real-time PCR analysis (qRT-PCR) was performed using the Applied Biosystems 7300 Real-Time PCR System as described previously (33). For the determination of Arfrp1 mRNA levels in ileum and colon, a TaqMan gene expression assay was used (Arfrp1 E6_E7, Mm00513004_m1). Data were normalized (34), and a β-actin expression assay (Mm00607939_s1) was used as endogenous control.
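For illustration, a relative-expression calculation normalized to β-actin could look like the sketch below; it assumes a standard 2^(-ΔΔCt) approach, whereas the normalization procedure actually cited here (ref. 34) may differ, and the Ct values are placeholders.

```python
# Placeholder Ct values (not measured data)
ct = {
    "control":  {"Arfrp1": 24.1, "Actb": 18.0},
    "knockout": {"Arfrp1": 29.8, "Actb": 18.2},
}

dct_control  = ct["control"]["Arfrp1"]  - ct["control"]["Actb"]   # dCt = Ct(target) - Ct(reference)
dct_knockout = ct["knockout"]["Arfrp1"] - ct["knockout"]["Actb"]
ddct = dct_knockout - dct_control
fold_change = 2 ** (-ddct)                                        # relative expression, 2^(-ddCt)

print(f"Arfrp1 expression in knockout relative to control: {fold_change:.3f}")
```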
Knockdown of Endogenous ARFRP1 by shRNA Interference-The mammalian expression vector pSUPER.basic (OligoEngine) was used for expression of shRNA targeting human ARFRP1 in HeLa cells. A gene-specific insert defining a 19-nucleotide sequence corresponding to nucleotides 691-709 (GTGGATGGTGAAGTGTGTC, GenBank™ accession number NM_003224.2, ARFRP1-shRNA) was separated by a 9-nucleotide noncomplementary loop sequence (TTCAAGAGA) from the reverse complement of the same 19-nucleotide sequence. Both sequences were subcloned into the BglII and HindIII sites of the pSUPER vector, referred to as pSUPER-ARFRP1. HeLa cells were transfected with pSUPER or pSUPER-ARFRP1 and processed for Western blot analysis or immunofluorescence after 4-8 days of incubation.
Adhesion Assay-For cell aggregation assays, 10^5 Ltk− Ecad cells were transfected with siCONTROL™ nontargeting siRNA, ARFRP1-specific siRNAs (si-a-ARFRP1 and si-b-ARFRP1), and mutated siRNAs (scrambled-a and scrambled-b) using Lipofectamine 2000 (Invitrogen) according to the manufacturer's protocols. After 3 days, cell aggregation assays were performed as described previously (37). At the time points 0 and 45 min, two 10-µl aliquots were removed, and cells/cell aggregates were photographed using an Olympus BX-60 microscope with an Achrostigmat objective (10× magnification, 0.25 numerical aperture). JPEG images were generated with the Soft Imaging System Color View 12 and the analysis 3.0 software. After counting of particles (cells and cell aggregates), the aggregation index was calculated according to Nagafuchi and Takeichi (41) as A_i = (N_0 − N_t)/N_0, where N_0 is the total particle number at t = 0 min, and N_t is the particle number after an incubation period of 45 min. Mean values ± S.D. of four independent aggregation assays are presented.
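For illustration, a minimal sketch of the aggregation-index calculation is shown below; the particle counts are hypothetical, and the averaging over four assays simply mirrors the description above.

```python
# Minimal sketch of the Nagafuchi & Takeichi aggregation index, A_i = (N0 - Nt) / N0,
# where N0 is the particle count at t = 0 min and Nt the count after 45 min of aggregation.
# Particle counts below are hypothetical.

def aggregation_index(n0: int, nt: int) -> float:
    return (n0 - nt) / n0

# Hypothetical counts from four independent assays
assays = [(420, 150), (398, 160), (410, 175), (432, 140)]

indices = [aggregation_index(n0, nt) for n0, nt in assays]
mean_ai = sum(indices) / len(indices)
sd_ai = (sum((x - mean_ai) ** 2 for x in indices) / (len(indices) - 1)) ** 0.5

print(f"Aggregation index: {mean_ai:.2f} +/- {sd_ai:.2f} (mean +/- S.D., n = {len(indices)})")
```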
Distribution of E-cadherin Is Altered in Arfrp1 −/− Embryos-We have previously shown that deletion of Arfrp1 in mice results in embryonic lethality because of the failure of differentiating epiblast cells to form the mesoderm. Epiblast cells of Arfrp1 −/− embryos detached from the embryonic ectoderm, consistent with a defect in the regulation of cell-cell adhesion (22). Because changes in cadherin expression coincide with gastrulation (38), we analyzed the expression and distribution of E-cadherin and N-cadherin in control and Arfrp1 −/− embryos between ED 5.0 and 6.5.
In ED 5.0 control embryos (Fig. 1), a weak ARFRP1 expression is detected in ectodermal cells, and E-cadherin is located at the lateral membrane of trophectodermal epithelial cells (arrow in Fig. 1) and at the surface of ectodermal cells. In contrast, in ED 5.0 Arfrp1 −/− embryos, no E-cadherin staining was visible in the trophoblast, and only a punctate pattern was detected in the embryonic ectoderm. The phenotype observed at ED 5.0 was even more pronounced at ED 6.0. At this stage, control embryos show a regular E-cadherin staining at the surface of each cell, whereas it is present predominantly in intracellular, aggregate-like structures in the Arfrp1 −/− embryos. No N-cadherin staining was detectable in ED 5.0 and ED 6.0 embryos as described previously (data not shown, see Ref. 39). To test whether other plasma membrane proteins were affected in Arfrp1 −/− embryos, we stained the glucose transporter GLUT1. GLUT1 was detected at the cell surface of both control and Arfrp1 −/− embryos (supplemental Fig. 1A). In addition, the plasma membrane marker Na+/K+-ATPase, an integral membrane protein complex, was detected at the cell surface of the Arfrp1 −/− embryos (supplemental Fig. 1B).

Retention of E-cadherin in the Golgi of Arfrp1 −/− Intestinal Epithelial Cells-The intestinal epithelium is characterized by rapid cellular turnover with continuous proliferation, cellular migration, differentiation, and polarization (40). Here, E-cadherin, together with integrins, plays an important role for the development and maintenance of normal intestinal epithelial architecture and is required for complex cell-cell interactions.
As shown in Fig. 3, E-cadherin was localized in the lateral membrane of the cell surface of crypts and villi in control Arfrp1 flox/flox mice. In contrast, in Arfrp1 vil−/− mice we detected E-cadherin also in intracellular compartments (arrows in Fig. 3A) and as puncta (arrowheads in Fig. 3A) close to the plasma membrane. The expression of E-cadherin as detected by Western blotting (Fig. 3B) was not altered in ileum and colon of Arfrp1 vil−/− mice.
Defective Trans-Golgi Organization Is Associated with Altered E-cadherin Distribution in Intestinal Enterocytes in the Absence of Arfrp1-Because we and others have previously shown that ARFRP1 controls targeting of ARL1 to Golgi membranes (7, 8, 10), we analyzed the ARL1 distribution in intestinal epithelial cells of control and Arfrp1 vil−/− mice. In Arfrp1 flox/flox mice, ARL1 was associated with Golgi membranes (arrows in Fig. 4A, left panel) and was also present in the cytosol (arrowheads in Fig. 4A, left panel). In contrast, in Arfrp1 knock-out mice ARL1 was exclusively located in the cytosol of epithelial cells. Consistent with the intense staining of ARL1 in knock-out tissue, the ARL1 protein in total lysates of ileum (Fig. 4A, lower panel) was more abundant in Arfrp1 vil−/− mice than in controls.
We next stained sections of ileum with specific Golgi markers. The trans-Golgi marker TGN38 (Fig. 4B) detected the Golgi in all control cells, whereas no staining was observed in intestinal sections of Arfrp1 vil−/− mice. In contrast, the Golgi complex marker 58K stained the Golgi in epithelial cells of control and Arfrp1 vil−/− mice (supplemental Fig. 2). The cis-Golgi could be visualized with the anti-GM130 antibody in both Arfrp1 flox/flox and Arfrp1 vil−/− cells. However, the pattern of GM130 staining differed somewhat in the Arfrp1 vil−/− cells, where the cis-Golgi appeared to be broader. Interestingly, costaining with the anti-E-cadherin antibody demonstrated that the intracellular E-cadherin partially colocalized with GM130 (arrows in Fig. 4C), consistent with the conclusion that E-cadherin is retained in the Golgi.
To examine whether ARFRP1 also regulates the trafficking of catenins through the Golgi, we stained sections of intestine of control and Arfrp1 vil−/− mice with an anti-β-catenin antibody. β-Catenin was detected at the cell surface of the epithelial cells of Arfrp1 flox/flox mice. In cells of Arfrp1 vil−/− ileum, β-catenin was partially located intracellularly (arrows in supplemental Fig. 3). In contrast, other cell surface proteins, e.g., the apical protein dipeptidyl peptidase 4 and the lateral protein GLUT2, showed no differences in their subcellular localization between cells from control and Arfrp1 vil−/− mice (Fig. 4D).
ARFRP1 Is Essential for Correct Targeting of E-cadherin in HeLa Cells-The results shown in Figs. 1 and 3 suggested that the correct targeting of the cell-adhesion molecule E-cadherin requires the presence of ARFRP1. To further support this conclusion, we studied the cellular localization of E-cadherin in HeLa cells in which expression of ARFRP1 was suppressed by an ARFRP1-specific shRNA construct (Fig. 5A). The same shRNA construct was previously used to demonstrate that Golgi association of ARL1 and its effector Golgin-245 is disrupted in cells lacking ARFRP1 (10).
Indeed, in ARFRP1 knockdown cells identified by a diffuse ARL1 staining (10), no E-cadherin was detected at the plasma membrane (Fig. 5B). Only in cells with Golgi-associated ARL1 was cell surface staining of E-cadherin detected (see arrows in lower panel of Fig. 5B). In contrast to the results obtained by in vivo knock-out of Arfrp1 (Figs. 2-4), E-cadherin protein levels were markedly reduced by suppression of Arfrp1 expression (Fig. 5, upper panel).
ARFRP1-regulated Targeting of E-cadherin Is Independent of ARL1-To elucidate whether the ARFRP1-dependent recruitment of ARL1 to the trans-Golgi is required for the correct targeting of E-cadherin to the cell surface, we inhibited ARL1 expression in HeLa cells as described earlier (28) and stained them for E-cadherin. No difference in E-cadherin localization was detectable in control and ARL1 knockdown cells (supplemental Fig. 4), indicating that the association of ARL1 and its effector Golgin-245 with Golgi membranes is not required for the correct E-cadherin localization at the plasma membrane.
ARFRP1 Is Associated with the E-cadherin-Catenin Protein Complex-The observation that ARFRP1 expression is required for cell surface targeting of E-cadherin suggests an association of ARFRP1 with the E-cadherin-catenin complex. To test this assumption, we used MDCK cells stably expressing Myc-tagged ARFRP1 (myc-ARFRP1) in the absence of doxycycline (Tet-Off system). Fig. 6A demonstrates that expression of myc-ARFRP1 begins 2 days after cultivation of MDCK cells in the absence of doxycycline. The expression reaches a maximum at day 4 and stays stable until day 10 (Fig. 6A). ARFRP1 was immunoprecipitated with anti-Myc antibody, and the associated proteins were detected by Western blot analyses. As shown in Fig. 6B, E-cadherin, α-catenin, β-catenin, γ-catenin, and p120 ctn coimmunoprecipitated with ARFRP1. In contrast, IQGAP1 was not coimmunoprecipitated with the anti-Myc antibody (Fig. 6B). These observations suggest that ARFRP1, by interacting with the E-cadherin-catenin-p120 ctn complex, is involved in correct localization of E-cadherin to the cell surface. To analyze in which cellular compartment the interaction of ARFRP1 with the E-cadherin complex occurs, we stained MDCK cells overexpressing Myc-tagged ARFRP1 for E-cadherin with the anti-gp84 antibody and for ARFRP1 with the anti-Myc antibody. As shown in Fig. 6C, E-cadherin is predominantly located at the cell surface and is only partially present in intracellular membranes, where it colocalizes with ARFRP1 (arrows in Fig. 6C). In contrast, myc-ARFRP1 is predominantly located in intracellular membranes and the cytosol and only to a small part at the cell surface (arrowheads in Fig. 6C).
Reduced E-cadherin-mediated Adhesion in ARFRP1 Knockdown Cells-To analyze whether E-cadherin-mediated adhesion was affected by knockdown of Arfrp1, we used Ltk− Ecad cells, which are mouse fibroblasts (Ltk−) stably expressing E-cadherin (37, 41). Arfrp1 expression was inhibited by siRNA (Fig. 7, C and D), and cell aggregation assays were performed. Cells were trypsinized to generate single cell suspensions and were allowed to aggregate under constant agitation for 45 min. Cell aggregates were counted before and after the incubation period. In untransfected cells or in cells transfected with a nontargeting siRNA, Ltk− Ecad cells formed aggregates.
FIGURE 5. Inhibition of ARFRP1 expression in HeLa cells impairs cell surface localization of E-cadherin.
A, ARFRP1 expression was down-regulated in HeLa cells by transfection of pSUPER-ARFRP1, and protein lysates were prepared 4, 6, or 8 days after transfection. Expression of ARFRP1, E-cadherin, and α-tubulin as a loading control was detected by Western blot analyses as described under "Experimental Procedures." B, after 4 days, cells were fixed with methanol and stained for ARL1 with an affinity-purified polyclonal anti-ARL1 antibody in combination with an Alexa 488-conjugated secondary antibody. E-cadherin was stained with the anti-gp84 antibody in combination with an Alexa 546-conjugated secondary antibody. Immunofluorescence was analyzed by confocal laser scanning microscopy as described under "Experimental Procedures." Arrows in the lower panel depict a cell with normal ARL1 localization showing cell surface staining of E-cadherin.
In contrast, in cells transfected with ARFRP1-siRNAs, a marked reduction in aggregation was observed, resulting in a 50–60% reduction of the aggregation index (Fig. 7, A and B). To show the specificity of this effect, cells were transfected with siRNA oligonucleotides in which 3 bases of the ARFRP1-specific siRNAs were mutated (scrambled siRNA). Cell aggregation was unaltered in these cells. Furthermore, Arfrp1-specific siRNA suppressed only the mRNA expression of Arfrp1, whereas mRNA levels of E-cadherin were not altered. These data confirm that ARFRP1 is required for localization of functional E-cadherin at the cell surface and is therefore involved in the regulation of E-cadherin-mediated cell-cell adhesion. It should be noted that in Arfrp1-depleted Ltk− Ecad cells, E-cadherin protein levels were reduced (Fig. 7C), suggesting that incorrect targeting of E-cadherin results in enhanced degradation in these cells. In contrast, E-cadherin expression was not reduced in Arfrp1 vil−/− intestinal cells (Fig. 3B), and E-cadherin did not colocalize with the lysosomal marker LAMP1 (supplemental Fig. 5).
DISCUSSION
Here we demonstrate that ARFRP1 is required for cell surface localization of E-cadherin and that it is associated with the E-cadherin-catenin complex. First, as early as ED 5.0, Arfrp1 −/− embryos exhibit an abnormal subcellular distribution of E-cadherin, which appeared retained in intracellular compartments (Fig. 1A). Second, in epithelial cells of intestine-specific Arfrp1 knock-out (Arfrp1 vil−/−) mice, E-cadherin was associated with intracellular membranes, partially colocalizing with the cis-Golgi (Figs. 3 and 4). Third, shRNA-mediated knockdown of Arfrp1 in HeLa cells resulted in a loss of E-cadherin from the cell surface (Fig. 5). Fourth, E-cadherin and its binding partners α-catenin, β-catenin, γ-catenin, and p120 ctn coimmunoprecipitated with ARFRP1 from lysates of MDCK cells overexpressing myc-ARFRP1 (Fig. 6). Finally, E-cadherin-mediated adhesion was affected by knockdown of Arfrp1 in Ltk− Ecad cells (Fig. 7).
After its synthesis, E-cadherin is translocated from the trans-Golgi network to the cell surface for incorporation, together with catenins, into adherens-junction complexes (22, 42). The partial colocalization of E-cadherin with a cis-Golgi marker in Arfrp1 knock-out cells suggests that ARFRP1 is involved in the transport of E-cadherin from the Golgi apparatus to the cell surface. We also detected β-catenin partially located in intracellular membranes of Arfrp1 vil−/− epithelial cells (supplemental Fig. 3), indicating that mistargeting of E-cadherin also affects its interaction partner β-catenin.
Based on in vitro data from cultured cells transfected with ARFRP1 constructs, we and others have previously suggested that the GTPase is required for targeting of ARL1 and of its effector Golgin-245 to the trans-Golgi network (7-10, 43). The present data provide solid proof for this conclusion by showing that ARL1 dissociated from Golgi membranes to the cytosol after in vivo knock-out of Arfrp1 (Arfrp1 vil−/− epithelial cells, Fig. 4A). Furthermore, the trans-Golgi marker TGN38 was undetectable in the Arfrp1 vil−/− intestine (Fig. 4B), indicating that ARFRP1 is required for the correct organization of the trans-Golgi.
The observation of marked alterations of the trans-Golgi network raises the question whether ARFRP1 plays a general role in organizing the trans-Golgi, and thereby affects E-cadherin targeting, or whether ARFRP1 specifically regulates E-cadherin trafficking through the TGN. Three findings support the conclusion that ARFRP1 specifically modulates the transport of E-cadherin from the Golgi to the plasma membrane as follows.
1) ARFRP1 interacts with the E-cadherin-catenin complex. 2) Knockdown of the trans-Golgi protein ARL1, which is also required for the organization of the trans-Golgi, did not result in a mistargeting of E-cadherin in HeLa cells (supplemental Fig. 4) or in A431 cells (data not shown). 3) Knock-out of Arfrp1 did not dislocate other cell surface proteins, such as the ubiquitously expressed glucose transporter GLUT1 (supplemental Fig. 1A) or the Na+/K+-ATPase (supplemental Fig. 1B), to intracellular membranes.
Interestingly, removal of the ARL1 effector Golgin-97 from Golgi membranes by overexpression of GRIP domains or depletion of Golgin-97 by siRNA resulted in an inhibition of E-cadherin targeting (44). This finding supports the hypothesis that specific properties of the TGN are required for sufficient and correct targeting of E-cadherin. However, because depletion of ARL1 failed to modify the E-cadherin distribution in HeLa cells (supplemental Fig. 4), we can exclude that ARL1 acts downstream of ARFRP1 in controlling E-cadherin localization.
The investigation of the functional relevance of ARFRP1 for E-cadherin cell surface localization demonstrated a markedly reduced adhesiveness in Ltk− Ecad cells in which ARFRP1 was depleted by siRNA. We chose this model because here cell adhesion is mediated by E-cadherin only. The impaired cell-cell adhesion in the absence of ARFRP1 in this system may be ascribed to incorrect sorting of E-cadherin. In contrast, intestinal epithelial cells express other cell adhesion proteins such as LI-cadherin (45) and integrins (46). In fact, the intestinal epithelium of Arfrp1 vil−/− mice does not exhibit visible adhesion defects despite the dislocation of E-cadherin (Fig. 3). However, as demonstrated in Figs. 1 and 3, deletion of Arfrp1 in embryos or the intestinal epithelium did not result in a complete loss of E-cadherin from the cell surface, suggesting that the transport of E-cadherin from the TGN to the plasma membrane is not completely disrupted but markedly impaired.
In both in vivo systems, Arfrp1 −/− embryos (ED 6.0) and intestine of Arfrp1 vil−/− mice, we detected an altered subcellular distribution of E-cadherin but no reduction of E-cadherin protein levels in comparison with wild-type controls. In contrast, knockdown approaches in HeLa and Ltk− cells showed decreased E-cadherin protein levels in Arfrp1-depleted cells. Because the RNA levels of E-cadherin were not affected, we suggest that a mistargeting or impaired processing of E-cadherin results in its elevated degradation in some systems.

FIGURE 7. Depletion of ARFRP1 impairs E-cadherin-mediated cell aggregation. Ltk− Ecad cells were transfected as indicated without siRNA (control), nontarget siRNA (si-control), two different ARFRP1-specific siRNAs (si-a-ARFRP1 and si-b-ARFRP1), and two mutated ARFRP1-specific siRNAs (scrambled-a and scrambled-b). Single cell suspensions were allowed to aggregate for the indicated times. A, microscopic examination of Ltk− cells stably expressing E-cadherin (Ltk− Ecad) treated with different ARFRP1-directed or control siRNAs at t = 0 min and at t = 45 min. B, quantification of single cells and cell aggregates represented as the aggregation index. The aggregation index was calculated according to Nagafuchi and Takeichi (41) as A_i = (N_0 − N_t)/N_0, where N_0 is the total particle number at t = 0 min and N_t is the particle number after an incubation period of 45 min. Bars represent mean values (± S.D.) of four independent experiments with two samples counted at each time point. C, expression of ARFRP1 and E-cadherin in transfected Ltk− Ecad cells was analyzed by Western blotting, with β-actin detected as a loading control. D, expression of ARFRP1 and E-cadherin mRNA was detected by qRT-PCR as described under "Experimental Procedures."
In addition to ARFRP1, several other GTPases modulate the exocytotic and endocytotic trafficking of E-cadherin (22,47). Rac1 regulates endocytosis and trafficking of E-cadherin to the cell surface during epithelial morphogenesis (48). Wang et al. (49) demonstrated that expression of dominant-negative Rac1 and Cdc42 led to the accumulation of E-cadherin at a distinct post-Golgi step before E-cadherin interacts with p120 ctn . In addition, expression of Rab5 (50) or ARF6 (51) mutants can block endocytosis of E-cadherin.
In summary, our data provide evidence that ARFRP1 plays an important role in the correct cell surface targeting of E-cadherin. This finding suggests that ARFRP1 is thereby involved in the regulation of the specific spatio-temporal expression pattern of E-cadherin during early embryogenesis, which is essential for morphological events such as gastrulation, neurulation, cardiogenesis, and somitogenesis (15, 52, 53). | 6,469.4 | 2008-10-03T00:00:00.000 | [
"Biology"
] |
Fungal Community Composition at the Last Remaining Wild Site of Yellow Early Marsh Orchid (Dactylorhiza incarnata ssp. ochroleuca)
The yellow early marsh orchid (Dactylorhiza incarnata ssp. ochroleuca) is a critically endangered terrestrial orchid in Britain. Previous attempts to translocate symbiotic seedlings to a site near the last remaining wild site demonstrated some success, with a 10% survival rate despite adverse weather conditions over a two-year period. However, to facilitate future reintroduction efforts or conservation translocations, a more comprehensive understanding of the fungal microbiome and abiotic soil characteristics at the final remaining wild site is required. Obtaining comprehensive information on both the fungal community and soil nutrient composition from wild sites has significant benefits and may prove critical for the success of future conservation translocations involving threatened orchids. This preliminary study, conducted at the last remaining wild site, revealed a significant correlation between the relative abundance of the orchid mycorrhizal fungal order Cantharellales and the concentrations of nitrate and phosphate in the soil. Another orchid mycorrhizal fungal group, Sebacinales, was found to be distributed extensively throughout the site. The composition of fungal communities across the entire site, and in orchid-hosting and non-orchid-hosting soils, is discussed in relation to reinforcing the current population and preventing the extinction of this orchid.
Introduction
Despite being the second-largest family of flowering plants, members of the Orchidaceae face a high risk of extinction, with terrestrial orchids being particularly vulnerable [1]. Various factors influence the population dynamics of orchids, including pollinators, climate change, and orchid mycorrhizal fungi [2][3][4][5][6][7][8]. In the wild, successful seedling recruitment of terrestrial orchids depends on the presence of compatible mycorrhizal fungi in the soil, either specialist or generalist [9,10]. Due to their minute, nutrient-lacking seeds, orchid germination relies on the assistance of fungi that provide essential resources such as carbon [11]. Human activities have long-term effects on mycorrhizal fungal communities, contributing to the rarity of terrestrial orchids in critical ecosystems [12][13][14]. Previous studies on mycorrhizal fungi and other fungal groups have shown that declines in orchid populations can be linked to soil characteristics [15][16][17][18][19][20]. While specific relationships have yet to be thoroughly investigated, evidence suggests that variations in mycorrhizal communities driven by habitat conditions impact the local distribution of terrestrial orchids, as seen in genera such as Dactylorhiza [21][22][23].
Conservation efforts for orchids require tailored approaches specific to individual species and their ecology [1,24-26]. This is particularly relevant for the critically endangered yellow early marsh orchid (Dactylorhiza incarnata (L.) Soó subsp. ochroleuca (Wüstnei ex Boll) P.F.Hunt & Summerh) [21]. The last remaining population in Britain was found on a protected fen habitat in Suffolk [27], meaning that this orchid is on the brink of extinction. As part of a pilot study, symbiotic seedlings were successfully produced and translocated to a newly identified site near the wild location, aiming to explore the feasibility of conservation translocation for this taxon [27]. Despite facing unexpected challenges such as year-long flooding and an unusually hot and dry summer in 2022, a 10% recovery rate was achieved, with the translocated plants still thriving at the new site [27]. The senility of the remaining population has hindered natural seedling recruitment, and annual fluctuations in orchid numbers at the wild site further emphasise the need for future reinforcement. Consequently, a pragmatic solution is to introduce seedlings into regions of the last remaining wild site that currently lack orchids, thereby reinforcing the existing population. To ensure success, it is crucial to gain a comprehensive understanding of the mycorrhizal communities at the wild site, including how they vary spatially and in response to biotic conditions.
High-throughput sequencing methods offer significant advantages over conventional approaches in identifying fungal communities within the soil, providing enhanced resolution and species detection capabilities. Primarily, qPCR and DNA metabarcoding techniques enable the identification and relative quantification of community components, thereby offering valuable insights into fungal community ecology [10,28]. These methods are increasingly becoming primary tools for the assessment of diverse groups of plant-associated fungal communities [29]. As these fungal groups can play a critical role in the fitness of their host plants, understanding their diversity and relative abundance in soils holds great importance for the conservation of threatened orchids.
The aim of this study was to investigate the composition of orchid mycorrhizal fungi (OMF) and key endophytes known to associate with orchids in regions of the wild site in which orchids were and were not present. To achieve this, soil samples were collected from around orchid populations and across the wider wild site, allowing for a comprehensive analysis of soil biotic and abiotic characteristics. We hypothesised that OMF diversity and abundance may differ in the proximity of orchid populations, although the absence of such trends could indicate the suitability of the wider site for the reintroduction of orchids. Here, we discuss the compositions of the major fungal group communities and soil characteristics at the last wild site of an endangered orchid in Britain.
Sampling Site
In the spring of 2021, the last remaining wild site of the yellow early marsh orchid (D. incarnata ssp. ochroleuca), in Suffolk, Britain [27], was visited for data collection. The site was visually surveyed to identify the presence of orchids and subsequently divided into 20 rectangular plots of equal size (Figure 1b). Plots 11-15 were found to have actively growing orchid populations at the time of sampling, while the remaining plots did not.
To collect representative soil samples, five random soil subsamples weighing 20 g each were taken from the upper 5 cm of substrate within each plot. These subsamples were then combined in labelled plastic bags and homogenised. Within 12 h of sampling, soil was transferred to a 4 °C fridge to minimise DNA degradation. For metabarcoding analysis, a small amount of soil (<250 mg) was added to BashingBead™ Lysis Tubes (Zymo Research, Cambridge Bioscience, Cambridge, UK) to preserve the environmental DNA for extraction and amplification processes.
Soil Processing and Chemical Analyses
Fresh soil samples from individual plots were divided into two subsamples and the first subsample was used for water content. The remaining subsamples were air-dried and passed through a 2 mm sieve before the following chemical analyses. Soil pH and electrical conductivity (EC) were measured in triplicate using calibrated HANNA HI8424 pH and EXTECH EC400 EC meters (Camlab, Cambridge, UK).
Soil nitrate was measured using colorimetry (cadmium reduction method). This process involved two stages of reagent addition. First, NitraVer 6 reagent was added to the diluted soil extract. After the reaction, NitriVer 3 reagent was added, and the colour intensity of the resultant solution was measured using a calibrated Hach DR900 (Camlab, Cambridge, UK) at a measurement wavelength of 520 nm; calibration involved the use of a blank sample of deionised water.
Colorimetry was also used to analyse phosphorus (USEPA Ascorbic Acid Method). This method is a one-stage process in which PhosVer 3 reagent is added to the soil extract. The colour intensity of the resultant solution was measured using a calibrated Hach DR900 at a measurement wavelength of 610 nm. Calibration involved the use of a blank sample of deionised water.
Soil DNA Extraction
DNA was extracted and purified from soil samples stored in BashingBead™ Lysis Tubes using Quick-DNA™ Fecal/Soil Microbe Miniprep Kits (Zymo Research), following the manufacturer's instructions. Briefly, the soil samples were disrupted using a tissue lyser (QIAGEN, Manchester, UK) for approximately 3 min at 25 Hz. The resulting mixture was then centrifuged, and the supernatant was transferred to Zymo-Spin™ III-F Filter Tubes. Genomic lysis buffer was added to the filtrate, and the solution was passed through a Zymo-Spin™ IICR Column. The column was cleaned using DNA Pre-Wash Buffer and g-DNA Wash Buffer, and the eluted DNA was passed through a pre-prepared Zymo-Spin™ III-HRC Filter.
The DNA concentration and quality of all 20 eluted samples were assessed using a Nanodrop 2000/2000c Spectrophotometer (Thermo Scientific, Waltham, MA, USA).
Metabarcoding
Purified DNA samples were amplified by PCR in the internal transcribed spacer 2 (ITS2) region, targeting fungi as part of the eDNA survey-fungi pipeline. The analysis included 3 replicate PCRs per sample, with the primers used in the metabarcoding step as described by White et al. [30]. PCRs were performed in the presence of both negative and positive control samples (a mock community with a known composition). Amplification success was determined by gel electrophoresis. PCR replicates were pooled and purified, and sequencing adapters were added and confirmed by gel electrophoresis. Sequences were then quantified using a Qubit broad range kit (Thermo Fisher, Swindon, UK) according to the manufacturer's protocol. The final library was sequenced using an Illumina MiSeq V3 kit (San Diego, CA, USA) at 10.5 pM with a 20% PhiX spike-in. Resulting sequence data underwent processing using a specialised bioinformatics pipeline, which involved data filtering and trimming, merging paired ends, eliminating sequencing errors (e.g., chimeras), clustering similar sequences into molecular Operational Taxonomic Units (OTUs), and aligning a representative sequence from each cluster with a reference database. These steps transformed raw sequence data into usable data for ecological analysis.
Sequences were demultiplexed based on the combination of the i5 and i7 index tags with bcl2fastq (v2.20.0.422; https://support.illumina.com/sequencing/sequencing_software/bcl2fastq-conversion-software.html (accessed on 15 August 2023)). Paired-end FASTQ reads for each sample were merged with USEARCH v11 [31], requiring a minimum overlap of 80% of the total read length. Merged sequences were quality filtered with USEARCH to retain only those with an expected error rate per base of 0.01 or below and dereplicated by sample, retaining singletons. Dereplicated sequences were then processed with ITSx (v1.1b1) to extract only fungal ITS2 sequences, removing the primers and any remaining ribosomal sequence. Unique ITS2 sequences from all samples were denoised in a single analysis with UNOISE [32], requiring retained zero-radius OTUs (ZOTUs) to have a minimum abundance of eight in at least one sample. Taxonomic assignments were made using sequence similarity [33,34] searches of the ZOTU sequences against two reference databases: the NCBI nucleotide (NCBI nt; downloaded 28 September 2021; https://www.ncbi.nlm.nih.gov/nuccore/ (accessed on 28 September 2021)) database and UNITE (v8.2). Hits were required to have a minimum e-value of 1 × 10^−20 and cover at least 90% of the query sequence.
Consensus taxonomic assignments were made for each OTU using sequence similarity searches against the NCBI nt (GenBank) reference database and UNITE (v8.2). Assignments were made to the lowest possible taxonomic level where there was consistency in the matches. Conflicts were flagged and resolved manually. Minimum similarity thresholds of 98%, 95%, and 92% were used for species-, genus-, and higher-level assignments, respectively. In cases where there were equally good matches to multiple species, public records from GBIF were used to assess which were most likely to be present in the United Kingdom.
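As an illustration of the similarity thresholds described above, the sketch below maps a best-hit percentage identity to the lowest taxonomic rank at which an assignment would be accepted (98% for species, 95% for genus, 92% for higher levels). The function name and the example values are hypothetical, and this is a simplification of the consensus procedure rather than the pipeline's actual code.

```python
# Minimal sketch of the rank-assignment rule described above:
# >=98% identity -> species level, >=95% -> genus level, >=92% -> higher-level assignment,
# below 92% -> unassigned. Function name and example hits are hypothetical.

def assignment_rank(percent_identity: float) -> str:
    if percent_identity >= 98.0:
        return "species"
    if percent_identity >= 95.0:
        return "genus"
    if percent_identity >= 92.0:
        return "higher taxon"
    return "unassigned"

# Hypothetical best-hit identities for three OTUs
for otu, identity in [("OTU_001", 99.1), ("OTU_002", 96.3), ("OTU_003", 90.5)]:
    print(otu, identity, "->", assignment_rank(identity))
```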
In cases where resolution was not possible, higher-level taxonomic identifications or multiple potential identifications were provided. Subsequently, the OTU table underwent filtering to exclude low-abundance OTUs from each sample, using a threshold of <0.025% or <10 reads, whichever was greater. Sequences that were unidentified, non-target, and common contaminants (e.g., human and livestock DNA) were then eliminated. It is important to note that unidentified or misidentified taxa can arise from incomplete or inaccurate reference databases, and some taxa may be missed due to low-quality DNA, environmental contaminants, or the prevalence of other species in the sample.
The identification associated with each hit was converted to match the GBIF taxonomic backbone (3 March 2021 edition; downloaded from https://hosteddatasets.gbif.org/datasets/backbone/2021-03-03/ (accessed on 28 September 2021)), to allow results from different databases to be combined.
Consistency in matches determined the lowest taxonomic level for assignments, with identifications based on fewer than three hits flagged as tentative. Minimum similarity thresholds of 98%, 95%, and 92% were applied for species-, genus-, and higher-level assignments, respectively. To cluster ZOTUs, a 97% similarity threshold was used with the USEARCH tool. An OTU-by-sample table was then generated by mapping dereplicated reads for each sample to the representative sequences of the OTUs, using USEARCH at an identity threshold of 97%. Low-abundance detections were subsequently removed from the analysis. Filter thresholds were established as a percentage of the total reads per sample, utilizing either <0.025% or <10 reads, whichever value was greater. Values in the resulting OTU table were calculated in a comparable manner but expressed as percentages. Each column (sample) represented a sum of 100, with the values indicating the percentage of reads obtained for each OTU sequence in the respective sample.
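A minimal sketch of the per-sample low-abundance filtering and percentage conversion described above is given below, using pandas. The OTU counts are hypothetical, and the exact implementation used in the pipeline is not specified in the text.

```python
import pandas as pd

# Hypothetical OTU-by-sample read counts (rows = OTUs, columns = plots/samples)
counts = pd.DataFrame(
    {"plot_01": [1200, 8, 300, 2], "plot_02": [5, 950, 40, 12]},
    index=["OTU_001", "OTU_002", "OTU_003", "OTU_004"],
)

filtered = counts.copy()
for sample in filtered.columns:
    total = filtered[sample].sum()
    # Remove detections below <0.025% of the sample's reads or <10 reads, whichever is greater
    threshold = max(0.00025 * total, 10)
    filtered.loc[filtered[sample] < threshold, sample] = 0

# Express retained counts as percentages so that each sample column sums to 100
percentages = filtered / filtered.sum() * 100
print(percentages.round(2))
```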
Abiotic Soil Characteristics
While the soil pH was consistent across all plots at the study site, soil phosphate and nitrate concentrations varied spatially (Table 1), with plot 7 exhibiting the highest phosphate and nitrate concentrations (188.92 and 944.59 mg kg−1, respectively) and plot 1 the lowest (15.1 and 75.51 mg kg−1, respectively). However, there were no significant differences in phosphate and nitrate levels between orchid-hosting and non-orchid-hosting plots (Figure 2).

Table 1. Soil abiotic characteristics (pH, electrical conductivity, soil water content, phosphate, and nitrate) in the 20 plots from the wild site of the yellow early marsh orchid (D. incarnata ssp. ochroleuca). Coloured rows (shaded red) represent plots that hosted orchids at the time of sampling; orchids were absent from all other plots.
Soil DNA Concentrations and Fungal Community Composition by Plot
The total DNA yields of soils collected in the plots ranged between 42.1 ng/µL and 164 ng/µL (Table S1). Soils from plot 6 yielded the second-lowest DNA concentration (48.8 ng/µL) as well as the lowest number of fungal taxa (10; Table 2) and were dominated by a single OTU (Tulasnella, relative abundance: 95.5%; Figure 3). In contrast, soils in plot 10 had the highest DNA yields (164 ng/µL) but, as in plot 6, were dominated by a single OTU (Mycena epipterygia, relative abundance: 85.6%; Figure 3). Although plot 9 yielded the lowest concentration of soil DNA, it exhibited the highest number of OTUs (16; Figure 3), while plots 4 and 7 had the greatest richness of fungal taxa (207 and 236 taxa, respectively; Table 2). In general, mycorrhizal fungal OTUs belonging to the Sebacinaceae family were dominant, followed by Thelephoraceae. Sebacinaceae OTUs were detected in all plots except 6 and 7 (Table S2). In comparison, Ceratobasidiaceae and Tulasnellaceae exhibited a narrower OTU distribution and relative abundance.
At the order level, three out of five orchid-hosting plots contained Cantharellales and 10 out of 15 non-orchid-hosting plots contained Cantharellales, indicating a relatively even distribution between the two groups (Table S2).
The relative abundance of Agaricales was found to be the highest among all the orders distributed across the site (Table S2). The relative abundance of Agaricales was greater in orchid-hosting plots than in non-orchid-hosting plots, with the exception of plot 10. Correlations between larger groups of fungi and soil characteristics were conducted at the order level, specifically focusing on the soil nutrients nitrate and phosphate. Pleosporales fungi were the most abundant order and, similar to Capnodiales, were present in all plots. Helotiales and Sordariales exhibited good relative abundance in all plots except for plot 6. Overall, the relative abundance of Ascomycetous fungi (mainly falling into dark septate endophytes) was observed in Pleosporales, Helotiales, Hypocreales, and Sordariales. On the other hand, the most abundant group of Basidiomycetous fungi was Agaricales, followed by Sebacinales and Thelephorales (Figure 4). In plots 5 and 6, the diversity of fungi, including both endo- and ectomycorrhizal fungi, was low. Interestingly, both plots 5 and 6 were colonised by the fungal genus Paraphoma. Agaricales were detected in all plots except for plot 6. Ectomycorrhizal fungi in the order Thelephorales were present in most plots except for plots 2, 6, and 20. Among these plots, five exhibited a notably high relative abundance of this fungal group.
Relationship between Soil Characteristics and Fungal Distribution and Relative Abundance
Results are shown for all 20 plots within the site (all plots), orchid-hosting plots (11-15), and non-orchid-hosting plots (1-10 and 16-20). The analysis of the data generated a symmetrical correlation heatmap with Pearson's 'r' ranging from +1 (blue) to −1 (red), and the corresponding p values for each correlation are presented in the Supplementary File, Table S3.
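As an illustration of this kind of analysis, a minimal sketch using pandas, SciPy, and matplotlib is shown below. It computes pairwise Pearson's r and p values between fungal-order relative abundances and soil nutrient concentrations and draws a symmetrical heatmap; the column names and values are hypothetical, and the exact software used in the study is not stated here.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import matplotlib.pyplot as plt

# Hypothetical per-plot data: relative abundances of fungal orders (%) and soil nutrients (mg/kg)
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "Cantharellales": rng.uniform(0, 10, 20),
    "Sebacinales":    rng.uniform(0, 8, 20),
    "Agaricales":     rng.uniform(0, 40, 20),
    "Nitrate":        rng.uniform(75, 950, 20),
    "Phosphate":      rng.uniform(15, 190, 20),
})

cols = data.columns
r = pd.DataFrame(index=cols, columns=cols, dtype=float)
p = pd.DataFrame(index=cols, columns=cols, dtype=float)
for a in cols:
    for b in cols:
        r.loc[a, b], p.loc[a, b] = pearsonr(data[a], data[b])

p.to_csv("correlation_pvalues.csv")  # keep the p values alongside the heatmap

# Symmetrical heatmap of Pearson's r, blue = +1, red = -1
plt.imshow(r.values.astype(float), vmin=-1, vmax=1, cmap="bwr_r")
plt.xticks(range(len(cols)), cols, rotation=90)
plt.yticks(range(len(cols)), cols)
plt.colorbar(label="Pearson's r")
plt.tight_layout()
plt.savefig("correlation_heatmap.png", dpi=200)
```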
Orchid-Hosting Plots Only
Overall, the distribution of the key OMF orders Cantharellales and Sebacinales was not significantly correlated with soil nutrients such as phosphate and nitrate in the orchid-hosting plots (Figure 6). When comparing the entire wild site, orchid-hosting plots, and non-orchid-hosting plots separately, it was observed that the relative abundance of Hypocreales exhibited a positive correlation with Pezizales within the orchid-hosting plots. Agaricales, the most dominant basidiomycetous order, showed a significant negative correlation with Hypocreales (p = 0.0448).
Non-Orchid-Hosting Plots
In the non-orchid-hosting plots, the relative abundance of Cantharellales was positively correlated with nitrate and phosphate, which was also observed for Helotiales (Figure 7). Similarly, the relative abundance of Hypocreales displayed positive correlations with Pleosporales, Helotiales, Sordariales, and Pezizales.
Discussion
The conservation translocation of threatened orchids provides a means to mitigate the risk of extinction within this highly diverse plant family. When working with endangered species, it is common to study small population sizes, as was the case at the final remaining wild site of the yellow early marsh orchid in Britain, which had only 16 wild plants at the time of sampling. In this study, we employed a targeted sampling approach to investigate the abiotic soil conditions and fungal microbiome composition in plots hosting orchids and not hosting orchids. Our goal was to assess whether the site harboured essential groups of orchid mycorrhizal fungi and understand how their diversity and relative abundance varied in relation to orchid populations and soil nutrient concentrations. This information would aid future conservation translocation studies at the same site or new sites.
There were no significant differences in phosphate and nitrate levels between the soils of orchid-hosting plots and non-orchid-hosting plots. This was perhaps to be expected, given that the habitat was a fen with good drainage and dense fen vegetation. While the proportion of total sequence reads does not directly measure the absolute abundance of taxa, it can reasonably be taken as a proxy for their relative abundance. However, total sequence reads can be influenced by factors such as biomass, soil condition, and the type of primer used, among others. Despite some methodological limitations, the relative abundance data discussed here are of sufficient quality to compare plots within the wild site regarding the varying presence of these fungi in the sampled soils.
Only 25% of the entire wild site, which was divided into 20 plots measuring 20 m by 10 m each, hosted orchids at the time of soil collection. As reported before, it is possible that orchids were present in the past or may appear in the future, as there have been fluctuations in population numbers, although these numbers have been declining over recent decades [21,23]. The wild site is nutrient-rich in comparison to other orchid habitats, which are typically nutrient-poor [14,24,25]. Given that genus- or family-level comparisons only allowed limited comparisons between different plots within the wild site, our analyses were conducted at the order level. When we assessed the entire site and the non-orchid-hosting plots separately for the relative abundance of the orchid mycorrhizal fungal order Cantharellales, we found significant positive correlations with nitrate and phosphate levels. However, in orchid-hosting plots, the relative abundance of Cantharellales was negatively correlated with nitrate and phosphate, although not significantly so. These findings contrasted with the results from the whole wild site, which were consistent with previous research [13,24].
Overall, the comparisons of plots regarding community composition and relative abundance yielded interesting results. Plots 6 and 10 exhibited the lowest diversity and a very high relative abundance of two OMF genera (Tulasnella and Mycena). These values indicate that the high abundance of these OMFs suppressed the distribution of other fungi, both mycorrhizal and non-mycorrhizal. Despite being non-orchid plots, these plots were adjacent to plots where orchids were present. The responses of OMFs to ecosystem development remain an emerging area of research [35], and they are not known to dominate ecosystems [36]. It is challenging to infer whether these orchid mycorrhizal fungi have a symbiotic relationship with this orchid based solely on their relative abundance in soil and their suppression of other fungi. Further sampling on a temporal scale is needed to understand the relationship between orchids, fungi, and soil conditions. The sampling of roots from the last remaining population is restricted due to the small number of plants left in the wild. However, identifying seed-germination-compatible fungi from the wild population will help in understanding the role of the dominant Tulasnella and Mycena OTUs. The distribution and relative abundance of OMFs were not strictly correlated with the distance from the host orchid, as reported in other studies [7,37]. Furthermore, in our study, mycorrhizal fungi previously identified in orchid roots were either absent or remained undetected in soil [38].
Sebacinales, which are a crucial group of OMFs, have been found to be widely distributed in most landscapes, although they are affected by changes in land use [26-28]. Our analysis supports these findings by revealing the presence of Sebacinales in the majority of the wild site plots. Previous studies in natural ecosystems, such as temperate grasslands [29], arctic vegetation [30], and forest soil [31], have reported average read numbers of Sebacinales ranging from 1.7% to 11.3% of all fungi. In our study, this range was between 0.1% and 8.2%. However, the relative abundance was lower in the orchid-hosting plots than in the non-orchid-hosting plots. There was no correlation between Sebacinales and the nutrient levels of nitrate and phosphate.
Previous research has observed a relationship between elevated P content and lower mycorrhizal diversity in a European and Madagascan terrestrial orchid species [13,32]. Mujica et al. [18] suggested that differences in mycorrhizal diversity among the wild sites that they studied were driven by differences in soil P and N content. They also stated that higher soil nutrient availability promotes specialisation in orchid-mycorrhizal associations, particularly in soils with high N availability. The genus Dactylorhiza, which is widely distributed in the UK and the rest of Europe, includes taxa such as D. incarnata ssp. ochroleuca, which are threatened to the point of extinction. As this is a fen habitat, the nutrient content can vary significantly and function as a limiting factor for mycorrhizal fungal diversity and relative abundance across seasons. In a study involving several species of Dactylorhiza, Jacquemyn et al. [21] suggested that while orchid mycorrhizal fungi have a broad geographic distribution, their occurrence is influenced by specific habitat conditions. Considering the challenges posed by climate change and the rapid decline in wild habitats, an evidence-based approach to reintroduction and conservation translocation for threatened orchid species is crucial. Detailed soil studies of wild sites serve as a good starting point, as historical land use changes have led to population declines in many terrestrial orchids. Although our current study focused only on a wild site and soil samples collected during summer, conducting a comprehensive study at different times of the year and across a larger area within the wild site will help us to understand the potential to host more plants in the future. A study found that the success of reintroduced and translocated populations of a terrestrial orchid was influenced by the climate and orchid mycorrhizal abundance [39]. The success of the translocation study performed with this orchid species in 2020 [21] was also influenced by extreme weather conditions over two years. Reinforcement at the wild site, given its better drainage compared to the colonisation site, offers potential benefits to augment the existing population. This is promising news for this critically endangered orchid, as the entire wild site holds potential for future reinforcement efforts.
Our study highlights the intricate relationship between soil abiotic conditions and fungal community composition at Britain's last wild site of the yellow early marsh orchid. The entire site exhibits a favourable distribution and relative abundance for some of the key fungal groups known to associate with orchids, particularly the genus Dactylorhiza [16,38,40,41]. A comparative assessment of wild sites hosting orchids with potential receiver sites in terms of nutrient levels and fungal community composition is critical for reintroduction, the reinforcement of populations, and assisted colonisation. Such research will help to identify nearly optimal sites, as previous studies have demonstrated the impact of seasonality on orchid mycorrhizal fungal community compositions [42,43].
Conclusions
We found a significant positive correlation between the relative abundance of Cantharellales and nitrate/phosphate levels in the soil. Sebacinales, a widespread orchid mycorrhizal fungal group, dominated the entire site in terms of distribution. Based on our preliminary soil assessment, it can be inferred that the areas within the remaining wild site where orchids are currently absent are suitable for the reinforcement of the existing population.
The utilisation of DNA metabarcoding as a preliminary study presented here provided valuable insights into the community composition and relative abundance of fungi. Further studies conducted over different seasons are essential to facilitate the successful conservation translocation of this orchid. These studies should compare the wild site with potential receiver sites, considering the nutrient levels and fungal community composition. The sampling of roots to identify seed-germination-compatible mycorrhizal fungus/fungi is currently not feasible due to the small number of plants left in the last remaining wild site studied here. Further study, when sufficient samples of roots are available, will help to develop better systems for the recovery of this orchid.
Supplementary Materials: The following supporting information can be downloaded at: https://www. mdpi.com/article/10.3390/microorganisms11082124/s1, Table S1: DNA yields from 20 soil samples collected from the wild site of Dactylorhiza incarnata ssp. ochroleuca; Table S2: Metabarcoding results of soil samples of Dactylorhiza incarnata ssp. ochroleuca collected from 20 sites showing community composition and relative abundance percentages; Table S3: Soil analysis results showing p values from Pearson Correlation analysis of all plots studied, orchid hosting plots, and non-orchid hosting plots. | 6,756.4 | 2023-08-01T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Techniques of Machine Learning for Detecting Heart Failure
ABSTRACT
INTRODUCTION
Heart failure is a serious health problem that affects millions of people worldwide. It occurs when the heart cannot pump enough blood to meet the body's needs, leading to symptoms such as shortness of breath and fatigue. Current diagnostic methods have limitations, and there is a need for more accurate and reliable techniques for detecting heart failure [1]. Machine Learning (ML) techniques have shown potential in detecting heart failure, but there is a lack of research investigating their effectiveness.
The research topic is the use of Machine Learning techniques for detecting heart failure. The research aims to investigate the effectiveness of these techniques in detecting heart failure and compare their performance with traditional diagnostic methods.
This research is important because it has the potential to improve the accuracy and reliability of heart failure detection. The use of Machine Learning techniques [2] could lead to the development of new and improved diagnostic methods for heart failure, which could ultimately improve patient outcomes and reduce the burden of heart failure on healthcare systems [3]. Figure 1 shows the main causes of heart failure. Heart failure is a major health concern that affects millions of people worldwide. This research aims to investigate the effectiveness of Machine Learning techniques in detecting heart failure and compare their performance with traditional diagnostic methods. The study will explore the potential of Machine Learning techniques to improve the accuracy and reliability of heart failure detection.
The problem addressed in this research is the need for more accurate and reliable techniques for detecting heart failure. Accurate and timely diagnosis is critical for improving patient outcomes, but current diagnostic methods have limitations. This research aims to investigate the potential of Machine Learning techniques to improve the accuracy and reliability of heart failure detection.
LITERATURE REVIEW
Heart failure is a chronic medical condition and a serious global health concern. It is a complicated condition with numerous contributing elements and causes, making a precise diagnosis challenging [4]. Interest in detecting heart failure using machine learning (ML) approaches has grown recently, as ML can help with prompt diagnosis and treatment. This literature review covers the various ML methods used to diagnose heart failure and their efficacy.
The Decision Tree Classifier and the Naive Bayes Classifier were the two algorithms used by the researchers in [5] to predict heart disease in patients. With a 91% accuracy rate, the Decision Tree model was demonstrated to be more accurate than the Naive Bayes classifier. The researchers concluded that the Decision Tree classification algorithm handled medical datasets best and proposed future applications for the technique. The authors of [6] used numerous classifiers and feature selection techniques. The Cleveland heart disease dataset was utilized to test the system using performance metrics. The authors found that the proposed feature selection technique, FCMIM, was effective in boosting classification accuracy and reducing processing time, yielding an accuracy rate of 92.37%. The most significant indicators for the diagnosis of heart disease were found to be chest pain type, the thallium scan result, and exercise-induced angina.
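For illustration, a minimal scikit-learn sketch of the kind of comparison described above is shown below. The CSV path, the presence of a 'target' column, and the train/test split are assumptions, and the resulting accuracies will not match the cited studies.

```python
# Minimal sketch comparing a Decision Tree and a Naive Bayes classifier on
# tabular heart-disease data (e.g., the Cleveland dataset). File path and
# column names are assumptions for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

df = pd.read_csv("heart.csv")            # hypothetical file with a 'target' column
X, y = df.drop(columns=["target"]), df["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

for name, model in [("Decision Tree", DecisionTreeClassifier(max_depth=5, random_state=42)),
                    ("Naive Bayes", GaussianNB())]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```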
The author of [7] discusses the significance of early disease diagnosis, particularly with regard to heart disease. This work investigates machine learning classification methods and image fusion to help physicians in the early detection of heart disease. It introduces well-known classification techniques and offers an overview of operational algorithms. The use of deep learning algorithms for the early diagnosis of heart disease was examined by the author of [8]. The Long Short-Term Memory (LSTM) model, with an accuracy rate of 92.23%, was shown to be the most successful. The study also made clear the necessity of using 80% of the dataset for training to obtain precise outcomes. The modern deep learning algorithms used in the study are one of its strengths, but one of its primary weaknesses is the lack of any investigation into the potential ethical issues raised by the use of machine learning in healthcare. The author of [9] presents a machine learning ensemble strategy that combines several methods in order to develop a more accurate and trustworthy model for assessing the risk of developing heart disease. The ensemble model's accuracy of 90% is greater than that of each classifier alone. To assess patient conditions and minimize human error, doctors can utilize the model. To increase system effectiveness, the author of [10] proposed an approach that combines sensor data and electronic medical records, eliminates pointless and redundant characteristics, and computes a unique feature weight for each class. With a precision of 98.5%, the suggested approach outperforms existing techniques. The use of deep learning and feature fusion algorithms has advantages, but it also has drawbacks, such as the requirement for further testing on larger datasets.
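A minimal sketch of an ensemble strategy of the kind described in [9] is shown below, using a soft-voting combination of three common classifiers. The member models, their hyperparameters, and the data file are assumptions, not the configuration used in the cited work.

```python
# Minimal sketch of a voting ensemble for heart-disease risk classification.
# The chosen base learners and the CSV path are assumptions for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

df = pd.read_csv("heart.csv")                        # hypothetical dataset
X, y = df.drop(columns=["target"]), df["target"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7))),
    ],
    voting="soft",                                   # average predicted probabilities
)
ensemble.fit(X_train, y_train)
print("Ensemble accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```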
The study [11] sought to improve the precision of heart disease prediction through the use of machine learning. By combining 10 features with the Relief feature selection method, the authors were able to attain a high accuracy of 99.05%. The authors plan to further generalize the model and explore deep learning strategies in the future. The study's primary benefits come from the use of a larger dataset and an original methodology, but a disadvantage is the requirement to test the model on datasets with a lot of missing data. The purpose of the study [12] was to predict heart disease by applying machine learning to analyze raw healthcare data. It was found that the proposed hybrid HRFLM technique, which combines the strengths of Random Forest and a Linear Method, is accurate at predicting heart disease. According to the study, future research should examine real-world datasets and develop cutting-edge feature-selection approaches in order to increase the accuracy of heart disease prediction.
The study's [13] goal is to aid medical professionals in foretelling patients' survival from heart failure and comprehending the main risk variables. According to the study, feature-selected, tree-based classifiers using the SMOTE technique had the highest accuracy. The study's potential to enhance the health care system is one of its main strengths, but it has certain drawbacks, including the need for more effective feature selection methods. The authors recommend more research to enhance feature selection methods and merge various machine learning models. Machine learning techniques [14] are used in the method for realtime heart disease prediction mentioned in the paper. Two feature selection approaches are used in this study to choose key features from the dataset and determine the most effective algorithm for heart disease prediction. To manage Twitter data streams, the system will leverage Apache Spark and Kafka. The findings demonstrate that, with an accuracy of 94.9%, the random forest classifier outperforms competing methods. While addressing the need for a highly accurate method to anticipate cardiac sickness, the study underlines the benefits of leveraging social media platforms for data analysis.
The risk of heart failure is predicted in this study [15] using big data and deep learning. To forecast the risk that a patient may get heart failure, the researchers created a model using patient data from electronic health records and deep learning techniques. In comparison to other deep learning models, the model's accuracy in detecting heart failure was found to be increased by the study. The study's advantages include the application of sophisticated machine learning methods and massive datasets. The scant and non-standardized nature of the data in electronic health records is one of the constraints, though. The author [16] asserts that the MIFH is a platform for artificial intelligence that could detect cardiac issues. The purpose of the project was to use MIFH to categorize instances as normal or heart patients by combining data from the Cleveland dataset for UCI heart disease and training machine learning predictive models for classification. The important finding is that, in terms of performance criteria, MIFH produces the top classifier. Datasets with class imbalances and multi-class classification are two limitations that must be taken into account in future study.
Early diagnosis of cardiac disease is desired to reduce unfavorable effects. The study [17] evaluated the effectiveness of the following algorithms: Naive Bayes, Decision Tree, Random Forest, K-Nearest-Neighbor, Support Vector Machine (SVM), and Logistic Regression. The effectiveness of these methods was evaluated using the Precision, Recall, F1 Score, and Area Under Curve metrics. As well as emphasizing the importance of efficient resource management in the healthcare industry, the need for early identification of heart disease is also stressed. The key findings show that employing machine learning approaches can increase the accuracy of heart disease identification, but the shortcomings of the study were not highlighted. In another study, the author [18] examined the efficacy of various machine learning algorithms for diagnosing cardiac disease. K-Nearest Neighbour (K-NN), Random Forest (RF), and Artificial Neural Network (MLP) were found to perform best on the dataset utilized in the study. High accuracy scores for the suggested optimized model were obtained. The investigation was constrained by the author's knowledge base, the resources at hand, and the time allotted. The research could be expanded upon using the most recent technologies and subject-matter expertise.
Using the UCI heart disease dataset, this study [19] tried to increase the prediction accuracy for heart failure using machine learning techniques. The prediction of heart disease in this study was more precise than in prior ones, according to the findings. Heart failure or any other disease can be predicted using real-time patient data when the machine learning model is coupled with medical information systems. The training and testing of the investigation were performed on the Cleveland heart dataset from the UCI machine learning collection. Accuracy was improved using the majority voting, bagging, boosting, and stacking ensemble algorithms. The biggest accuracy improvement was obtained with majority voting [20]. In order to improve the accuracy of the ensemble algorithms, feature selection techniques were employed; majority voting with feature set FS2 produced the most accurate results.
In conclusion, applying ML approaches has yielded encouraging outcomes in the identification of heart failure. Heart failure has been successfully detected using a variety of ML approaches, including supervised learning, unsupervised learning, reinforcement learning, deep learning, and ensemble learning. However, the quality and volume of data used to train the models determine how effective these strategies are. Therefore, it is crucial to gather and evaluate a vast amount of high-quality data in order to guarantee the correctness and effectiveness of these models.
METHODOLOGY
The research method selected for this study is quantitative with an experimental design. The study aims to investigate the effectiveness of machine learning techniques for detecting heart failure and compare their performance with traditional diagnostic methods. Secondary data will be used, which includes publicly available data collected by other researchers and organizations. The experimental design allows the researchers to control for extraneous variables and manipulate the independent variable, which in this case is the use of machine learning techniques for detecting heart failure. By using quantitative methods, the study will be able to provide numerical data that can be analyzed statistically to test the research hypothesis. Overall, the use of quantitative methods with an experimental design and secondary data is appropriate for this research as it allows for the collection of reliable and valid data that can be used to test the effectiveness of machine learning techniques for detecting heart failure. Figure 2 shows the workflow for a machine learning project. The first step is to gather and read the data. Once the data is collected, the next step is to handle any missing values that may exist in the dataset. After cleaning the data, the dataset is analyzed to gain insights into the data and its underlying patterns. To prepare the data for modeling, it needs to undergo preprocessing, such as normalization or feature selection. Next, various classification models, including K-Nearest Neighbors, Random Forest, Decision Trees, Logistic Regression, Support Vector Machines, and Naive Bayes, are trained on the preprocessed data. Their performance is then assessed using metrics including accuracy, precision, recall, and F1 score. Finally, the results are examined and interpreted in order to draw conclusions and gain insights.
Our research is focused on predicting possible heart disease using machine learning. The dataset used for our research was obtained from Kaggle and contained 10 input features and 1 target class. Random sampling is a technique used in statistics and machine learning to select a subset of data points from a larger data set. In simple random sampling, each data point has an equal chance of being selected. In machine learning, simple random sampling is employed when dividing data into training and testing sets. In simple random sampling with replacement, each data point can be selected more than once; this method is useful when the sample size is small relative to the population size. In machine learning, the training set is usually larger than the testing set. A common split ratio is 70% for training and 30% for testing. This division allows more data to be used for training while still leaving enough data to test the model.
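A minimal sketch of this random 70/30 split is given below, assuming a tabular dataset loaded with pandas; the file name and the "target" column name are hypothetical placeholders, since the exact Kaggle file is not specified here.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file and column names; the actual Kaggle dataset may differ.
df = pd.read_csv("heart.csv")

X = df.drop(columns=["target"])   # the 10 input features
y = df["target"]                  # binary heart-disease label

# Simple random sampling into a 70% training / 30% testing split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42
)
print(X_train.shape, X_test.shape)
```

Fixing the random seed keeps the split reproducible across runs, which matters when several classifiers are compared on the same partition.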
In machine learning, it is crucial to measure a model's performance in order to judge how well it predicts on fresh data. Among the metrics that can be used in this circumstance are accuracy, precision, recall, and F1 score. Accuracy, a frequently used metric, counts the proportion of correctly classified data points among all the data points. It is an effective measurement for evenly distributed datasets where both positive and negative instances are significant. Precision is the percentage of correctly predicted positive outcomes among all predicted positive outcomes. It is a helpful metric when the cost of false positives is high, that is, when mistaking a negative example for a positive example would be expensive. Recall counts how many of the real positive examples in the dataset are captured by the true positive predictions. It is a valuable indicator when the cost of false negatives is substantial, that is, when misclassifying a positive example as negative is expensive. The F1 score is the harmonic mean of precision and recall, balancing both metrics. It is helpful when the dataset is unbalanced and both precision and recall matter. In general, these measures are used to assess a model's performance and to compare it against other models. The problem domain and the particular application requirements influence the metric selection.
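The four metrics can be written out directly from a binary confusion matrix; the counts in the sketch below are made up purely for demonstration.

```python
# Illustrative confusion-matrix counts (not results from this study).
tp, fp, fn, tn = 40, 10, 5, 45

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # correct predictions / all predictions
precision = tp / (tp + fp)                    # correct positives / predicted positives
recall    = tp / (tp + fn)                    # correct positives / actual positives
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```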
RESULTS AND ANALYSIS
The data collected for this research was processed and analyzed using a comprehensive approach that involved rigorous statistical analysis. Each step in the process was designed to ensure the accuracy and reliability of the results. Firstly, the data was cleaned and preprocessed. This step involved handling missing values and transforming categorical variables into a format suitable for machine learning algorithms. It also included feature scaling to ensure that all variables contribute equally to the model's performance. Secondly, the processed data was divided into training and testing sets. This split was performed using random sampling to ensure that each data point had an equal chance of being included in the training or testing set. After the data was prepared, it was input into several machine learning models. These models included Logistic Regression, Decision Trees, Random Forest, Support Vector Machines, K-Nearest Neighbors, and Naive Bayes. Each model was trained on the training data and then tested on the testing data. The performance of each model was evaluated using several metrics, including accuracy, precision, recall, and F1 score.
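The train-and-score loop described above can be sketched as follows. This is not the authors' exact pipeline, only an illustration of fitting the six classifiers on the same split (the `X_train`/`X_test` variables come from the earlier split sketch, and hyperparameters are left at scikit-learn defaults).

```python
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
}

for name, model in models.items():
    model.fit(X_train, y_train)          # training split from the earlier sketch
    y_pred = model.predict(X_test)       # evaluate on the held-out 30%
    print(f"{name:20s} "
          f"acc={accuracy_score(y_test, y_pred):.4f} "
          f"prec={precision_score(y_test, y_pred):.4f} "
          f"rec={recall_score(y_test, y_pred):.4f} "
          f"f1={f1_score(y_test, y_pred):.4f}")
```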
Accuracy is a commonly used metric that measures the proportion of correct predictions made by a model. It can be calculated as the number of correct predictions divided by the total number of predictions, i.e. (TP + TN) / (TP + TN + FP + FN). Our experimental results for accuracy are shown in Figure 3, precision in Figure 4, recall in Figure 5, and F1 score in Figure 6, comparing the six classification algorithms. We then assessed each model's results and compared them against one another; the results are listed in Table 1. To further showcase the effectiveness of our approach, we can compare our results with those from existing approaches or techniques reported in other research articles. Assume that we have found three other studies that used similar models for similar problems and that they reported the following results. As can be seen from the comparison in Table 2, our Random Forest (RF) model outperforms the models used in the other studies, indicating that our model and approach provide superior results. This comparison further strengthens our finding of the effectiveness of the RF model for our specific problem statement.
Our primary finding from the research is that the Random Forest (RF) model outperforms the other models, given our particular dataset and problem statement. This result provides a strong answer to our initial research question concerning the most effective model for our specific scenario. The use of multiple models validated our research hypothesis, as it demonstrated the varying performance of different classification techniques on the same dataset.
CONCLUSION
This research endeavor focused on comparing the efficacy of various machine learning algorithms in a specific problem domain. We applied Logistic Regression, Decision Tree, Random Forest, Support Vector Machine, K-Nearest Neighbour (KNN), and Naive Bayes models to the dataset, meticulously analyzing their performance. The evaluation metrics used to gauge the performance of these models included accuracy, precision, recall, and F1 score. A comparative analysis demonstrated that the Random Forest (RF) model outperformed the other models, with an accuracy of 86.23%, precision of 86.19%, recall of 86.23%, and F1 score of 86.19%. The study also highlighted the relevance of the different metrics in evaluating a model's performance, thereby providing an in-depth understanding of the benefits and limitations of each model. The findings from this research affirm the research hypothesis, which posited that different machine learning algorithms would yield varied results, with one standing out as the most efficient for this specific problem set. Future research would be worthwhile to investigate the performance of other machine learning algorithms and deep learning models not covered in this study. This would provide a more exhaustive understanding of the optimal solution for the problem domain. | 4,191.6 | 2023-06-22T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Expression of the human telomerase reverse transcriptase gene is modulated by quadruplex formation in its first exon due to DNA methylation
DNA secondary structures and methylation are two well-known mechanisms that regulate gene expression. The catalytic subunit of telomerase, human telomerase reverse transcriptase (hTERT), is overexpressed in ∼90% of human cancers to maintain telomere length for cell immortalization. Binding of CCCTC-binding factor (CTCF) to the first exon of the hTERT gene can down-regulate its expression. However, DNA methylation in the first exon can prevent CTCF binding in most cancers, but the molecular mechanism is unknown. The NMR analysis showed that a stretch of guanine-rich sequence in the first exon of hTERT and located within the CTCF-binding region can form two secondary structures, a hairpin and a quadruplex. A key finding was that the methylation of cytosine at the specific CpG dinucleotides will participate in quartet formation, causing the shift of the equilibrium from the hairpin structure to the quadruplex structure. Of further importance was the finding that the quadruplex formation disrupts CTCF protein binding, which results in an increase in hTERT gene expression. Our results not only identify quadruplex formation in the first exon promoted by CpG dinucleotide methylation as a regulator of hTERT expression but also provide a possible mechanistic insight into the regulation of gene expression via secondary DNA structures.
that in the core promoter of hTERT, located between -22 and -90, contains an end-to-end stacked pair of G-quadruplexes. Recently, they (30) showed that hTERT expression was regulated by G4 folding of the long G-tract within the hTERT promoter modulated by sequence mutation. They further found that aberrant G4 formation in the core promoter region disrupted repressor binding and drove hTERT transcription activation.
Meanwhile, it has been shown that binding of CCCTC-binding factor (CTCF) to the first exon of hTERT gene could suppress its transcription in telomerase-negative cells. Presently, there is no mutation or single nucleotide polymorphism identified within the CTCF-binding site (+4 to +39 from the ATG start codon (31)) in the first exon of hTERT. Of interest is that methylation of the first exon of hTERT prevents CTCF binding and allows hTERT gene expression in telomerase-positive cells (32). However, the underlying mechanism of the hTERT methylation in regulating its gene expression remains unclear. In this study, we found that two secondary structures, hairpin and quadruplex, could form within the CTCF-binding region of the first exon of hTERT. Further studies demonstrated that methylation of cytosine in a specific CpG in the first exon of the hTERT gene promotes quadruplex formation, which prevents CTCF binding and results in hTERT gene expression.
Cytosine methylation and quadruplex formation of hT25
Within the CTCF-binding region of the first exon of the hTERT gene (32), we found a G-rich sequence, GGGAGCGCACGGCTCGGCAGCGGGG (Fig. 1a, named hT25) localized at +13 to +37 (at antisense strand), containing four CpG dinucleotides. To assess whether this sequence is involved in DNA secondary structure formation, we conducted the 1D imino proton NMR experiment to characterize the possible hydrogen-bonding formation of DNA secondary structure. The results showed some imino proton signals around 13 ppm in the absence of K+, implying the formation of Watson-Crick hydrogen bonding for hairpin structure (33), whereas the spectrum showed several imino proton signals at 10-12 ppm after addition of K+, suggesting the formation of Hoogsteen hydrogen bonding for the quadruplex structure (Fig. 1b). Of particular interest is that the imino proton signals at 10-12 ppm were more pronounced after CpG methylation (Fig. 1c), i.e. the population of quadruplex structures changes from less than 30% to over 60% (Table 1). It appears that methylation of CpG dinucleotides of this hTERT DNA sequence could enhance the formation of quadruplex structures.
NMR study of hT25 via mutation and site-specific methylation
To examine the formation of this wild-type sequence into a G4 structure, a number of mutants were designed for structural characterization (supplemental Table S1). Fig. 2a showed the imino proton NMR spectra of four hT25 mutants in 150 mM K+ solution. These mutations were designed not only to disrupt either hairpin or quadruplex formation but also to identify the bases involved in the quadruplex formation. The hT25-m1 was designed to disrupt the quadruplex formation by the change of the G2 and G24 in the first and last G-tracts of the hT25 sequence to T2 and T24. The imino proton NMR results of hT25-m1 showed distinct signals of Watson-Crick base pairing and no appreciable signals of quadruplex structure, implying that the first and last G-tracts of the hT25 sequence were involved in the quartet formation. The hT25-m2 was designed to disrupt the quadruplex formation by the change of the G12 and G17 in the second and third G-tracts to T12 and T17. Surprisingly, the imino proton NMR spectrum of hT25-m2 showed distinct signals of quadruplex structure at the 10-12 ppm region, implying that the middle two G-tracts, G11G12 and G16G17, were not involved in the quartet formation of quadruplex structure. Moreover, the imino proton signal pattern of hT25-m2 at the 10-12 ppm region was very similar to that observed for the hT25 and the methylated hT25 sequences.
Figure 1. a, identification of a potential G4-forming sequence, hT25, d(G3AGCGCACG2CTCG2CAGCG4), at the antisense strand (from +37 to +13) in the first exon of the hTERT gene, which is located within the CTCF-binding region. b and c, the imino proton NMR spectra of hT25 (b) and hT25-Me (c) in Tris-HCl buffer without (bottom panel) and with (top panel) 150 mM KCl. The methylated sequence, hT25-Me, was synthetically modified cytosine to 5-methylcytosine at the four CpG dinucleotides of hT25.
We further investigated the effect of a single G-base mutation on the quadruplex formation. The hT25-m3 was designed by the change of G11 to T11, which may enhance the Watson-Crick base pairs in hairpin structure. Of interest is that the imino proton NMR spectrum of hT25-m3 showed distinct signals of Watson-Crick base pairing at 12.5-13 ppm and no appreciable signals of the quadruplex structure at the 10-12 ppm region. Given that the two middle G-tracts were not involved in quadruplex formation and the CpG methylation could enhance quadruplex formation, the design of hT25-m4 was to disrupt quadruplex formation by replacing G20 with T20. In contrast to hT25, the imino proton NMR spectrum of the hT25-m4 showed a different pattern not only at 13.0 ppm for hairpin formation but also at 10-12 ppm for quadruplex formation. It is noteworthy that G-rich sequences with only a single base difference could form not only various quadruplex structures but also different secondary structures. Such different conformational changes caused by a single base mutation deserve further study.
Considering that there are four CpG dinucleotides in hT25, it is important to determine which methylated CpG dinucleotide plays a critical role for shifting the equilibrium to quadruplex formation. A number of methylated hT25 sequences based on simple step by step screening are listed in supplemental Table S1. NMR spectra of hT25-Me and hT25-Me(1,4) showed almost identical imino proton signals in the region of 9.5-12.0 ppm (Fig. 2b). Their quadruplex formation is >60% of the population (Table 1). Further study of the methylation at the first (hT25-Me(1)) and fourth (hT25-Me(4)) CpG dinucleotides showed that the population of quadruplex in hT25-Me(1) (<30%) is lower than in hT25-Me(4) (>70%), suggesting that methylation of the fourth CpG dinucleotide (C21G22) plays the key role to enhance quadruplex formation.
Topology study of hT25-m2 by site-specific 15N-labeled 1H NMR spectra
To examine the transition from hairpin to quadruplex topologies, we first used site-specific 15N isotope labeling to examine which G-bases are involved in hairpin and quadruplex formation of hT25. Six hT25 samples were synthesized, each of which was site-specifically labeled with 15N-enriched guanine. Two of the 15N-edited imino proton resonances of the guanines of G5 and G20 showed the contribution to the signal at around 13 ppm (Fig. 3a). Further 2D-NOESY spectra confirmed these two Watson-Crick signals (Fig. 3b). Considering all possible combinations of Watson-Crick hydrogen bonds involving G5 and G20, Fig. 3c shows the proposed hairpin structure of hT25 in Tris buffer. After addition of 150 mM K+ overnight, we were unable to detect the 15N-edited imino proton resonances at the 10-12 ppm region for the quadruplex formation because the signal was too weak (supplemental Fig. S1a). Of importance is the same detection of the 15N-edited imino proton resonances of G5 and G20 at around 13 ppm before and after addition of 150 mM K+, suggesting that the transition from hairpin to quadruplex topologies does not necessarily involve the opening of the hairpin formation.
Figure 2. NMR study of hT25 DNA sequence by mutation and methylation. a, imino proton NMR spectra of four mutants, hT25-m1, hT25-m2, hT25-m3, and hT25-m4, in the presence of 150 mM K+, as shown in descending order. The quadruplex characters were detected in the spectra of hT25-m2 and hT25-m4. b, imino proton NMR spectra of hT25-Me, hT25-Me(2,3), hT25-Me(1,4), hT25-Me(1), and hT25-Me(4) for hT25 and hT25-m1-Me(4) for hT25-m1 in the presence of 150 mM K+, as shown in descending order. Their sequences are listed in supplemental Table S1.
Table 1
The population of hairpin and quadruplex structures of hT25 and its mutants. The listed populations of hairpin (H) and quadruplex (Q) structures were estimated by integrating the peak volumes of the hairpin and quadruplex signals. NA means not available.
At present, we are unable to prepare the sample of site-specific 15N isotope labeling for methylated hT25. Therefore, we used hT25-m2 to study the quadruplex formation because similar imino proton NMR signals at 10-12 ppm were detected more prominently in hT25-m2 than in hT25. Using site-specific 15N isotope labeling, we found that three guanine residues, G5, G16, and G20, are involved in the initial hairpin formation of hT25-m2 (supplemental Fig. S1b), particularly G5 and G20, also detected in hT25. After addition of 150 mM K+ overnight, the 15N-edited imino proton spectra showed six signals of five G-bases of G1, G2, G3, G23, G24, and an unlabeled G base at the 10-12 ppm region for the quadruplex formation together with three signals of G5, G16, and G20 at the 12.5-13 ppm region for the Watson-Crick base pairing (Fig. 3d). Further studies suggested that the unlabeled peak was G25 (data not shown). Again, the finding of the same three Watson-Crick signals in the hairpin and quadruplex structures implied that the K+-induced topological transition of hT25-m2 does not necessarily involve the opening of hairpin formation.
Considering that several G-rich sequences form the quadruplex structure containing a G·C·G·C quartet characterized by two imino proton signals at the 12.5-13 ppm region (20, 34-36), we anticipated that the quadruplex formation of hT25-m2 could also involve cytosine without using the two middle G-tracts. Further study of mutants (supplemental Table S1) was conducted to examine which cytosine residue is possibly involved in the quartet formation of hT25-m2. The imino proton NMR spectra showed different patterns of hT25-m5 and hT25-m6 from hT25-m2 and hT25-m7, suggesting that C6 and C21 are involved in the quartet formation (supplemental Fig. S2). In addition, several H1-H1 NOEs were analyzed to predict the possible topology between neighboring guanines, i.e. G5/G20, G1/G24, and G3/G5 (Fig. 3e). Although the precise quadruplex structure of hT25-m2 has not been determined, it is very likely that the quadruplex formation of hT25-m2 involves two G·G·G·C quartets (Fig. 3f). To our knowledge, such a type of quadruplex structure containing two G·G·G·C quartets has not been reported yet.
Investigation of melting temperature and folding kinetics of hT25-Me(4)
CD spectroscopy has been extensively used to characterize different types of G4 structures. In general, parallel G4 structures show a positive band at 265 nm and a negative band at 240 nm, whereas antiparallel G4 structures show two positive bands at 295 and 240 nm together with a negative band at 265 nm. Here, the CD spectra of hT25 and hT25-Me(4) showed no appreciable difference, even in the absence and presence of 150 mM K+ (supplemental Fig. S3). Recently, Kocman and Plavec (37) reported that a d(GGGAGCGAGGGAGCG) sequence, VK1, could form an unusual dimeric tetraplex structure stabilized by four G-C base pairs in Watson-Crick geometry. Noteworthy, their CD spectra also ruled out the formation of parallel and non-parallel G4 structures. Moreover, the CD spectra of VK1 are similar to the CD spectra of hT25. Of interest is the similarity found in the folding motif of hT25 (GGGAGCGCACGGCTCGGCAGCGGGG). The two G-C base pairs formed in the hT25 quadruplex structure are similar to the finding in the VK1 dimeric tetraplex structure, which may make a major contribution to the CD spectra. However, NMR spectra showed that the cation-dependent quadruplex formation of hT25-m2 is very different from the cation-independent quadruplex formation of VK1 in the presence of 150 mM Li+ and 150 mM K+ (Fig. 4, a and b).
Figure 3. Characterization of secondary structure by NMR with site-specific labeling. a, site-specific assignments of imino proton resonances of hT25 hairpin structure. The 1D 15N-edited HMQC spectra of 8% 15N-enriched oligonucleotide samples were shown with assignments to the labeling sites. b, 2D NMR spectra of hT25 hairpin form. The recorded NOESY spectrum with mixing time of 300 ms showed two major GC correlations. c, proposed scheme of hairpin orientation of the hT25 with two G-C hydrogen-bonding pairs. d, site-specific assignments of imino proton resonances of hT25-m2 quadruplex structure. The 1D 15N-edited HMQC spectra of 8% 15N-enriched oligonucleotide samples are shown with assignments to the labeling sites. e, 2D NMR spectra of hT25-m2 quadruplex form. The recorded NOESY spectrum with mixing time of 250 ms showed the H1-H1 correlations. f, proposed scheme of quartet orientation of the hT25-m2 with two quartets of G-G-G-C configurations.
In addition, we measured the CD melting curves of hT25 and hT25-Me(4) at 290 nm in 150 mM K+ to examine the effect of methylation on quadruplex stability. The melting temperature (Tm) is 56.2°C for hT25 and 59.8°C for hT25-Me(4) (Fig. 4c), indicating that there is a ~3.6°C increase in ΔTm upon methylation. However, the Tm is 58°C for both hT25-m1 and hT25-m1-Me(4) (Fig. 4d), which formed the hairpin structure, suggesting that methylation favors stabilizing the quadruplex formation. The slight increase of ΔTm could shift the equilibrium from the hairpin state to the quadruplex state. Previous studies have shown that the CpG methylation could stabilize G4 structure by favorable stacking interaction between the methyl group of 5-methylcytosine and guanine residues (18-20).
We then used real-time imino proton NMR spectra to study the folding kinetics of hT25-m2, hT25-m4, and hT25-Me(4) quadruplexes after addition of 150 mM K+ (Fig. 4, e-g). The results showed that the transition time from hairpin to quadruplex structures of hT25-m2 is ~25 min at 25°C. Surprisingly, the folding transition is much faster for hT25-m4 than for hT25-m2, indicating that different transition pathways are involved. Of importance is that the same time scale for the folding transition of hT25-m2 is also measured for hT25-Me(4), indicating that similar transition pathways are involved.
Quadruplex formation mediated by DNA methylation modulates hTERT gene expression via CTCF binding
It has been shown that CpGs around the CTCF-binding site in the first exon of hTERT are highly methylated in the majority of tumor tissues and cancer cell lines (32). Here, we verified whether methylation of CpG dinucleotides could perturb CTCF binding and further regulate hTERT gene expression in telomerase-positive human melanoma A375 cells. The methylated reporter constructs (WT-Me) were generated by methylation from -36 to +110 by incubating wild-type plasmids (WT) with M.SssI CpG methyltransferase, and the methylation was confirmed by using bisulfite sequencing (Fig. 5a). The reporter assay showed that the expression level of WT-Me was markedly higher than WT (Fig. 5b). Consistent with the finding of the reporter assay, the chromatin immunoprecipitation (ChIP) results showed a significant reduction of CTCF binding after methylation (WT-Me) (Fig. 5c). Although methylation of CpG dinucleotides indeed inhibits CTCF binding and results in gene expression, it is not clear whether the quadruplex structure promoted by methylation has a major effect on CTCF binding and gene regulation.
We therefore constructed different reporter plasmids to examine whether quadruplex formation is important in regulating CTCF binding. These reporter constructs were generated by employing the wild-type and mutated hTERT promoter region based on the previous NMR results (supplemental Table S1 and Fig. 6a). The luciferase expression level in A375 cells transfected with m2 and m4 plasmids with the preference to form quadruplex structure was much higher than wild type and transfected with m1 and m3 plasmids with the preference to form hairpin structure (Fig. 6b). In addition, ChIP analysis showed that the level of CTCF binding is much lower in cells transfected with m2 and m4 plasmids than WT and m1 plasmid (Fig. 6c). These results indicated that the hairpin and quadruplex structures play an important role for both CTCF binding and gene expression on hTERT.
Electrophoretic mobility shift assay (EMSA) using recombinant full-length CTCF protein was conducted to examine the binding preference of CTCF protein, which is the secondary structure (hairpin or quadruplex) of hT25. The gel results showed that a large amount of the bound form of CTCF binds to hT25 and hT25-m1, suggesting that the CTCF protein favors the binding of hairpin structure (Fig. 6d). In contrast, the bound form of CTCF binding to hT25-m2 was hardly detected, implying that quadruplex formation prevents CTCF binding.
EMSA was also performed to verify how the methylated CpG dinucleotides of the hT25 structures perturb CTCF binding. The binding complex of hT25-Me(1) with CTCF was clearly detected, whereas the bound form of the CTCF binding to hT25-Me(4) was hardly observed (Fig. 6d). This difference is because the hairpin structure of hT25-Me(1) favors CTCF binding, and the quadruplex structure of hT25-Me(4) impedes CTCF binding. To test whether the inhibition of CTCF binding is simply due to methylation of CpG dinucleotides, the methylation of hT25-m1 (hT25-m1-Me(4)) was examined because of its hairpin structure. The detection of the bound form of the CTCF binding to hT25-m1-Me(4) (Fig. 6d) suggested that the DNA secondary structure may be more critical for CTCF binding.
We further conducted EMSA to examine the cation effect on the CTCF binding (Fig. 6e). The EMSA results of hT25 and hT25-m1 showed no appreciable difference in the presence of K+ and Li+. On the contrary, the binding of CTCF to hT25-Me(4) and hT25-m2 was markedly increased in the presence of Li+ compared with K+, suggesting that the K+-induced quadruplex formation could prevent the binding of CTCF protein as well (Fig. 6e). In addition, ChIP analysis showed that methylated m1 plasmids (m1-Me) at the first exon present no appreciable effect on CTCF binding (supplemental Fig. S4), suggesting that CTCF protein favors the binding of hairpin structure.
CTCF prefers binding to hairpin structure
Next, we examined whether CTCF has a better binding preference to the single strand than to the hairpin DNA. Two mutants (hT25-ss1 and hT25-ss3) showed no distinct imino proton NMR signal in the region of 9.5-13.5 ppm, whereas the hT25-ss2 mutant showed weak signals in the region of 12.5-13.0 ppm (Fig. 7a). The sequences of these mutants are listed in supplemental Table S1. Although EMSA results show that CTCF can bind these single-strand sequences, the competition between these single-strand sequences and the hT25 indicated that CTCF has a higher binding preference to the hairpin structure of hT25 (Fig. 7b). Consistently, the reporter assay showed that the expression level of ss1 is higher than m1, similar to WT, and lower than m2 (Fig. 7c). These findings suggested that the hairpin structure is the major target in the first exon of hTERT gene for CTCF binding.
Because previous reports suggested that CTCF binds to double strands of promoter DNA (31,38), EMSA was conducted to verify the binding of CTCF to the sense and antisense strands of the first exon of hTERT gene in vitro (supplemental Fig. S5). The gel results showed a large contrast between the detection of an appreciable amount of CTCF binding to the methylated sense strand (chT25-Me) and the absence of CTCF binding to the methylated wild-type antisense strand (hT25-Me), indicating that CTCF protein is capable of binding to the methylated sense strand. This finding also suggested that methylation alone is not sufficient to inhibit CTCF binding. We anticipated that quadruplex formation enhanced by methylation plays a major role in the inhibition of CTCF binding. In addition, our results suggested that the methylated antisense strand is mainly responsible for the inhibition of CTCF binding to the first exon of hTERT gene and the regulation of hTERT gene expression.
Figure 4 (caption, continued): (e-g, left) together with the rise of quadruplex signals fitted with a single exponential (e-g, right) of hT25-m2 (e), hT25-m4 (f), and hT25-Me(4) (g). Since the average time for collecting NMR spectra is ~8 min, the very fast rise of quadruplex signals of hT25-m4 cannot be fitted.
Figure 7. Imino proton NMR spectra and the EMSA of single-strand mutants of hT25. a, imino proton NMR spectra of hT25-ss1, hT25-ss2, and hT25-ss3 in 150 mM K+ solution. b, EMSA experiments were performed to verify the binding of CTCF to biotinylated hT25 and single-strand mutants hT25-ss1, hT25-ss2, and hT25-ss3 incubated with CTCF in vitro. The 1st lane showed the interaction of hT25 and CTCF, which was used as a positive control. For the competition study, the interactions of hT25-ss1, hT25-ss2, and hT25-ss3 with CTCF were incubated with or without a 25-fold molar excess of unlabeled hT25. c, reporter assays were performed in A375 cells transfected with WT, m1, m2, and ss1 mutant reporters. Representative results were obtained from three independent experiments. The results represent the mean ± S.D. for each group. ***, p < 0.001 compared with WT.
Figure 6. Effect of DNA secondary structures in the CTCF-binding site on hTERT gene expression. a, schematic diagrams of the CpG-free reporters containing WT promoter and mutants. b, transcriptional activities of reporters containing the WT or mutated hTERT promoters with hairpin structure (m1 and m3) or quadruplex structure (m2 and m4) were analyzed in A375 cells. The luciferase activities of reporters with mutated promoters were compared with that of WT. c, binding of CTCF to the reporters with WT or mutated promoters was further analyzed by ChIP assay using an antibody against CTCF. The results represent the mean ± S.D. from three independent experiments. **, p < 0.01; ***, p < 0.001 compared with WT. d, EMSA experiments were performed to verify the binding of CTCF to biotinylated hT25, hT25-m1, hT25-m2, hT25-Me(1), hT25-Me(4), and hT25-m1-Me incubated with CTCF in vitro. Representative results from three independent experiments are shown. e, EMSA experiments were performed in the presence of KCl or LiCl for biotinylated hT25, hT25-Me(4), hT25-m1, and hT25-m2 incubated with CTCF in vitro.
DNA methylation at the 5-position of cytosine is associated with multiple cellular processes that regulate gene expression in the mammalian genome. In addition to the methylation of CpG dinucleotides, demethylation enzyme, such as ten-eleven translocation (TET) 5mC-hydroxylases, could convert the 5-position of cytosine to 5-hydroxymethylcytosine (5hmC), which offers a means of dynamic regulation of DNA methylation (39,40). It has been shown that 5hmC plays an important role for the regulation of genes involved in embryonic development, cellular differentiation, and stem cell programming (41,42). By incorporating 5hmC into the hT25, we further examined whether the hydroxymethylation could influence the CTCF binding onto the hT25. The imino proton NMR spectrum of hT25-hydroxymethyl (hT25-hMe(4)) with modified 5hmC at the fourth CpG dinucleotide suggested that hT25-hMe(4) can form a hairpin structure (supplemental Fig. S6). The EMSA results showed the binding of CTCF to the hT25-hMe(4) but not hT25-Me (supplemental Fig. S6b) and hT25-Me(4) (Fig. 5d). These findings indicated that hydroxymethylation at hT25 does not influence the formation of hairpin structure and the following CTCF binding.
Discussion
Here, we demonstrated that G-rich sequences with only a single base difference could shift base pairs for different hairpin formation, which might lead to different secondary structures. In contrast to the transition from hairpin to quadruplex structures of hT25 after addition of K+, no appreciable imino proton NMR signal of the quadruplex was detected for the single G-base mutation replacing G11 with T11 in hT25-m3, indicating an unfavorable transition from hairpin to quadruplex structures. It is likely due to the more stable hairpin structure of hT25-m3 because of the formation of four consecutive Watson-Crick base pairs between C10T11G12C13 and G17C18A19G20. In contrast, the single G-base mutation replacing G20 with T20 in hT25-m4 disrupted the original G20-C6 base pairs of the hT25 hairpin structure. Of interest is that an imino proton NMR signal located around 13 ppm shows no appreciable change, and the folding transition to quadruplex structure is fast after addition of 150 mM K+. In comparison, Gray and Chaires (43) used a stopped-flow method to obtain a single folding time of 20-60 ms for K+-induced quadruplex formation of single-stranded human telomeres. In addition, several mutants of WT22 (GGGCCACCGGGCAGGGGGCGGG) in the WNT1 gene promoter without formation of hairpin structures showed fast quadruplex formation within 2 min at 37°C (33). The fast folding kinetics of hT25-m4 is similar to the transition from a single-stranded DNA. Nevertheless, verification of hT25-m4 structures with and without K+ for elucidating the transition pathway deserves more study.
Considering that two G-C base pairs of G5-C21 and G20-C6 are involved in both the hairpin formation and quadruplex formation, a possible scenario is proposed for the K+-induced quadruplex formation of hT25-m2 that results from a simple flip back of a hairpin form to fold into a quadruplex form with two G·G·G·C quartets. The proposed model is supported by the transition kinetics. The transition time from hairpin to quadruplex structures of hT25-m2 via a simple flip (~25 min) is much faster than the transition time from hairpin to quadruplex structures of WT22 via unfolding-refolding (~550 min) at 25°C (33). To our knowledge, such a simple transition from hairpin to quadruplex structures for a genomic G-rich sequence has not been previously documented.
DNA methylation plays an important role in regulating hTERT expression. Benhattar et al. (32) found that partial hypomethylation in the core promoter region together with the hypermethylation in the CTCF-binding site of the first exon can lead to hTERT expression in telomerase-positive tumor cells. This finding was further supported by those studies in telomerase-positive cancer cells treated with 5-aza-2′-deoxycytidine or trichostatin A, in which DNMT1 down-regulation correlates with the CpG islands demethylation, CTCF binding, and the repression of hTERT transcription (32,44). By using the dimethyl sulfate-methylation interference assay, Renaud et al. (31) demonstrated that G20, G23G24, and G25 were recognized by CTCF binding to the first exon of hTERT. Here, we showed that two secondary structures, hairpin and quadruplex, could form within the CTCF-binding region of the hTERT first exon. The CTCF-contacting guanines are involved in the hairpin formation. Particularly, CpG methylation in the CTCF-binding site promotes quadruplex formation, which prevents CTCF binding and leads to hTERT expression. In addition, a transition pathway from hairpin to quadruplex topologies of hT25-m2 was proposed. Similar imino proton NMR spectra of hT25-Me suggested that hT25-Me could also adopt a similar quadruplex structure. It is feasible that such a simple transition from hairpin to quadruplex topologies for a genomic G-rich sequence could perturb DNA-protein interaction.
G-quadruplex structures found in promoter regions are generally considered as negative regulators of gene expression (8). Zakian and co-workers (9) reported the potential function of quadruplex in gene regulation via stimulating activator binding or inhibiting repressor binding. The recent findings of Hurley and co-workers (30) showed that an aberrant G4 formation of a long G-tract mutation within hTERT promoter region could disrupt repressor binding and result in overexpression of hTERT. Here, our findings demonstrated that quadruplex formation enhanced by CpG dinucleotide methylation in the first exon of hTERT could impede CTCF binding and lead to hTERT expression. Therefore, in addition to transcriptional inhibition, quadruplex formation could also act as a stimulatory factor in modulating gene expression (9,45).
In summary, we demonstrated that methylated cytosine may directly participate in "quartet" formation. Here, methylation of a single cytosine at the specific CpG dinucleotide of the hTERT gene is capable of shifting the equilibrium from hairpin structure to quadruplex structure via a simple flipping process (Scheme 1). Our results showed that DNA methylation alone is not sufficient to inhibit CTCF binding to the first exon of hTERT, suggesting that quadruplex formation promoted by CpG methylation plays a major role in preventing CTCF binding and further regulating gene expression. These findings provided mechanistic insight to explain how the hypermethylated hTERT promoter can lead to its expression in most telomerase-positive tumors.
DNA preparation
All unlabeled oligonucleotides were purchased from Bio Basic (Ontario, Canada). The DNA concentrations were determined by the absorption at 260-nm peaks using a UV-visible absorption spectrometer. The oligonucleotides were dissolved in 10 mM Tris-HCl (pH 7.5) without and with 150 mM KCl, followed by heat denaturation at 95°C for 5 min and slowly annealed to room temperature.
NMR spectroscopy
All NMR experiments were performed on a Bruker AVIII 500 MHz NMR spectrometer equipped with a Prodigy probe head and on a Bruker AVIII 800 MHz NMR spectrometer equipped with a cryoprobe. The 1D imino proton NMR spectra were recorded using a WATERGATE (46) or a jump-return pulse sequence (47) for water suppression. The 1D 15N-1H SOFAST-HMQC spectra were used for unambiguous assignments of individual imino proton resonances using a series of site-specifically 15N-labeled NMR samples, where 8% of 15N-labeled guanine was introduced into one of the 11 G-quartet-forming guanine residues as described previously (48,49). The strand concentrations of the NMR samples were typically 100-200 μM with specific salt conditions and an internal reference of 0.1 mM 4,4-dimethyl-4-silapentane-1-sulfonic acid.
Cell culture
Human melanoma A375 cells (American Type Culture Collection, Manassas, VA), telomerase-positive cells, were grown in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum. Cells were cultured at 37°C in an incubator supplemented with 5% CO2. The A375 cell line was authenticated by using the Promega GenePrint 10 system (Promega, Madison, WI) and analyzed by an ABI PRISM 3730 Genetic Analyzer and GeneMapper software version 3.7 (Applied Biosystems, Carlsbad, CA). The DNA-methylated pattern of the promoter and the first exon region of the hTERT gene in these cells is similar to the pattern in telomerase-positive cells (32).
Promoter reporter assay
To verify the transcriptional activity, the hTERT sequence, including -1024 to +119 (translation start site as +1), was cloned into the pCpGfree-basic-Lucia vector (Invivogen, San Diego). To verify the effect of the DNA methylation in the first exon on the hTERT gene expression, methylated and mutant reporters were also generated and transfected into telomerase-positive A375 cells. Luciferase activity of these constructs was determined by using the QUANTI-Luc luciferase system (Invivogen). Details are provided in the supplemental Materials and methods.
Bisulfite genomic sequencing
To determine the methylation status of the reporter plasmid, genomic DNAs of A375 cells transfected with reporter plasmids were extracted. The bisulfite modification method and following sequences were employed for determining the methylation status of cytosine residues in DNA (50). Details are provided in supplemental Materials and methods. Scheme 1. Proposed mechanism of CTCF binding to the first exon of hTERT gene for transcriptional regulation. CTCF favors binding hairpin structure, whereas quadruplex formation enhanced by CpG methylation impedes CTCF binding and further leads to gene expression.
Chromatin immunoprecipitation
Chromatin immunoprecipitation (ChIP) assays were performed as described previously with minor modifications (31). Briefly, A375 cells transfected with reporter plasmids were treated with 1% formaldehyde in phosphate-buffered saline. The cross-linked nuclei were sonicated to yield DNA fragments in the range of 200-1000 bp. ChIP grade polyclonal antibody against CTCF (Product code: ab70303, Abcam, Cambridge, MA) was incubated overnight with the nuclear lysates. Immune complexes were then collected with protein A magnetic beads (Millipore, Temecula, CA). Using the forward primer (5′-TGCGCACGTGGGAAGCCCTG-3′, complementary to nucleotides -38 to -1 of the hTERT gene) and the reverse primer (5′-TGAGGGCAAACAGCACCTTGATTTCC-3′, complementary to nucleotides of the pCpGfree-basic-Lucia vector backbone), semi-quantitative real-time PCR was performed to analyze the CTCF-binding efficiency.
Electrophoretic mobility shift assay
EMSA was performed as described previously with minor modifications (51,52). 5′-Biotin-labeled oligonucleotides (supplemental Table S1) were incubated with 0.2 μg of CTCF full-length recombinant protein (Abnova, Jhongli City, Taiwan) in binding buffer containing 50 mM KCl, 10 mM Tris, 5 mM MgCl2, 0.1 mM ZnSO4, 2 mM DTT, 0.05% Nonidet P-40, 2.5% glycerol, and 50 ng/μl of double-strand competitor DNA poly(dI-dC) (Thermo Fisher Scientific, San Jose, CA). After incubation for 20 min at room temperature, samples were analyzed on a 5% non-denaturing polyacrylamide gel in 0.5× TBE buffer at 100 V for 1 h. Gels were transferred to a positively charged nylon membrane in 0.5× TBE at 380 mA for 40 min. Nylon membranes were immunoblotted and followed by a CCD camera for detecting biotin signals (Thermo Fisher Scientific). | 7,897.4 | 2017-10-30T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Reinforcement learning-based particle swarm optimization for sewage treatment control
To solve the problem of high-energy consumption in activated sludge wastewater treatment, a reinforcement learning-based particle swarm optimization (RLPSO) was proposed to optimize the control setting in the sewage process. This algorithm tries to take advantage of the valid history information to guide the behavior of particles through a reinforcement learning strategy. First, an elite network is constructed by selecting elite particles and recording their successful search behavior. Then the network is trained and evaluated to effectively predict the particle velocity. In the periodic wastewater treatment process, the RLPSO runs repeatedly according to the optimized cycle. Finally, RLPSO was tested based on Benchmark Simulation Model 1 (BSM1) of sewage treatment, and the simulation results showed that it could effectively reduce the energy consumption on the premise of ensuring qualified water quality. Furthermore, the performance of RLPSO was analyzed using the benchmarks with higher dimension, which verifies the effectiveness of the algorithm and provides the possibility for RLPSO to be applied to a wider range of problems.
Introduction
The activated sludge method is a biological sewage treatment method commonly used in the wastewater treatment processes (WWTP) [1,2]. Through biochemical reaction, the pollutants in the sewage are adsorbed, decomposed and oxidized, so the pollutants are degraded and separated from the sewage to achieve the purification of the sewage [3][4][5][6]. To ensure that the effluent water quality reaches the standard, it is necessary to fill the aeration tank with appropriate oxygen through the blower to maintain the concentration of dissolved oxygen (S O ) in the aerobic area, and use the reflux pump to maintain the concentration of nitrate nitrogen (S NO ) in the anoxic zone [7]. However, the operation of the blower and reflux pump consumes a large amount of energy, which inevitably increases the operation cost. At the same time, from the perspective of the biochemical reaction mechanism, suitable S O and S NO help ensure the successful progress of the nitrification and denitrification reactions [8,9]. Therefore, it is necessary to dynamically optimize S O and S NO and construct the control strategy aiming at reducing the energy consumption (EC) in the sewage treatment process on the premise of ensuring qualified effluent quality (EQ).
With the characteristics of nonlinearity, time variation and strong coupling, the control issues in the WWTP have been extensively investigated. The main challenge of WWTP is to construct an optimal control strategy with the aim of reducing EC while ensuring qualified EQ. For example, Vrečko presented a PI-based control strategy including feedforward control and a step-feed procedure, which was applied to WWTP [10]. Furthermore, Vrečko et al. presented a model predictive controller (MPC) for ammonia nitrogen, which gives better results in terms of ammonia removal and aeration energy consumption than PI controller [11]. Mulas proposed a dynamic matrix-based predictive control algorithm, which is able to decrease the energy consumption costs and, at the same time, reduce the ammonia peaks and nitrate concentration [12]. Han et al. proposed an efficient self-organizing sliding-mode controller (SOSMC) to suppress the disturbances and uncertainties of WWTP [13]. However, in the above algorithm, the concentration setting values of the key variables in sewage process are fixed or changed according to the preset trajectories, without considering the real-time influence of sewage quality and flow rate.
Sewage treatment is a complex dynamic reaction process. To reduce EC while meeting EQ standards, more and more intelligent algorithms have been presented to dynamically optimize the setting values of key variables in WWTP. For example, Hakanen et al. designed multiobjective interactive wastewater treatment software based on differential evolution (DE), using variables such as the S O setpoint in the last aerobic zone and the methanol dose as decision variables [14]. Han et al. proposed a Hopfield neural network method (HNN) based on the Lagrange multiplier for the optimal control of pre-denitrification WWTP [15]. Yang used an artificial immune network-based combinatorial optimization algorithm (Copt-aiNet) to determine the optimal set values of S O and S NO [16]. In [17], an adaptive multi-objective evolutionary algorithm based on decomposition (AMOEA/D) is developed with the usage of EC and EQ as objectives to be optimized.
However, sewage treatment is a cyclical process, that is, optimization calculations should be performed in intervals, which can result in high fitness evaluations (FEs) cost for optimization. In the above intelligent control algorithm, sewage treatment information is not fully utilized. The subsequent optimization does not extract useful information from the previous optimization process, and the previous optimization does not play a guiding role for the subsequent optimization.
In the cycle optimization process, information storage and reuse can improve computing efficiency and sewage treatment effect. Inspired by reinforcement learning mentioned in [18,19], and considering the simple operation and fast convergence of particle swarm optimization algorithm (PSO) [20][21][22][23], we propose a wastewater treatment control method based on reinforcement learning particle swarm optimization (RLPSO). This method introduces a reinforcement learning strategy in the particle update. First, select the elite particles, record their concentration setting values and adjustment trends, and construct an elite particle set. Then an elite network was trained and used as the strategy function to predict the particle velocity. Finally, a simplified evaluation method is utilized to calculate the state value function which is used to update the elite network model.
The remainder of the paper is organized as follows. The next section introduces the international Benchmark Simulation Model 1 (BSM1) of WWTP and optimization objective function. The subsequent section describes RLPSO in detail. Then the experiment results and analysis are shown. The final section provides the conclusion and outlook.
Wastewater treatment process optimization
In the WWTP, the main reaction is carried out by the biological reactor and the secondary sedimentation tank. The biological reactor consists of five units. The first two units are anoxic zones, which mainly complete the denitrification reaction, while the last three are aerobic zones, which mainly complete the nitrification reaction. To evaluate and compare different optimal control strategies, the Benchmark Simulation Model 1 (BSM1) [24-26] was developed by the IWA (International Water Association) and COST (European Cooperation in the Field of Science and Technology); its architecture is shown in Fig. 1. In BSM1, there are two control loops, for S_O and S_NO. The first control loop tunes the dissolved oxygen concentration in the fifth unit, S_O, by changing the oxygen transfer coefficient K_La5. The second control loop tunes the nitrate nitrogen level in the second unit, S_NO, by changing the internal recirculation flow rate Q_a. Both control loops adopt proportional-integral (PI) controllers. However, due to the influence of weather or users, the sewage quality keeps changing; if S_O or S_NO is set at a constant value, it is difficult to maintain the optimal balance between EQ and EC. Therefore, it is necessary to dynamically optimize the set values of S_O and S_NO and construct an optimized control strategy aimed at reducing EC on the premise of ensuring qualified EQ.
Fig. 1 The architecture of the BSM1
Aeration energy (AE) and pumping energy (PE) consumption account for more than 70% of the total energy consumption, so the EC of the optimization problem is defined as the sum of AE and PE, i.e., EC = AE + PE. According to the BSM1 mechanism model, AE and PE are defined in terms of the following quantities [27]: K_La,i is the oxygen transfer coefficient and V_i the volume of the ith biological reactor; S_O,sat is the saturation concentration of oxygen; T is the evaluation period; Q_a, Q_r, and Q_w denote the internal recycle flow rate, the return sludge recycle flow rate and the waste sludge flow rate, respectively; and Z_a, Z_r, and Z_w are the corresponding component concentrations.
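The display equations defining AE and PE did not survive extraction. For reference, the expressions commonly quoted in the BSM1 benchmark description are reproduced below; they are assumed here to match the authors' definitions, which may differ in detail (the concentrations Z_a, Z_r and Z_w mentioned above do not appear in this standard form of PE).
\[
AE = \frac{S_{O,\mathrm{sat}}}{1800\,T}\int_{t_0}^{t_0+T}\sum_{i=1}^{5} V_i\,K_{La,i}(t)\,\mathrm{d}t ,
\qquad
PE = \frac{1}{T}\int_{t_0}^{t_0+T}\bigl(0.004\,Q_a(t)+0.008\,Q_r(t)+0.05\,Q_w(t)\bigr)\,\mathrm{d}t .
\]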
EQ represents the fine to be paid for discharging water pollutants to the receiving water body. According to the definition in BSM1 [27], EQ is a flow-weighted integral of the effluent pollutant concentrations, where SS, COD, S_NO, S_NKj and BOD_5 are the suspended solids concentration, chemical oxygen demand, nitrate concentration, Kjeldahl nitrogen concentration, and biochemical oxygen demand, respectively. The EQ value will impact the operating cost of the WWTP if the effluent discharge fee is executed strictly.
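The corresponding display equation was likewise lost in extraction; in the standard BSM1 formulation, which the authors' equation is assumed to follow, EQ takes the form
\[
EQ = \frac{1}{1000\,T}\int_{t_0}^{t_0+T}\bigl(\beta_{SS}\,SS(t)+\beta_{COD}\,COD(t)+\beta_{NKj}\,S_{NKj}(t)+\beta_{NO}\,S_{NO}(t)+\beta_{BOD}\,BOD_5(t)\bigr)\,Q_e(t)\,\mathrm{d}t ,
\]
where Q_e is the effluent flow rate and the β are the BSM1 pollutant weighting factors.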
In addition to EQ, five effluent parameters should meet the standards specified in BSM1 [28], where N_tot = S_NO + S_NKj and S_NH denotes the effluent ammonium concentration.
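The constraint list itself was lost in extraction; the effluent limits specified in BSM1 are conventionally quoted as follows (values assumed from the standard benchmark description, consistent with the 4 mg/L ammonium limit cited in the results section):
\[
N_{tot} < 18\ \mathrm{g\,N\,m^{-3}},\quad COD < 100\ \mathrm{g\,COD\,m^{-3}},\quad S_{NH} < 4\ \mathrm{g\,N\,m^{-3}},\quad SS < 30\ \mathrm{g\,SS\,m^{-3}},\quad BOD_5 < 10\ \mathrm{g\,BOD\,m^{-3}} .
\]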
In summary, the constrained objective optimization function of the WWTP is min f = c ⋅ EC + EQ, where c is the weight coefficient and the set values of S_O and S_NO are the decision variables. Since sewage treatment is a dynamic and periodic optimization process, we propose an RLPSO control strategy that minimizes the objective optimization function (6) by dynamically adjusting the set values of S_O and S_NO, so as to improve the sewage treatment efficacy and reduce the operating cost.
Reinforcement learning-based particle swarm optimization
Particle swarm optimization
PSO originated from the study of the preying behavior of bird flocks; its basic idea is that the whole swarm tends to follow the bird that has found the best path to food [29]. To search for an optimum, PSO defines a swarm of particles that represent potential solutions to the optimization problem. Each particle starts from a random initial position and flies through the D-dimensional solution space. The flying behavior of each particle is described by its velocity and position updates, Eqs. (7) and (8):
v_id(k+1) = ω ⋅ v_id(k) + c_1 r_1 (p_id − x_id(k)) + c_2 r_2 (p_gd − x_id(k)),   (7)
x_id(k+1) = x_id(k) + v_id(k+1),   (8)
where X_i = (x_i1, x_i2, …, x_id, …, x_iD) is the position vector of the ith particle; P_i = (p_i1, p_i2, …, p_id, …, p_iD) is the best position found by the ith particle; P_g = (p_g1, p_g2, …, p_gd, …, p_gD) is the global best position found by the whole swarm; c_1, c_2 are two learning factors, usually c_1 = c_2 = 2 [29]; r_1, r_2 are random numbers in (0, 1) [30]; and ω is the inertia weight controlling the velocity, which may decrease linearly from 0.9 to 0.4 or be chosen in (0, 1) [30].
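As an illustration only, a minimal NumPy sketch of the update in Eqs. (7)-(8) is given below; the array shapes and function names are ours, not the authors'.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7, c1=2.0, c2=2.0, rng=np.random.default_rng()):
    """One standard PSO update (Eqs. (7)-(8)) for an (N, D) swarm."""
    r1 = rng.random(x.shape)   # r_1 ~ U(0, 1)
    r2 = rng.random(x.shape)   # r_2 ~ U(0, 1)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # Eq. (7)
    x_new = x + v_new                                                 # Eq. (8)
    return x_new, v_new
```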
The optimization of the WWTP is a periodic process. This is because the WWTP is a complex system with large lag, which is difficult to optimize in real time; it is therefore necessary to set a cycle time and carry out the optimization calculation in each cycle. However, PSO uses random initialization to improve diversity and, during the optimization, considers only the individual optimum and the global optimum, ignoring the inherent properties of the system. If PSO is directly applied to the WWTP, information from previous cycles provides no guidance for subsequent optimization processes, which leads to low efficiency. To improve the treatment effect, it is necessary to record the influence of the set values of S_O and S_NO on the sewage parameters and reuse this information in the optimization process, providing reference data for the next optimization calculation. We therefore add a prediction term to Eq. (7):
v_id(k+1) = ω ⋅ v_id(k) + c_1 r_1 (p_id − x_id(k)) + c_2 r_2 (p_gd − x_id(k)) + r ⋅ v̂_id,   (9)
where v̂_id is the dth-dimensional velocity of particle i predicted by the strategy function μ, and r is the prediction coefficient. According to Eq. (9), the velocity of a particle is determined by four parts: the inertial velocity, the individual historical optimum, the global optimum, and the prediction term. On the one hand, this retains the advantages of PSO, namely both self-cognition and group sociality; on the other hand, the prediction term infuses PSO with historical information, which makes it more suitable for repeated cyclic optimization problems. To determine the prediction term v̂_id, we introduce a reinforcement learning (RL) [31] strategy into PSO.
Reinforcement learning strategy
Reinforcement learning interacts with the environment through a trial-and-error mechanism and learns optimal strategies by maximizing cumulative rewards. Reinforcement learning agent mainly includes four basic elements: environment, state (s), action (a) and reward (R) [31]. During operation, the agent determines an action a according to the current state s through the strategy function μ, executes the action, and enters the next state. At the same time, the system returns the value R to reward or punish the action. The process runs repeatedly to maximize the expected benefits of the agent.
In a similar way, reinforcement learning-based PSO (RLPSO) includes four basic elements, shown in Fig. 2. In this paper, the agent is a particle in the population and the environment is the WWTP. The state s is the position X of each particle in the population; the action a is the velocity prediction strategy, which is determined by the strategy function μ. The reward value R is related to the fitness value f of the optimization problem. Therefore, to obtain the predicted particle velocity v̂_id, we need to establish the strategy function μ according to the reward value R.
In RLPSO, the particle agent predicts its velocity according to the strategy function μ, i.e., V̂_i = μ(X_i). In this paper, the strategy function μ is realized as an elite network model, which is trained by learning the information of elite particles. The process is divided into three main steps: elite particle set construction, strategy function training, and elite network model evaluation. The details are described as follows.
Elite particle set construction
The elite network model is trained with elite particle information to guide the search of the offspring population. The first step in the kth iteration is to select elite particles based on the reward value R(k). In the iteration process, the reward value R(k) is determined by the change in the fitness value: a particle whose fitness value improves relative to the previous iteration receives R(k) = 1, and R(k) = −1 otherwise, where f(k) is the fitness value at the kth iteration, k = 0, …, K − 1, and K is the maximum number of iterations of each run. Only the particles with reward value R(k) = 1 are selected as elite particles, and the position X_i(k) before the update and the velocity V_i(k) after the update of each elite particle are saved to construct the elite particle set Ω_e.
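A sketch of this bookkeeping step is shown below, assuming that an improvement in fitness (a decrease, for this minimisation problem) is the criterion for R(k) = 1; the helper name and data layout are illustrative, not the authors'.

```python
import numpy as np

def update_elite_set(elite, X_before, V_after, f_prev, f_curr, n_max=200):
    """Append this iteration's elite particles to Omega_e and enforce its capacity.

    elite    : list of (fitness, position, velocity) tuples, i.e. the set Omega_e
    X_before : positions before the velocity/position update, shape (N, D)
    V_after  : velocities after the update, shape (N, D)
    f_prev, f_curr : fitness values before and after the update, shape (N,)
    """
    for i in np.flatnonzero(f_curr < f_prev):          # reward R(k) = 1
        elite.append((float(f_curr[i]), X_before[i].copy(), V_after[i].copy()))
    elite.sort(key=lambda e: e[0])                     # best (lowest) fitness first
    del elite[n_max:]                                  # keep at most n_max entries
    return elite
```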
Strategy function training
The elite particle set Ω_e stores the position x of each elite particle before the update and its velocity v after the update. RLPSO uses an elite particle set of limited capacity. Suppose the size of Ω_e is N_e, Ω′_e is the newly generated elite particle set, and its size is N′_e. If N_e + N′_e exceeds the finite capacity value N_em, all the elite particles in Ω_e ∪ Ω′_e are sorted according to their fitness values, only the first N_e items are stored back into Ω_e, and the original data are overwritten. The elite particle set Ω_e is then used as a training data set in which the particle position is the input and the velocity is the output, and a neural network is trained on it to obtain the elite network model Φ. The trained elite network model Φ is used as the strategy function μ to guide the particle search. With the elite network model Φ, the particle velocity can be predicted from the particle position as V̂_i = Φ(X_i).
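As an illustration of the training step, the sketch below fits a small fully connected regressor mapping elite positions to elite velocities; the network size, library and hyperparameters are our own choices rather than those reported in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_elite_network(elite):
    """Train the elite network Phi (strategy function mu) from the set Omega_e."""
    X = np.array([pos for _, pos, _ in elite])      # inputs: particle positions
    V = np.array([vel for _, _, vel in elite])      # targets: post-update velocities
    phi = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    phi.fit(X, V)
    return phi

# Predicted velocities for current positions X (shape (N, D)):  v_hat = phi.predict(X)
```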
Elite network model evaluation
With the continuous update of the elite particle set Ω_e, the elite network model must be evaluated after each training. During the evaluation process, the new model and the original model are both used to guide the particle optimization process. To better reflect the influence of the strategy function μ on the particle velocity update, the RLPSO velocity update equation is simplified during the evaluation. When the termination condition k ≥ K is satisfied, the optimal fitness value obtained under the guidance of the new elite network model is denoted f*_1, and the optimal fitness value obtained with the original network model is denoted f*_2. If f*_1 > f*_2, the prediction effect of the new network model is considered better, and the new network receives the reward value R(K) = 1 after the iteration; otherwise R(K) = −1.
Considering the randomness of the particles, the above evaluation process is repeated M times to estimate the state value function V̂(X) = (1/M) Σ_m R_m(K), where V̂(X) represents the average reward obtained after the particle X moves according to the strategy function μ. If V̂(X) > 0, the new model is considered better than the original model and the new network replaces the original one; otherwise, the original network is retained. By comparing the two models, we determine the prediction model required by the subsequent algorithm.
Algorithm procedure
The RLPSO algorithm procedure is described below.
1. Initialize the particle positions X_i and velocities V_i, i = 1, 2, …, N.
2. Let Run = 1. Update the particle positions and velocities according to Eqs. (7) and (8). In the iterative process (k < K), select the particles with reward value R(k) = 1 as elite particles, establish the elite particle set Ω_e and train the elite network model Φ.
3. Randomly generate N particles. Let r ≠ 0, use the elite network model Φ to predict the particle velocities v̂_id, and update the particle positions and velocities according to Eqs. (8) and (9).
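A compact sketch of one optimisation run, tying the pieces above together (and reusing the update_elite_set and train_elite_network helpers sketched earlier), is given below; it is an illustrative reading of the procedure, not the authors' code.

```python
import numpy as np

def rlpso_run(fitness, lo, hi, phi=None, n=10, k_max=40,
              w=0.4, c1=2.0, c2=2.0, r=0.3, rng=np.random.default_rng(0)):
    """One run of the RLPSO procedure; fitness maps an (n, D) array to n values,
    and phi is the elite network trained in earlier runs (None -> plain PSO)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    X = lo + (hi - lo) * rng.random((n, lo.size))
    V = np.zeros_like(X)
    f = fitness(X)
    P, pf = X.copy(), f.copy()                  # personal bests
    g = X[np.argmin(f)].copy()                  # global best
    elite = []
    for k in range(k_max):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)     # Eq. (7)
        if phi is not None:
            V = V + r * phi.predict(X)                        # prediction term, Eq. (9)
        X_before = X.copy()
        X = np.clip(X + V, lo, hi)                            # Eq. (8)
        f_new = fitness(X)
        elite = update_elite_set(elite, X_before, V, f, f_new)
        better = f_new < pf
        P[better], pf[better] = X[better], f_new[better]
        g = P[np.argmin(pf)].copy()
        f = f_new
    return g, float(pf.min()), elite
```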
Simulation experiment of RLPSO based on BSM1
The proposed RLPSO is simulated on the BSM1 platform and compared with a PI controller, CPSO [32], SLPSO [33], PSO [34], APSO [35], DE [36], HNN [15], Copt-ai Net [16] and AMOEA/D [17]. The simulation conditions are based on the dry (sunny) weather conditions of BSM1. The parameters of the HNN, Copt-ai Net and AMOEA/D algorithms are taken from the original papers. The parameters of the other algorithms are set as follows.
The simulation data cover 14 days. The sampling interval is 15 min and the optimization period is 2 h, so a total of 168 runs are conducted for each algorithm. One difficulty in employing RLPSO in BSM1 is the huge time consumption of fitness evaluations (FEs): the algorithm is required not only to achieve the optimization accuracy but also to converge quickly. Therefore, for RLPSO, the population size N is set to 10, r = 0.3, ω = 0.4, D = 2, and K_max = 40; c_1 and c_2 are 2. The resulting number of FEs is 67,200. Early experiments show that RLPSO with these parameter settings is nearly convergent by the 40th iteration and meets the requirements on EQ and EC. For the convenience of comparison, the inertia weights of the other PSO-based algorithms are all set to 0.7. In the DE algorithm, the mutation rate is 0.5 and the crossover probability is 0.9. The population size and number of iterations of these algorithms are the same as for RLPSO. Table 1 shows the comparison of EQ and EC under the different strategies. As can be seen from Table 1, compared with the PI strategy, all of these intelligent algorithms can reduce EC by optimizing the set values of S_O and S_NO. Among them, the EC obtained by the PSO algorithm, 3652.40 kWh/d, is lower than that of RLPSO. However, the S_NH concentration obtained by PSO is 4.19 mg/L, which exceeds the limit of 4 mg/L. Similarly, the S_NH concentrations obtained by DE and APSO also exceed the standard. Besides, the EC obtained by CPSO, SLPSO, HNN, and Copt-ai Net is clearly higher than that of RLPSO, which shows that RLPSO is superior to these algorithms.
We can also see from Table 1 that the EC obtained by AMOEA/D is slightly lower than that of RLPSO, but its EQ is higher, so the performance of the two algorithms is comparable. It should be noted, however, that in the AMOEA/D strategy the population size N is 100 and K_max = 300. Thus, in each optimization cycle, the number of FEs of RLPSO is only 1/75 of that of AMOEA/D. RLPSO obtains an EC similar to AMOEA/D with significantly fewer FEs, which shows that RLPSO is more suitable for the sewage treatment process.
RLPSO simulation experiment based on benchmark functions
To further study the performance of RLPSO, the algorithm is analyzed on high-dimensional general benchmarks. Six benchmark functions of different types (the Rastrigin, Griewank, Ellipsoid, Rosenbrock, Sphere, and Ackley functions) are used to compare RLPSO with CPSO, SLPSO, PSO, APSO and DE. The population size is N = 10, and the dimension D is set to 10 and 20, respectively. Each algorithm runs 50 times, and the maximum number of iterations per run is 200. The other parameters of the different algorithms are the same as in the BSM1 experiment. In the experiments, f*_j represents the optimal fitness value obtained in the jth run, j = 1, 2, …, 50. Figures 3, 4, 5, 6, 7 and 8 show boxplot comparisons of f* obtained by the various algorithms; as can be seen from these figures, the results of the compared algorithms fluctuate significantly between runs. However, it can be seen from Figs. 9a-12a that f* of RLPSO tends to converge. This is because RLPSO relies on the elite neural network to transmit information between different runs, so the previous optimization can guide the subsequent optimization. It should be noted that, as can be seen from Figs. 13a-14a and 9b-14b, RLPSO still fluctuates. This is because an elite network with a fixed structure was used in the training process, which degrades RLPSO performance on more complex or higher-dimensional benchmarks. Nevertheless, the fluctuation range of RLPSO is significantly smaller than that of CPSO or DE. Tables 2 and 3 list the best, worst, mean and standard deviation values of f* for RLPSO, CPSO, SLPSO, PSO, APSO and DE. It can also be seen from the tables that the best value of RLPSO is weaker than that of DE on the Rastrigin function, but its mean and standard deviation are better. For the other benchmarks, all performance statistics of RLPSO are the best, which demonstrates the accuracy, robustness and effectiveness of the RLPSO algorithm.
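For reference, three of the benchmark functions named above are reproduced below in their usual textbook forms (each has its global minimum of 0 at the origin); we assume these match the definitions used by the authors.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def rastrigin(x):
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def ackley(x):
    d = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)
```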
Conclusions
In this paper, we proposed an RLPSO algorithm to solve the WWTP optimization problem. On the one hand, the method is based on the theory of reinforcement learning: through continuous interaction between environment and action, it adjusts the strategy according to the feedback information and finally determines the optimal concentration set values under various conditions. On the other hand, the method is based on the swarm intelligence algorithm PSO, which, in the WWTP application, helps to improve the diversity of the solutions when searching for the globally optimal concentration set values. Besides, the method has an elite network with a memory function: it records the influence of the set values of S_O and S_NO on the sewage parameters and reuses this information, providing reference data for the next optimization calculation and thereby improving the treatment effect.
In summary, the RLPSO algorithm proposed in this paper can not only meet the effluent standard but also reduce the operating cost, providing a feasible solution for actual sewage treatment plants. In the future, we will continue to study the sewage treatment system, carry out data mining [37, 38], and seek a better optimal control method. In addition, we will apply RLPSO to more practical problems, such as robot control [39, 40] and sEMG-based human-machine interaction [41]. | 5,542.6 | 2021-05-28T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
On the effect of metal loading on the reducibility and redox chemistry of ceria supported Pd catalysts.
The effect of Pd loading on the redox characteristics of a ceria support was examined using in situ Pd K-edge XAS, Ce L3-edge XAS and in situ X-ray diffraction techniques. Analysis of the data obtained from these techniques indicates that the onset temperature for the partial reduction of Ce(IV) to Ce(III), by exposure to H2, varies inversely with the loading of Pd. Whilst the onset and completion temperatures of the reduction of Ce(IV) to Ce(III) are different, both samples yield the same maximal fraction of Ce(III) formation independent of Pd loading. Furthermore, the partial reduction of Ce is found to be concurrent with the reduction of PdO, demonstrating that the presence of metallic Pd is necessary for the reduction of the CeO2 support. Upon passivation by room-temperature oxidation, a full oxidation of the reduced ceria support was observed, whereas only a mild surface oxidation of Pd was identified. The mild passivation of the Pd is found to lead to a highly reactive sample upon a second reduction by H2. After mild passivation, the onset of the reduction of Pd and Ce is demonstrated to be independent of the Pd loading, with both samples exhibiting near-room-temperature reduction in the presence of H2.
Introduction
Ceria (CeO2) has extensive uses in the field of catalysis 1 as well as other important industrial applications. 2-5 Platinum group metals (PGMs) 6 and transition metals 7 can be loaded onto a ceria support either separately or in conjunction with other metals, e.g. in bimetallic systems. Once dispersed, the metals appear as either single particles or atomic clusters. 8-10 The catalytic properties of the material change as a function of the particle or cluster size. As the particle volume decreases, the surface area increases, resulting in an increase in their efficiency. 11 An added benefit of a reduced volume is reduced cost. 12 Additionally, the combination of PGMs and their support can improve the oxygen storage capacity (OSC) and redox operation. 13,14 The OSC of the ceria combined with the catalytic properties of the noble metal nanoparticles makes for a highly useful material. 15 It has been suggested that the support enhances the activity of noble metal catalysts due to strong metal-support interactions (SMSIs). 16,17 There are several ways by which the SMSI effect can be explained, e.g. formation of PGM-O-Ce bonds, 18 alloy formation, 19 diffusion of the PGM into the support and/or encapsulation of the PGM by the support, 20 and partial reduction of ceria by the PGM creating bridging hydroxyl moieties. 21 In general, the reduction of an oxide support changes the chemical properties of the noble metals dispersed on its surface. In addition to being a reversible process, 22 the reducibility of the support is improved: H2 is dissociated into atomic hydrogen by the noble metal while nearby oxygen atoms located on the support surface are removed. The localization of oxygen-vacancy electrons results in the reduction of Ce(IV) to Ce(III). 23-25 Charge transfer can occur via electronic interactions between the respective components in the system. 26-29 Chemical interactions, such as redox reactions, complicate these interactions and often influence the catalytic performance and reducibility of the loaded metal. 30-32 Among the various PGMs, the use of Pd on ceria has been well documented, for example in automotive exhaust catalysis, 33 steam reforming, 34 methanol synthesis, 35 abatement of methane, 36 C-H bond activation 37 and various catalytic oxidation processes. 38-41 In order to understand the reduction properties of palladium ions and their influence on the reactivity of a ceria support, it is necessary to study both the short- and long-range structure of the system and to obtain element-specific information probing the supported Pd metal ions. This is accomplished by combining suitable, complementary structural methods. One such method is X-ray absorption spectroscopy (XAS), which comprises X-ray absorption near edge structure (XANES) and extended X-ray absorption fine structure (EXAFS); it is element specific and useful for determining short-range geometric and electronic structural information. X-ray diffraction (XRD), on the other hand, is a powerful technique for determining the long-range structural order present in a given system, but is less informative on dispersed metal ions and nano-crystalline metal particles present in small concentrations. Therefore, it is ideal to use a combination of these methods to probe the structure of a catalytic system and determine how the metal ions and the support dynamically interact in catalysis.
Here we report the use of in situ Pd K-edge XAS, Ce L3-edge XAS, and XRD techniques to determine the effect of the palladium concentration on the reactivity of the Pd/CeO2 system.
Experimental
The high-surface-area ceria support was obtained from Rhodia-Solvay. The surface area of the support material, measured by BET analysis, was 130 m² g⁻¹. The supported Pd catalysts were prepared by an incipient wetness impregnation method. Appropriate amounts of a Pd nitrate solution (Johnson Matthey) were used to load palladium onto the ceria support. The materials were dried at 105 °C and calcined at 500 °C, and these as-prepared catalysts were used for further in situ and ex situ characterization studies. XRD performed on the as-received samples is shown in Fig. S1 in the ESI,† demonstrating phase purity and an average ceria crystallite size of approximately 5 nm, as reported in Table S1 (ESI†) and previously published. 42 ICP analysis quantifying the Pd content of the 1 wt% and 5 wt% Pd samples and the impurity content of the high-surface-area (HSA) ceria support is reported in Tables S2 and S3 (ESI†), respectively.
In situ XAS data of the 1 and 5 wt% Pd supported on ceria were acquired at the Ce L3-edge (5723 eV) and the Pd K-edge (24350 eV). XAS spectra were collected in step-scan acquisition mode using transmission geometry at the BM26A beamline 37 of the ESRF, equipped with a Si(111) double-crystal monochromator. Measurements of all samples were carried out in transmission mode using ionization chambers. In a typical experiment at the Ce L3-edge, pellets were made from 4 mg of the ceria samples mixed with 40 mg of fumed silica; at the Pd K-edge, typically 100 mg of the ceria samples were used. The samples were purged under N2 prior to exposing the catalyst to a flow of 5% H2/N2. Data were obtained at room temperature, followed by measurements at various temperatures during the ramp from room temperature to 450 °C at 5 °C min⁻¹. The samples were then cooled in 5% H2/N2 to room temperature while data were collected at various temperatures during this process. Between the first and second cycles of H2 treatment the samples were exposed to synthetic air at room temperature for 30 minutes. The second reduction cycle was only performed in part, heating to 100 °C in 5% H2/N2 at 5 °C min⁻¹ at the Pd K-edge, as our aim was to monitor the initial stages of the reduction. Data processing and analysis were performed using Athena. 43 Linear combination fitting (LCF) analysis of the Ce L3-edge XANES was performed using a freshly calcined sample of the bare ceria support and cerium nitrate as reference materials representing Ce(IV) and Ce(III), respectively. Similarly, LCF analysis at the Pd K-edge was performed using a Pd metal foil and PdO as reference materials.
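In practice the LCF was carried out in Athena; purely as an illustration of the arithmetic involved, a two-component fit of a normalised XANES spectrum can be written as follows (function and variable names are ours).

```python
import numpy as np

def lcf_fractions(mu_sample, mu_ref_iv, mu_ref_iii):
    """Two-component linear combination fit of a normalised XANES spectrum.

    mu_sample, mu_ref_iv, mu_ref_iii : absorption on a common energy grid,
    e.g. the calcined ceria (Ce(IV)) and cerium nitrate (Ce(III)) references.
    Returns (f_iv, f_iii) with the fractions constrained to sum to one.
    """
    d = (mu_ref_iv - mu_ref_iii).reshape(-1, 1)
    coeff, *_ = np.linalg.lstsq(d, mu_sample - mu_ref_iii, rcond=None)
    f_iv = float(np.clip(coeff[0], 0.0, 1.0))
    return f_iv, 1.0 - f_iv
```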
X-ray diffraction patterns were obtained at the 11-ID-B beamline at the APS using a 2D detector. The wavelength used was λ = 0.1430 Å, giving an accessible Q range of 0.55 to 27.93 Å⁻¹. The samples were prepared using a sieve fraction between 100 and 150 μm to aid gas flow through the sample. These were loaded into a 0.9 mm (internal diameter) fused silica capillary with a sample bed length of approximately 1 cm, and quartz wool plugs were used on either side of the sample to inhibit movement of the sample with gas flow. Metal furnace heating elements mounted above and below the capillary were used to control the temperature. 44 All samples were measured under a 3.5% H2/He atmosphere during continuous heating and cooling at a rate of 10 °C min⁻¹ between 30 °C and 450 °C. The samples were also held at 450 °C for 10 minutes prior to cooling. Between the first and second cycles of H2 treatment the samples were exposed to 5% O2/He at room temperature. XRD patterns were azimuthally integrated using FIT2D. 45 The XRD patterns were then refined using the GSAS software 46 with a batch Rietveld refinement method between 1.8 and 22° 2θ.
Results and discussion
First, we discuss the results of the temperature-programmed reduction (TPR) experiments, then the XAS analysis, followed by XRD. In Fig. 1 the TPR profiles, normalized by their integral intensity, are reported for 1 wt% Pd/CeO2, 5 wt% Pd/CeO2 and the high-surface-area CeO2 support to highlight the temperatures at which H2 consumption occurs. The 5 wt% Pd/CeO2 sample exhibits a maximum of H2 uptake at approximately 65 °C, while that of the 1 wt% Pd/CeO2 sample is shifted to approximately 155 °C, and HSA CeO2 begins to show H2 uptake only above 300 °C.
These measurements clearly show that the higher the Pd content of the sample, the lower the temperature at which the maximum H2 consumption appears. However, while the TPR measurements give a strong indication that the Pd content strongly influences the reduction temperature, the method is insensitive to the individual components of the sample and cannot elucidate whether there is a synergistic effect between the reduction of Pd(II) to the metallic state and the formation of oxygen vacancies with reduction to Ce(III) in the CeO2 support.
Pd K-edge XAS
To address the question of the role of Pd in promoting the reduction of CeO2, we have employed XAS at both the Pd K-edge and the Ce L3-edge to provide detailed insight into the electronic structure changes of the Pd and Ce ions, along with XRD to monitor the geometric structural evolution. Figure 2 shows the Pd K-edge XANES spectra of the 1 and 5 wt% Pd/CeO2 catalysts recorded during reaction with hydrogen and heating to 450 °C, followed by cooling in a H2/N2 atmosphere and subsequent room-temperature exposure to synthetic air, and finally a second reduction up to ca. 100 °C (Fig. 2A and B). To analyse the Pd K-edge data, multivariate curve resolution (MCR) was performed using an alternating least squares approach. 47,48 While such analysis is typically performed using a linear combination fitting (LCF) method, here the nano-crystalline nature of the supported Pd leads to significant dampening of the oscillations in the post-edge region, which results in a significant misfit between experimental and fitted data. MCR overcomes this by computationally separating the pure spectral components present in a dataset consisting of multiple spectra obtained during an in situ experiment. Using MCR analysis it was possible to extract the significant components which, when compared to standard reference materials, can be readily identified as oxidic Pd(II) and nano-crystalline metallic Pd(0) species, see Fig. S2 (ESI†).
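A bare-bones illustration of the alternating least squares step at the heart of MCR is sketched below (the actual analysis used the cited MCR-ALS implementations with appropriate constraints); it is meant only to convey how the pure components and their weights are obtained.

```python
import numpy as np

def mcr_als(D, C0, n_iter=200):
    """Minimal MCR-ALS: factor D (n_spectra x n_energies) into C @ S.

    C0 : initial guess for the component weights (n_spectra x n_components).
    Non-negativity is imposed on both factors and each row of C is normalised
    (closure). Returns the component weights C and pure-component spectra S.
    """
    C = C0.copy()
    for _ in range(n_iter):
        S = np.linalg.lstsq(C, D, rcond=None)[0]            # spectra given weights
        S = np.clip(S, 0.0, None)
        C = np.linalg.lstsq(S.T, D.T, rcond=None)[0].T      # weights given spectra
        C = np.clip(C, 0.0, None)
        C /= np.clip(C.sum(axis=1, keepdims=True), 1e-12, None)
    return C, S
```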
The XANES data of both the 1 wt% Pd/CeO2 and 5 wt% Pd/CeO2 samples in their respective initial states clearly resemble that of PdO, thus confirming the presence of Pd(II) in the as-prepared catalysts (black curves in Fig. 2A and B). Upon reduction by heating in 5% H2/N2, the XANES data of both the 1 and 5 wt% samples are similar to those of Pd metal, both in the edge position and at the top of the edge, and are interpreted as the formation of nano-crystalline Pd particles on the surface of the CeO2 support (red curves). While at 450 °C both samples are shown to be metallic, the reduction onset temperatures are notably different. The 1 wt% Pd/CeO2 sample requires a temperature of more than 100 °C before the Pd is reduced to the metallic state, see Fig. 2C, whereas the majority of the 5 wt% Pd/CeO2 sample is reduced by 60 °C, Fig. 2D. These findings demonstrate that the reducibility of Pd is likely hindered by a greater dispersion of Pd on the CeO2 surface.
Fig. 2 Example of stacked (offset for clarity) Pd K-edge data showing the MCR analysis of the 1 wt% Pd/CeO2 (A) and 5 wt% Pd/CeO2 (B) samples after the initial reduction in 5% H2/N2 at 450 °C, cooled to 30 °C in 5% H2/N2, after passivation with synthetic air and after the second reduction in 5% H2/N2; spectra in black, red, blue, green and purple, respectively. The raw data are shown as points whereas the MCR fits are given as solid lines. The results of the MCR analysis, giving the proportion of oxidic (black) and metallic (red) Pd and the lack-of-fit % (blue), are shown for the 1 wt% Pd/CeO2 (C) and 5 wt% Pd/CeO2 (D) samples.
To investigate this point further, EXAFS analysis was performed on the data collected after reduction and cooling to 30 °C while remaining in the H2 atmosphere. Curve fitting of the EXAFS can be used to estimate the Pd particle size through determination of the average Pd-Pd coordination number. The results of the EXAFS fitting using Artemis 49 are given in Table 1. Using the empirically derived Hill equation, 50 coordination numbers of 5.9 and 7.8 equate to spherical particle sizes of approximately 0.4 nm and 0.7 nm for the 1 wt% Pd and 5 wt% Pd samples, respectively. The EXAFS analysis confirms the well-dispersed nature of Pd in these supported samples even after reduction in hydrogen, with a smaller Pd particle size obtained for 1 wt% Pd/CeO2. Figures demonstrating the quality of the fits in R- and k-space are given in the ESI,† Fig. S3.
After the samples were cooled to room temperature, passivation in synthetic air was performed, which results in a partial oxidation of Pd(0) to Pd(II). The degree of oxidation is shown to depend on the Pd content and may be related to the metallic Pd cluster size, as the 1 wt% Pd/CeO2 sample oxidises to a greater degree than the 5 wt% Pd/CeO2 sample. The white line intensities in Fig. 2A and B demonstrate that the two samples differ, with an increased intensity observable for the 1 wt% Pd/CeO2 sample. However, neither sample is observed to undergo a full oxidation of the Pd ions within the 30 minute exposure to synthetic air at 30 °C. The MCR analysis reported in Fig. 2C and D reveals that 1 wt% Pd/CeO2 converts to approximately 45% Pd(II), while for 5 wt% Pd/CeO2 the degree of oxidation is only 30%. A direct overlay of the different spectra is given in Fig. S4 (ESI†) for clarity. A possible explanation for this result is that the smaller Pd particles found on 1 wt% Pd/CeO2 have a greater surface-area-to-volume ratio, resulting in more surface atoms being exposed and readily oxidized. A second reduction step was then performed by heating in 5% H2/N2, with both samples showing a rapid reduction back to metallic Pd at temperatures below 100 °C, as seen in the final state (purple curves) in Fig. 2A and B.
Ce L3-edge XAS
Whilst the XAS experiments performed at the Pd K-edge give access to the oxidation state and local coordination geometry of the Pd in the CeO2-supported Pd samples, analysis of the Ce L3-edge provides information on the oxidation state of Ce in the CeO2 support. Figure 3A and B show the Ce L3-edge XANES measured in the initial state, after reduction at 450 °C and after cooling to 30 °C in 5% H2/N2. Figure 3C and D show the results of LCF analysis of the Ce L3-edge data, giving the fractions of Ce(III) during the in situ reduction experiment; an example fit is shown in Fig. S5 (ESI†).
It appears that, whilst both samples show partial reduction of Ce(IV) to Ce(III), the reduction onset temperature depends on the Pd content of the sample. The 5 wt% Pd/CeO2 sample reaches its maximal extent of reduction, ~15% Ce(III), below 150 °C. However, the 1 wt% Pd/CeO2 sample has a reduction onset above 150 °C, with the maximum Ce(III) content being achieved only at approximately 350 °C. In our previous work, 51 we were able to identify that in pure ceria the reduction process has an onset temperature >300 °C and is temperature reversible, implying a mechanism without the formation of oxygen vacancies. Based on the degree of reduction observed from the Ce L3-edge XAS, approximately 15 mol% Ce3+ ions, the stoichiometry of the reduced ceria is estimated as CeO1.85, broadly in the range expected for this class of materials as reported elsewhere. 1 When considering the CeO2-supported Pd samples, the reduction process is clearly demonstrated to be promoted by Pd to lower temperature and is temperature irreversible as a consequence of oxygen vacancy formation. These results demonstrate that the reduction to Ce(III) is promoted by Pd, in direct agreement with the Pd K-edge analysis, and suggest that after reduction of the initial Pd(II), extraction of oxygen from the ceria lattice occurs. This suggests that either reverse oxygen spill-over to the Pd-ceria interface or the spill-over of hydrogen from the Pd surface drives the reduction of the ceria support. Furthermore, in the temperature-programmed reduction experiments shown in Fig. 1, only a single H2 uptake event is observed, suggesting that as soon as PdO is reduced the reduction of ceria proceeds.
In situ XRD
To further support the above results, X-ray diffraction experiments were conducted. The XRD data were analysed to determine the lattice parameter of the CeO2 fluorite phase and, where possible, the evolution of the metallic Pd phase during the in situ reduction experiments. Figure 4A shows the refined CeO2 lattice parameter from Rietveld analysis for the 1 wt% Pd/CeO2 and 5 wt% Pd/CeO2 samples in blue and black, respectively. The temperature is denoted by the red curve and relates to the right Y axis. Typical best fits for the Rietveld analysis are given in Fig. S6 and S7 (ESI†) for the 1 wt% Pd/CeO2 and 5 wt% Pd/CeO2 samples, respectively, showing very strong agreement between the data and the fluorite CeO2 model at all stages of the reduction experiments.
It is clear from Fig. 4A that the onset of an increase in the lattice parameter takes place above room temperature, after admitting 3.5% H2/N2 into the capillary. (Fig. 4C and D give the instantaneous lattice expansion derived from the heating portions of the first and second reduction cycles, respectively.) While both samples exhibit a strong lattice expansion upon heating in H2, the slight horizontal offset between the 1 wt% Pd/CeO2 and 5 wt% Pd/CeO2 samples during the first reduction cycle, Fig. 4A, evidences a difference in the temperature at which the lattice expansion occurs. After the initial rapid expansion of the fluorite CeO2 lattice, an approximately linear expansion can be seen with continuous heating up to 450 °C. The initial rapid increase in the lattice parameter is likely to be associated with the formation of Ce(III) ions, as indicated by the Ce L3-edge XANES analysis. The formation of Ce(III) leads to an expansion of the ceria lattice owing to the larger ionic radius of Ce(III) compared to Ce(IV). 52 However, secondary to the expansion due to the formation of Ce(III) ions within the fluorite lattice, linear thermal expansion also occurs; for ceria the linear thermal expansion coefficient is reported to be approximately 10 × 10⁻⁶ K⁻¹. Upon cooling in the hydrogen atmosphere, the respective lattice parameters are observed to decrease linearly, which is suggested to be due to thermal contraction. On reaching room temperature, the respective lattice parameter values remain higher than the initial starting values, indicating an irreversible structural change, consistent with the findings of the Ce L3-edge XAS analysis.
By considering the instantaneous expansion coefficient it is possible to deconvolute the two competing expansion effects, described by
α(T) = (1/L) ⋅ (dl/dT),
where L refers to the lattice parameter at room temperature, dl/dT to the change in the lattice parameter with temperature and α(T) to the expansion coefficient at temperature T. While the linear thermal expansion of ceria acts as a baseline with a constant value of approximately 10 × 10⁻⁶ K⁻¹ over the whole temperature range, the expansion due to the dynamic formation of Ce(III) results in distinct peaks in the instantaneous expansion coefficient. It is thus possible to directly extract the temperature at which Ce(III) is formed at the greatest rate and to provide insight into the lattice restructuring associated with the CeO2 reduction. Fig. 4C gives the instantaneous expansion coefficient for the 1 wt% Pd/CeO2 and 5 wt% Pd/CeO2 samples during the first heating cycle.
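As a sketch of how this quantity can be obtained from the refined lattice parameters (the exact numerical treatment used by the authors is not specified), a finite-difference estimate is:

```python
import numpy as np

def instantaneous_expansion(T, a, a_room=None):
    """Instantaneous expansion coefficient alpha(T) = (1/L) * da/dT.

    T : temperatures along the heating ramp
    a : refined CeO2 lattice parameter at each temperature
    a_room : room-temperature lattice parameter L (defaults to the first point)
    Peaks above the ~10e-6 K^-1 thermal baseline mark the chemical expansion
    associated with Ce(III) formation.
    """
    L = a[0] if a_room is None else a_room
    return np.gradient(np.asarray(a, float), np.asarray(T, float)) / L
```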
Remarkably, this methodology returns curves that closely resemble those obtained from the TPR measurements shown in Fig. 1, and therefore demonstrates that the nonlinear expansion, due to the reduction of ceria, coincides with the reduction of the supported Pd particles. This method also clearly demonstrates that, during the first reduction cycle, the temperature of the ceria reduction directly correlates with the wt% loading of Pd: higher Pd loadings lead to lower-temperature reduction of the ceria support. It should also be remarked that the respective Pd-loaded samples show an expanded ceria lattice after cooling down in a reducing atmosphere compared with their starting structures. The expanded final lattice provides further evidence that the reduction of ceria, promoted by Pd, is a temperature-irreversible process and confirms the results obtained from the Pd K-edge and Ce L3-edge XAS experiments. However, when considering the second reduction cycle performed after mild passivation through room-temperature reaction with air, shown in Fig. 4B, it is worth noting that the lattice parameter of CeO2 returned to its starting value as soon as air was introduced after the first reduction cycle, suggesting a complete reoxidation of the reduced Ce(III) at this stage. Upon reintroducing 3.5% H2/N2 after the reoxidation, both the 1% and 5% samples exhibit a near-room-temperature partial reduction of Ce(IV) to Ce(III). The respective lattice parameters rapidly increase to almost the same values observed in the first cycle, at a temperature well below 100 °C. Subsequent increases in temperature increase the lattice parameter linearly, similar to the first reduction cycle. The instantaneous expansion coefficient clearly shows an inflection (see Fig. 4D) in the ceria lattice parameter at approximately 50 °C for both the 1 wt% and 5 wt% Pd loadings; this process occurs rapidly, resulting in an instantaneous expansion coefficient maximum of approximately 250 × 10⁻⁶ K⁻¹. This confirms the previously discussed results from the Pd K-edge XAS measurements: the mildly passivated Pd, formed by room-temperature oxidation, is readily reduced and can thus promote the reduction of ceria.
Close inspection of the XRD patterns for the 5 wt% Pd/CeO2 sample reveals a very weak reflection associated with the (111) reflection of metallic Pd at 2θ of approximately 3.65°. The formation of the metallic Pd phase is evidenced in the colour map given in Fig. 5A, showing a slight increase in intensity (on a logarithmic scale) with a change from dark blue to light blue. To extract information on the lattice parameter of the metallic Pd from the (111) reflection, a custom peak-fitting tool was written in Python. At the start of the experiment the Pd was identified as oxidic Pd(II) from the XAS measurements, and therefore no metallic Pd is expected to be present, allowing a baseline correction to be achieved through subtraction of the first dataset corresponding to the initial starting material. Fig. S8 (ESI†) gives the surface colour map of the baseline-corrected data which were used for further analysis. Examples of the quality of the peak fitting are given in Fig. S9 (ESI†). Attempts were also made to perform the same analysis on 1 wt% Pd/CeO2; however, no peak related to the Pd(111) reflection could be clearly detected, as a consequence of the lower Pd content and expected lower crystallinity. The colour map for the 1 wt% Pd/CeO2 sample is given in Fig. S10 (ESI†).
The baseline-corrected data for the 5 wt% Pd/CeO2 sample were fitted sequentially with a single Gaussian peak to model the position and area of the reflection, giving direct insight into the evolution of the Pd lattice parameter and the relative crystallinity throughout the experiment. The results of the Gaussian peak fitting are shown in Fig. 5B for the first and second reduction cycles. On first inspection, the formation of crystalline Pd0 was found to occur above approximately 100 °C, with a rapid increase in the peak area during the initial phase of heating. Given that XRD analysis is only sensitive to the long-range structure, the initial reduction to a disordered Pd phase may be missed, which could explain the temperature inconsistency between XAS, which showed almost full Pd0 at approximately 60 °C, and the XRD analysis, which reveals crystalline Pd0 formation at around 100 °C.
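The custom fitting tool is not reproduced in the paper; a minimal equivalent of the single-Gaussian fit it performs might look as follows (initial-guess values and function names are ours).

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, area, centre, sigma):
    # Area-normalised Gaussian line shape.
    return area / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((x - centre) / sigma) ** 2)

def fit_pd111(two_theta, intensity, p0=(1.0, 3.65, 0.05)):
    """Fit the baseline-corrected Pd(111) reflection with a single Gaussian.

    Returns (area, centre, sigma): the centre tracks the Pd lattice parameter
    through Bragg's law, while the area tracks the relative crystallinity.
    """
    popt, _ = curve_fit(gaussian, np.asarray(two_theta), np.asarray(intensity), p0=p0)
    return popt
```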
With further heating of the sample, the Pd cubic lattice parameter is observed to increase to a maximum of 3.918 Å, in line with the expected lattice parameter when thermal expansion is taken into account. Concurrently, the relative crystallinity, observed from the peak area shown in blue in Fig. 5B, is observed to increase, suggesting a partial sintering of the metallic Pd phase during heating. During cooling the lattice parameter decreases until around 50 °C, before a rapid and substantial increase during further cooling to room temperature. This expansion is due to the formation of a PdHx phase, known to occur when Pd is treated under a hydrogen-containing atmosphere at low temperature. Upon exposure to oxygen at room temperature the PdHx phase is removed and the Pd lattice parameter relaxes to 3.896 Å. In the Pd K-edge XAS spectra shown in Fig. 2, the exposure to oxygen leads to a partial oxidation to Pd(II), and the XRD analysis shows that this process is accompanied by only a very minor decrease in the Pd(111) peak area. This result evidences that the passivation is only a surface oxidation process, with the core remaining metallic. During the second reduction cycle, the lost intensity of the Pd(111) reflection is rapidly recovered, suggesting that the surface PdO species is reduced as soon as hydrogen is reintroduced, in line with the results obtained from the Pd K-edge XAS.
Conclusions
In summary, combined in situ multi-edge XAS and X-ray scattering studies enabled us to determine the effect of palladium loading on a high-surface-area ceria support on the reactivity of both the Pd(II) and Ce(IV) ions. The results clearly indicate that the partial reduction of ceria is promoted by the presence of Pd. The temperature of this promoted ceria reduction is also found to depend inversely on the wt% loading of Pd on the ceria support: a higher loading of Pd leads to a lower reduction temperature for both the Pd and the ceria support. This finding could be attributed to the strong interaction of the metal ions with the support hindering the reduction of the initial PdO to metallic Pd. The promotion of the CeO2 reduction is interpreted as the extraction of oxygen from the ceria lattice and is suggested to be due to a hydrogen spill-over or reverse oxygen spill-over mechanism.
More importantly, the findings obtained from the Pd K-edge XAS and XRD studies clearly demonstrate that passivation in air at room temperature only partially oxidises the supported Pd, while a complete reoxidation of the reduced ceria (CeO1.85) is observed. Based on this ceria lattice parameter change, we conclude that the full stoichiometric fluorite CeO2 phase is restored. Furthermore, the second reduction cycle, followed by XRD, demonstrates that the reduction of ceria, observed indirectly through the lattice parameter, is promoted to near room temperature irrespective of the Pd wt% loading. The Pd K-edge data corroborate this result, showing that the Pd also undergoes near-room-temperature reduction by hydrogen following the mild passivation, irrespective of the Pd wt% loading. These results indicate that, with an initial pretreatment cycle and subsequent mild passivation, the promoted reduction of ceria can be brought down to near room temperature without requiring high Pd wt% loadings, which may have a direct impact on the fields of catalysis where these classes of materials are currently utilised.
Author contributions
AHC and HRM performed the experiments, analysis, interpretation and manuscript drafting. DT and JF provided the samples used in the study. AL and KB optimised the beamlines for the experiments performed at ESRF and APS respectively and provided user support throughout. TIH and GS contributed to the drafting of the manuscript, interpretation of results and supervision of both AHC and HRM.
Conflicts of interest
The authors do not have any conflict of interest to declare. | 7,053.2 | 2022-01-12T00:00:00.000 | [
"Chemistry",
"Physics"
] |
Dynamically stable radiation pressure propulsion of flexible lightsails for interstellar exploration
Meter-scale, submicron-thick lightsail spacecraft, propelled to relativistic velocities via photon pressure using high-power density laser radiation, offer a potentially new route to space exploration within and beyond the solar system, posing substantial challenges for materials science and engineering. We analyze the structural and photonic design of flexible lightsails by developing a mesh-based multiphysics simulator based on linear elastic theory. We observe spin-stabilized flexible lightsail shapes and designs that are immune to shape collapse during acceleration and exhibit beam-riding stability despite deformations caused by photon pressure and thermal expansion. Excitingly, nanophotonic lightsails based on planar silicon nitride membranes patterned with suitable optical metagratings exhibit both mechanically and dynamically stable propulsion along the pump laser axis. These advances suggest that laser-driven acceleration of membrane-like lightsails to the relativistic speeds needed to access interstellar distances is conceptually feasible, and that their fabrication could be achieved by scaling up modern microfabrication technology.
Introduction
The concept of harvesting radiation pressure to propel spacecraft dates to at least some 400 years ago, when Kepler observed that the gas tails of comets point away from the sun as if blown by a solar wind (1).The physics of radiation pressure became known when Maxwell published his theory of electromagnetism in the 19 th century, giving rise to formal development of the concept of solar lightsails by Tsiolkovsky, Tsander, and others in the early 20 th century (2).Efforts to field solar lightsail spacecraft have led to recent successes including the JAXA IKAROS (3), NASA NanoSail-D (4), and the Planetary Society LightSail missions (5).
Whereas sunlight provides a relatively weak force for accelerating spacecraft in Earth's vicinity (~10 μN/m² for a perfect reflector at 1 AU), far greater accelerating forces can be produced if a high-power-density laser is focused onto a lightsail. Simple analysis suggests that laser-propelled lightsails can in principle be accelerated to relativistic velocities, offering a promising pathway for interstellar exploration using ultralight space probes (6-8). Due in part to the announcement of the Breakthrough Starshot Initiative in 2016, which seeks to enable this capability within the next generation (9, 10), recent investigations have explored the viability of laser-driven lightsails as a basis for interstellar spacecraft propulsion (8, 11-13). A major challenge for such lightsails is the need to maximize reflectance while minimizing weight and limiting optical absorption to extremely low values, prompting multilayer or nanophotonic designs (14-18). Given the extreme rates of acceleration and the distances over which this acceleration will occur, such lightsails must be designed to be structurally and dynamically stable, so that they can be propelled along the pump laser beam optical axis (19-33) in a shape-stable configuration. Several designs for rigid- or constrained-body beam-riding lightsails have been proposed, but to date no studies have considered the mechanical and beam-riding stability of meter-scale unsupported flexible membranes for interstellar propulsion. Notably, to achieve the target velocity of ~0.2c, the Starshot mission concept calls for a ~1 g lightsail that is several meters in diameter; the membrane must therefore be on the order of 100 atomic layers thick on average, including all framing or stiffening, so the flexibility of the lightsail must be taken into account in its design.
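A one-line order-of-magnitude check of the figure quoted above, using the nominal solar constant (an assumed input, not a value taken from the paper):

```python
# Radiation pressure on a perfectly reflecting sail at normal incidence, 1 AU.
SOLAR_CONSTANT = 1361.0          # W/m^2, nominal total solar irradiance at 1 AU
C = 2.998e8                      # speed of light, m/s

pressure = 2.0 * SOLAR_CONSTANT / C              # N/m^2
print(f"{pressure * 1e6:.1f} uN/m^2")            # ~9 uN/m^2, consistent with ~10 uN/m^2
```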
Here we consider the selection of materials, the structural and photonic design, and dynamic mechanical stability of flexible lightsail membranes, to investigate whether interstellar lightsail spacecraft can be realized with real materials, considering their finite stiffness and strength.We identify key material properties required for relativistic flexible lightsails, then develop a multiphysics simulation approach to explore the deformation and passively stabilized acceleration of spinning flexible lightsails with either specular scattering concave shapes or flat membranes with embedded metagrating nanophotonic elements.
Materials considerations
The Breakthrough Starshot Initiative (9) has challenged a global community of scientists and engineers to design a ~1-gram interstellar probe that will travel 4.2 light-years to reach Proxima Centauri b, the nearest known habitable-zone exoplanet, within ~20 years of launch, as well as the necessary propulsion, communication, and instrumentation systems for such a mission. To accelerate the spacecraft to the required speed of ~0.2c, a ~10 m² lightsail weighing ~1 g would be propelled by an Earth-based laser at incident power densities approaching ~10 GW/m², experiencing ~10,000 g of acceleration for ~1000 seconds (8, 10). A lightsail suitable for this mission must address immense engineering obstacles that will challenge the limits of materials science and engineering. One such challenge is that the lightsail must have reasonably high optical reflectance to produce thrust from the accelerating beam, yet must exhibit near-zero optical absorption (~1 ppm or less) and high thermal emissivity to prevent overheating. Recent studies have identified a handful of dielectric and semiconductor materials as potentially viable candidates (11-13), and nanophotonic designs made from these materials (or their combinations) have been reported to offer favorable reflectance, low absorption, and high emissivity (14-17, 34). In addition to achieving suitable optical properties over a wide temperature range, lightsail materials and designs must offer adequate mechanical strength and stiffness to endure the acceleration conditions necessary for interstellar propulsion. Table 1 shows key room-temperature mechanical properties and structural performance metrics for several of the candidate lightsail materials identified by previous studies, as well as for two common solar sail materials: aluminum and polyimide. More detailed properties and references are provided in Table S1.
Among the candidates are several bulk crystalline dielectrics and semiconductors, including Si, quartz (SiO2), and diamond, which are hard, brittle, and have among the highest moduli and theoretical strengths of known bulk materials. Despite this, such materials are rarely used in bulk structural applications and are notorious for brittle failure in tension due to cracks initiated at surface defects. In practice, attainable specimen strength is limited almost entirely by the ability to fabricate device structures with defect-free surfaces. Although many decades of materials science and engineering have enabled each of these materials to achieve remarkable degrees of purity and scale of manufacture, present-day technology has yet to produce pure, defect-free, submicron-thick membranes over 10 m² areas. Two-dimensional crystals represent another class of candidate materials, among which MoS2 appears particularly promising for lightsail applications owing to its high strength and refractive index (15). The reported tensile strength of micron-scale suspended membranes of mono- or bi-layer MoS2 is nearly three times higher than that of any other material listed in Table S1 (35). Understanding the achievable strength and optical transparency of MoS2 films fabricated over large or nonplanar surfaces, at relevant layer thicknesses, and at elevated temperatures, is of considerable interest.
Another interesting class of materials for lightsail development is that of amorphous or nanocrystalline deposited thin films, including silicon nitride. Such thin-film materials are widely used in modern MEMS. Promisingly, sub-micron-thickness silicon nitride membranes have been fabricated at wafer scale and further patterned with photonic crystal designs for near-unity reflectance (36, 37). Ultralow extinction coefficients on the order of 10⁻⁶ at near-infrared wavelengths can be achieved with high-stress stoichiometric silicon nitride (Si3N4), which is commonly employed in MEMS and cavity optomechanics applications (38, 39). With favorable mechanical properties including high modulus and tensile strength, and potential research synergies between the fields of cavity optomechanics and optical levitation, Si3N4 is a particularly promising candidate material for lightsail development.
Ultimately, considerable effort will be required to develop any suitable materials system(s) to the scale of manufacture required for the interstellar lightsails proposed by the Starshot initiative, and careful consideration must be paid to the resulting mechanical and optical properties of the lightsail materials over a wide range of operating temperatures.
Stability considerations
In addition to possessing adequate optical and mechanical properties to endure the forces and optical intensities of the propulsion laser beam, the overall lightsail design must provide for adequate stability during acceleration.
Our work addresses two key aspects of stability: beam-riding stability, the ability of the lightsail to follow along the beam axis without external guidance, and structural stability, the ability of the lightsail to survive the acceleration sequence without collapse, disruptive deformation of its shape, or tensile failure of its constituent materials. These challenges and potential solutions are depicted schematically in Figure 1.
It is tempting to assume that the lightsail should be propelled by a beam of uniform laser intensity, to minimize thermal gradients and force nonuniformities that could distort the lightsail shape. This is the operating regime for solar sails, which navigate via active local control of solar reflectance or other attitude control mechanisms (1,2). However, uniform plane-wave illumination is impractical for laser-propelled interstellar lightsails, as it would require a laser source of inconceivable power and aperture area to overcome diffraction of the beam over the extreme distance of acceleration. Assuming the propulsion laser would be constructed no larger than necessary to achieve the target mission velocity, the system must operate at or near the diffraction limit during the final phase of acceleration. We therefore restrict our study of beam-riding stability to static, weakly focused, low-order Gaussian beam intensity profiles. Other beam profiles such as higher-order Gaussian beams or doughnut beams (21,25) may be useful at earlier stages of acceleration when the propulsion system is not limited by diffraction.
Passive beam-riding stability is necessary for relativistic lightsail acceleration, because it is not feasible to provide closed-loop propulsive corrections by modulating or adjusting the propulsion beam in response to observations of the lightsail, owing to the large acceleration distances and final lightsail velocity. Limited by the speed of light, the round-trip delay between lightsail observation and the arrival of corrective modulation from the laser source in an active feedback loop would range up to several minutes at the end of the acceleration phase, whereas non-beam-riding lightsails can veer off course on a timescale of milliseconds. Additionally, atmospheric turbulence and practical technological limitations will cause at least some perturbation to the desired position and profile of the beam (40). Thus, although some initial prescriptive corrective actions may be feasible from the laser source, the lightsail itself must ultimately be capable of aligning its acceleration trajectory to the beam axis without ground-based intervention, based solely on the local beam gradient. The challenge of steering the spacecraft then becomes primarily that of correctly pointing and slewing the direction of the ground-based laser source during acceleration.
A simple lightsail structure such as a flat specularly reflective disk is dynamically unstable and will eventually tilt and veer away from the beam. Several approaches to achieving beam-riding stability are depicted in Fig. 1A. Certain geometrically concave reflector shapes, including cones (21-23), hyperboloids (19), paraboloids, and other parametric shapes (32), have been predicted to offer stable beam-riding behavior, while other normally unstable convex shapes such as spheres can follow a stable trajectory by using more complex higher-order beam profiles (21). In addition to shaped specular lightsails, non-specular surfaces can be employed to produce restoring forces and torques, even for flat lightsails, by tailoring asymmetric optical properties to effect transverse forces (18,24-28,33). Non-specular surfaces have been developed for solar lightsails to achieve greater maneuverability, including enhanced lateral and rotational forces (41).
Our present study addresses only marginal (undamped) beam-riding stability, in which the lightsail exhibits bounded, oscillatory displacement and tilting about the beam axis in response to a finite beam-lightsail misalignment during acceleration. Continuous perturbations to the beam-lightsail alignment during propulsion, e.g., due to atmospheric turbulence, can cause the oscillatory motion of the lightsail spacecraft to grow in magnitude, which could eventually cause marginally stable lightsails to escape the beam. Furthermore, for nonrigid structures, and flexible membranes in particular (42), the energy buildup in acoustic modes (shape distortions) could also destabilize or overstress the lightsail. Therefore, interstellar lightsails will likely require either active or passive means of damping their beam-riding oscillations and shape vibrations to achieve asymptotically stable propulsion along the desired cruise trajectory. Passive damping approaches might include the use of structures with damped internal degrees of freedom (31), employing nonlinear optical materials (30), or utilizing materials with highly varying temperature-dependent optical properties to enable hysteresis of the restoring forces. Active optical control surfaces for improved beam-riding stability have been demonstrated for solar lightsails (43), but developing such control surfaces to operate under the extreme beam intensities and low mass budget proposed for interstellar lightsail propulsion remains an unsolved challenge.
Turning our attention to structural stability, the interstellar lightsail must be capable of surviving the acceleration forces without collapsing upon itself or experiencing mechanical failure. This is a substantial challenge for the Starshot concept, which calls for meter-scale lightsail membranes of average thickness below 100 nm. Table 1 shows the allowable average thickness for each membrane type. This is not intended to suggest that lightsails should be constructed from uniformly thick continuous membranes, or to impose an upper limit for structural thickness. Optimized lightsail designs will likely incorporate multiple materials (16) and complex spatial patterning, e.g., perforations (15,17,30,37) or optical resonators (18,24,25,33,34), so as to maximize reflectance, emissivity, and tensile strength. However, with such a limited mass budget, the finite strength and structural rigidity of the lightsail must be considered.
A prior study addressed tensile strength requirements by treating the lightsail as a rigid parametrically shaped shell, finding that certain surface curvature ranges minimize stress (44). Another recent study presented 2D analytic and finite-element models of deformation instabilities in uniformly illuminated lightsail membranes (45). In the absence of external constraints, the behavior of unsupported or loosely supported flexible membranes subject to nonuniform forces is considerably complex (42).
In general, thin unsupported membranes will collapse and crumple upon themselves when subject to focused laser propulsion, as depicted in Fig. 1B. A curved surface offers greater structural rigidity than a flat membrane, while also conferring the benefits of improved stress distribution that make thin curved shells useful in structural applications. However, open concave shapes such as cones and paraboloids are still prone to collapsing by elongation, an intrinsic instability for such shapes. Structural reinforcement such as framing could be added, but only at the cost of reducing the membrane mass. Potential approaches for structural reinforcement include microlattices (46), gas-filled envelopes (47,48), annular tensioning, fractal supports (49), tensegrity structures (50), or lamination with low-density or corrugated backing layer(s). Ultimately, given mass and material constraints, even a structurally rigidified lightsail will likely deform during acceleration, potentially changing the distribution of stress within the membrane or altering its beam-riding properties. An additional challenge for any structural materials is that the proposed lightsail membranes are generally partially transparent. Thus, even if placed behind the lightsail surface, the frame or backing materials may still be exposed to a high laser intensity, limiting materials selection.
As an alternative to structural support, spin-stabilization may be employed to prevent shape collapse. This effectively rigidifies the lightsail via inertial tensioning, and also gyroscopically stabilizes the lightsail to resist tilting, all while avoiding the added mass and complexity of structural reinforcement. For this reason, our work to date has focused on spin-stabilized lightsails. However, spin-stabilization greatly complicates the dynamics of the lightsail, particularly for flexible membranes which are prone to complex instabilities (42), and is not necessarily effective for all structures under all conditions. Perhaps most counterintuitively, gyroscopic effects can disrupt the beam-riding behavior of certain lightsail designs that would be dynamically stable under non-spinning (rigid-body) conditions, particularly in the case of angular misalignment between the beam axis and the spin axis (21,22). Thus, the use of spin-stabilization to prevent shape collapse in ultrathin flexible lightsails can be a challenging design objective.
To provide first-order insights into the general viability of constructing large-area structurally stable lightsails from the candidate materials, we have defined two figures of merit in Table 1. The first is the stationary burst diameter (Dmax), which is the maximal diameter at which a flat circular membrane of areal density 0.1 g/m², rigidly clamped at its perimeter, can sustain a pressure of 67 Pa applied to one side without rupturing (51). This is the effective photon pressure of 10 GW/m² illumination, assuming unity reflectance. Practical lightsail designs may have lower reflectance; may incorporate multiple materials or inhomogeneous patterning, owing to the need to optimize tradeoffs between reflectance, thermal properties, and strength; and would need to operate at substantially elevated temperatures. Dmax is thus only intended to serve as an order-of-magnitude indicator of the viability of large-area perimeter-supported membranes for this application. The assumed 'stationary' perimeter constraint provides an overestimate of the required membrane tensile strength, since any viable perimeter structure would not be stationary, but instead must have an extremely small mass that would accelerate along with the lightsail. But interestingly, even this simplified calculation suggests that while conventional solar lightsail materials such as aluminum and polyimide (Kapton) are far too weak to span meter-scale areas between structural supports, some candidate membrane materials (Si3N4 and MoS2) are in principle strong enough to span 10 m² areas (Dmax > 3.6 m) with perimeter support only, even in the stationary case. This is an encouraging conclusion for the development of structurally supported lightsails.
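The loading assumptions behind Dmax can be checked with a short back-of-envelope script: the 67 Pa figure follows from the radiation pressure of a perfectly reflecting surface, p = 2I/c, and the fixed 0.1 g/m² areal density sets the membrane thickness for each material. The bulk densities below are typical literature values assumed for illustration, not entries from Table 1.

```python
# Back-of-envelope check of the loading assumptions behind Dmax.
# Bulk densities are illustrative assumptions, not values from Table 1.
c = 2.998e8              # speed of light, m/s
I = 10e9                 # illumination intensity, W/m^2
pressure = 2 * I / c     # radiation pressure at unity reflectance, Pa
print(f"photon pressure ~ {pressure:.0f} Pa")          # ~67 Pa

areal_density = 1e-4     # 0.1 g/m^2 expressed in kg/m^2
densities = {"Si": 2330.0, "Si3N4": 3100.0, "SiO2": 2200.0}   # kg/m^3 (assumed)
for name, rho in densities.items():
    t = areal_density / rho
    print(f"{name}: membrane thickness ~ {t * 1e9:.0f} nm at 0.1 g/m^2")
```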
The second figure of merit in Table 1 is the maximum spin speed (fmax) at which a flat, 10 m² circular membrane could be spun without rupturing due to tensile failure. This is relevant because, for the designs considered here, relatively high spin speeds are required to produce both shape stability and beam-riding stability, often approaching the materials' tensile limits. The viability of spin stabilization depends on the spin speed, the acceleration conditions, and the specific design of the lightsail. For this reason, we have developed multiphysics numerical simulation methods to investigate the dynamic stability of flexible lightsails.
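As a rough illustration of how fmax scales, one can use the classical result for a rotating solid disk, whose peak stress at the center is σ_max = (3+ν)ρω²R²/8; equating this to the tensile limit gives a burst spin speed. The strength, density, and Poisson ratio below are placeholder values chosen to be roughly Si3N4-like, not the entries used for Table 1.

```python
import math

# Illustrative burst spin speed of a flat 10 m^2 circular membrane using the
# rotating solid-disk estimate sigma_max = (3 + nu) / 8 * rho * omega^2 * R^2.
# Material values are assumed placeholders, not Table 1 entries.
area = 10.0                                 # m^2
R = math.sqrt(area / math.pi)               # radius, ~1.78 m
rho, nu, sigma_ult = 3100.0, 0.27, 6.4e9    # roughly Si3N4-like values (assumed)

omega_max = math.sqrt(8 * sigma_ult / ((3 + nu) * rho * R**2))
print(f"f_max ~ {omega_max / (2 * math.pi):.0f} Hz")
```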
Mesh-based simulator for flexible lightsails
To model realistic flexible lightsail membranes of various shapes and optical designs, a triangular surface mesh is constructed (Fig. 2A). Each vertex is assigned a mass based on the local membrane thickness, the area of the adjoining triangles, and the material density. Elastic behavior of the membrane is captured by the edges, each of which is assigned a linear elastic coefficient (i.e., spring constant) based on the mesh geometry and local material properties. This approach omits the negligible bending stiffness and the specific shear modulus of the material, but provides reasonable first-order insights into the behavior of ultrathin membranes under tensile loading, which is the predominant type of loading in lightsail applications. In future efforts, non-isotropic material properties and the full elastic behavior of the lightsail material(s) could be considered.
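A minimal sketch of this bookkeeping is shown below, assuming a simple mass-lumping scheme (one third of each triangle's mass per vertex) and a crude axial spring stiffness per edge; the exact lumping and stiffness expressions used in the published simulator may differ.

```python
import numpy as np

# Sketch of assigning nodal masses and edge spring constants on a triangular mesh.
# The 1/3 mass lumping and the E*t/3 edge stiffness are illustrative assumptions.
def lump_mesh_properties(verts, tris, thickness, density, modulus):
    mass = np.zeros(len(verts))
    stiffness = {}                                   # edge (i, j) -> spring constant, N/m
    for tri in tris:
        p = verts[tri]
        area = 0.5 * np.linalg.norm(np.cross(p[1] - p[0], p[2] - p[0]))
        for v in tri:                                # lump 1/3 of the triangle mass per vertex
            mass[v] += density * thickness * area / 3.0
        for a, b in ((0, 1), (1, 2), (2, 0)):
            i, j = sorted((int(tri[a]), int(tri[b])))
            # crude axial stiffness of a membrane strip of width ~L/3 and length L
            stiffness[(i, j)] = stiffness.get((i, j), 0.0) + modulus * thickness / 3.0
        # stiffness of shared edges accumulates from both adjoining triangles
    return mass, stiffness

verts = np.array([[0.0, 0.0, 0.0], [1e-2, 0.0, 0.0], [0.0, 1e-2, 0.0]])
mass, k = lump_mesh_properties(verts, [np.array([0, 1, 2])], 43e-9, 2330.0, 169e9)
```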
Light-matter interactions are evaluated over each enclosed triangular mesh element, where incident light produces photon pressure forces, optical absorption heats the lightsail, and thermal radiation cools the lightsail (Fig. 2B). The heating, cooling, and optical forces calculated at each triangular element are distributed to the adjoining nodes, which represent the temperature distribution, momentum, and shape of the structure. Thermal conduction is calculated along the mesh edges based on the local material properties and mesh geometry, whereas temperature is calculated at each node based on its mass and the specific heat of the material. As the temperature distribution is known throughout the structure, we also include the effects of linear thermal expansion, which contributes to thermal strain.
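The thermal part of one update step might look like the sketch below, with absorbed power added at each node, two-sided grey-body radiation removed, and conduction exchanged along the mesh edges; the conductance model and two-sided radiation assumption are ours, for illustration only, and not necessarily the simulator's exact discretization.

```python
import numpy as np

SIGMA_SB = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

# Illustrative thermal update for the mesh model: absorption heats nodes, both faces
# radiate, and heat conducts along edges.  The discretization details are assumed.
def thermal_step(T, node_mass, c_p, absorbed, emissivity, node_area, edges, conductance, dt):
    Q = absorbed.copy()                                      # W deposited at each node
    Q -= 2.0 * emissivity * SIGMA_SB * node_area * T**4      # radiate from both faces
    for (i, j), g in zip(edges, conductance):                # conduction along mesh edges
        q = g * (T[j] - T[i])
        Q[i] += q
        Q[j] -= q
    return T + dt * Q / (node_mass * c_p)
```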
In the simplest type of optical interaction, the force of photon pressure acting on a triangular element is governed by the effect of specular reflection from the surface (Fig. 2C), with the resulting force occurring normal to the surface. The photon pressure is calculated based on the local beam intensity, the relative polarization, the incidence angle to the surface, and the local membrane properties. Future efforts could also consider the optical effects of local temperature, strain, or the time-varying state of active control surfaces, as well as beam profiles that vary in time or distance from the source. Our present work has studied only the first seconds (up to 10 s) of acceleration following an initial beam-lightsail misalignment, which is adequate for observing marginally stable behavior over many periods of oscillation, determining steady-state temperature distributions, and identifying many types of instabilities. The present model does not address relativistic effects necessary to model the full acceleration duration to interstellar mission velocities.
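For a perfectly specular element, the normal force follows from momentum conservation: a tilted triangle intercepts power I A cosθ and the reflected beam reverses the normal momentum component, giving |F| = 2 R I A cos²θ / c along the surface normal. The sketch below encodes only this reflected-light term; the small contribution of absorbed light directed along the beam is omitted.

```python
import numpy as np

# Specular radiation-pressure force on one mesh triangle: |F| = 2 R I A cos^2(theta) / c
# directed along the surface normal.  Absorption and transmission terms are omitted.
def specular_force(normal, beam_dir, intensity, area, reflectance, c=2.998e8):
    n_hat = normal / np.linalg.norm(normal)
    cos_th = abs(np.dot(beam_dir, n_hat))            # cosine of the incidence angle
    return (2.0 * reflectance * intensity * area / c) * cos_th**2 * n_hat

F = specular_force(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]), 4e9, 1e-4, 0.45)
```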
In the next section, we first assume constant values for the reflectance, transmittance, and, correspondingly, absorptance to model the basic behavior of curved and flat specular lightsails. Then, we will introduce improvements to the optical calculations, including angle-dependent reflectance and absorption based on Fresnel coefficients, and considering the effects of multiple reflections of light within concave curved lightsails. Finally, we present simulations of non-specularly reflecting surfaces such as diffractive metagratings (Fig. 2D), which allow flat lightsails to achieve beam-riding stability. With future work, this basic simulation approach could be adapted to study lightsails made from optical metasurfaces (Fig. 2E) with a wide range of optical behaviors.
To simulate acceleration of the lightsail, and to assess its apparent stability, we implement a finite-difference time-domain approach wherein we calculate the forces and heat flow acting at each mesh vertex, then evaluate the resulting changes in position, velocity, and temperature over a time step Δt. With sufficiently small Δt, we can simulate the propagation of membrane vibrational modes, and can obtain reasonable predictions of the lightsail dynamic behavior during the initial acceleration phase. Thermal and mechanical membrane failures can be detected when a nodal temperature exceeds a threshold value or when the strain in an edge exceeds the tensile limit of the material.
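The structure of this explicit scheme can be illustrated with a deliberately reduced, runnable example: a one-dimensional chain of masses and linear springs integrated with a symplectic Euler update and a simple strain-based failure check. All numbers are arbitrary demonstration values, and the example is a sketch of the scheme rather than a substitute for the full membrane simulator.

```python
import numpy as np

# Reduced 1-D illustration of the explicit time-stepping scheme with a strain check.
n = 5
pos = np.linspace(0.0, 1.0, n)          # node positions, m
vel = np.zeros(n)
mass = np.full(n, 1e-6)                 # kg
k, L0 = 50.0, 0.25                      # spring constant (N/m), rest length (m)
strain_limit = 0.02
dt, t_end, t = 1e-5, 0.02, 0.0
pull = 1e-4                             # constant external force on the last node, N

while t < t_end:
    strain = (np.diff(pos) - L0) / L0
    tension = k * (np.diff(pos) - L0)
    F = np.zeros(n)
    F[:-1] += tension                   # each segment pulls its left node forward
    F[1:] -= tension                    # and its right node backward
    F[-1] += pull
    vel += dt * F / mass
    vel[0] = 0.0                        # clamp the first node
    pos += dt * vel
    if np.any(np.abs(strain) > strain_limit):
        print(f"tensile failure detected at t = {t:.4f} s")
        break
    t += dt

print(f"final tip displacement: {pos[-1] - 1.0:.2e} m")
```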
Dynamics of flexible curved lightsails
Fig. 3 depicts the simulated behavior of flat versus curved (paraboloid) lightsails and the effects of spin stabilization, using optical and mechanical properties roughly corresponding to a 43 nm thick Si membrane (0.1 g/m²) whose properties are parameterized at room temperature. We first consider a 1-meter diameter flat lightsail, illuminated by a λ = 1550 nm Gaussian beam profile, with 4 GW/m² peak intensity and a 0.5-meter beam waist, offset by 80 mm from the center of the lightsail. This lightsail size was chosen as a compromise between computational cost and the desire to simulate large macroscopic structures with reasonable mesh accuracy. Because these lightsails are smaller than the 10 m² area proposed for Starshot, they can be spun faster than the values shown in Table 1; the unloaded maximum spin speed for the flat membrane in this case is ~470 Hz. Without spin stabilization, the flat lightsail membrane is structurally unstable and collapses upon itself as expected. Spin stabilization (fspin = 135 Hz) prevents the lightsail from collapsing, but lacking any means for beam-riding stability, the lightsail quickly veers away from the beam axis. A paraboloid shape can offer beam-riding stability according to rigid-body calculations, but in the flexible mesh simulation, the membrane quickly becomes elongated and collapses upon itself. With inadequate spin stabilization (fspin = 90 Hz), the shape collapse is delayed but not prevented, in this case leading to tensile failure. With adequate spin stabilization (fspin = 135 Hz), the shape remains stable, and beam-riding stability is achieved throughout the 1 s duration of the simulation. Animations of all five cases are available in Supplementary Video 1.
To facilitate comparison, all lightsails in Fig. 3 have the same surface area and thus the same total mass. As a result, the paraboloid lightsails are smaller in diameter, and thus accelerate more slowly than the flat lightsails due to their smaller aperture area. Therefore, a drawback of deeply curved shapes is that they tend to be heavier than flat lightsails of the same aperture area and thickness. Also, the sloped peripheral surfaces of the paraboloids do not propel the lightsail along the z direction as efficiently, since some of the photon pressure is directed radially. Light reflected from these edge areas might in fact impinge somewhere on the opposite side of the lightsail, thus imparting additional photon pressure there, potentially affecting the acceleration and stability of the lightsail. We thus improved our simulation code by considering multiple reflections within the lightsail using a simplified raytracing approach, and by calculating reflectance and absorption based on Fresnel coefficients, thus better modelling the angle dependence of light interaction (Fig. 4). Animations of these and other raytracing-based simulations are shown in Supplementary Video 2. Fig. 4A compares the shape and temperature behavior of a 1-meter diameter spin-stabilized paraboloid lightsail representing a 43-nm thick Si membrane, with and without the effects of multiple reflections within the lightsail, with simulation conditions being otherwise the same as for the stabilized paraboloid shown in Fig. 3. Due to the modest reflectivity of silicon (0.45 for λ = 1550 nm at normal incidence), the effects of reflected light can substantially disrupt the lightsail stability. Considering only the effects of the incident light beam, the lightsail trajectory appears stable, similar to that shown in Fig. 3; but upon introducing the effects of secondary reflections, the lightsail shape and trajectory become unstable. While the secondary reflections do increase the total photon pressure on the lightsail, resulting in faster acceleration, reflected light striking the opposite side of the lightsail counteracts the restoring forces and torques produced by the first reflection, thus destabilizing the lightsail. Also evident from the temperature profiles is the localized heating caused by the focusing of reflected light, with the peak temperature increasing from ~700 K to ~1000 K.
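The quoted normal-incidence reflectance of ~0.45 is consistent with thin-film interference in a free-standing 43 nm Si membrane; a quick characteristic-matrix estimate, assuming a refractive index of about 3.48 for Si at 1550 nm, reproduces it.

```python
import numpy as np

# Characteristic-matrix reflectance of a free-standing film in vacuum at normal incidence.
# n ~ 3.48 for Si at 1550 nm and the 43 nm thickness are assumed values.
def film_reflectance(n, d, wavelength):
    delta = 2 * np.pi * n * d / wavelength
    m00 = m11 = np.cos(delta)
    m01 = 1j * np.sin(delta) / n
    m10 = 1j * n * np.sin(delta)
    r = (m00 + m01 - m10 - m11) / (m00 + m01 + m10 + m11)   # vacuum on both sides
    return abs(r) ** 2

print(f"R ~ {film_reflectance(3.48, 43e-9, 1550e-9):.2f}")   # ~0.45
```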
Increased temperatures are problematic for lightsails because materials generally weaken or decompose at elevated temperatures. An upper temperature limit may be imposed by material sublimation or decomposition, as even small amounts of material loss could substantially weaken or alter such thin lightsails (15). If we limit the mass loss to 1% for a 1 g, 10 m² lightsail over 1000 s, literature values predict a limiting temperature of ~1300 K for crystalline Si (52), suggesting that the projected temperatures above are acceptable. However, for semiconductor materials, free-carrier absorption increases dramatically with temperature as the bandgap narrows, which may lead to a thermal runaway situation at a much lower threshold temperature. Furthermore, two-photon absorption may trigger thermal runaway above certain laser intensities, regardless of initial temperature. A recent analysis of an optimized Si-based nanophotonic lightsail estimated the threshold temperature for thermal runaway to be only 400-500 K, and placed an upper limit on beam intensity at ~5 GW/m² (53). Thus, our simulations predict unsurvivable temperatures for Si membranes, and unsurvivable light intensities in the regions of focused secondary reflections. Nonetheless, we can conclude that spin-stabilization can prevent shape collapse of flexible curved lightsails. While multiple reflections within deeply curved lightsails can increase the acceleration rate, they also increase the risk of localized hotspots from focused light and can also reduce or disrupt beam-riding stability. However, this only affects curved shapes which are deep enough to encounter multiple reflections over the range of tilt angles and shape deformations experienced during acceleration. Another challenge is that curved lightsail shapes would likely be more difficult to fabricate at the meter scale, and for crystalline materials, would introduce weaknesses at joints, grain boundaries, or wherever weaker crystal planes are exposed. We are thus motivated to investigate flat membranes as an alternative to curved shapes, owing to likely easier fabrication and scale-up, and to the lack of internal secondary reflections within the lightsail. However, we note that not all curved shapes are destabilized by internal reflections, and that shallower spin-stabilized curved shapes can achieve stability without encountering conditions that produce secondary internal reflections (21) (Supplementary Video 2).
Since Si exhibits thermal runaway at relatively low threshold temperatures and beam intensities (53), we turned our attention to a different material for the flat lightsails. Even if radiative cooling of Si-based lightsails could be improved, using any material with such a low runaway threshold temperature appears problematic, since any local defect, contamination, or brief localized focusing of light exceeding the two-photon absorption threshold could initiate catastrophic thermal runaway spreading across the entire lightsail. Si3N4 is used extensively in other high-temperature applications, and its larger optical bandgap (~5 eV) and lower free-carrier absorption are attractive. Furthermore, amorphous Si3N4 films of excellent optical quality can be deposited using LPCVD, suggesting an easier route for fabrication over large or complex surfaces (36,37). A drawback to Si3N4 is its relatively low refractive index (n ~ 2), resulting in lower reflectance and less efficient diffraction.
It is difficult to estimate the practical limiting temperature for Si3N4 lightsails based on its properties reported in the literature, owing to the diversity of its applications, the varying stoichiometry, density, and stress produced by chemical vapor deposition methods, and the relative complexity of the N-Si system at high temperatures. As an upper limit, we estimate the temperature at which vacuum decomposition would occur (again choosing a threshold of 1% decomposition over 1000 s) to be ~1600 K, based on decomposition rates for crystalline powders of Si3N4 (54). Practical thermal limits would likely be much lower, as the decomposition evolves nitrogen, leaving elemental silicon at the material surface, which could dramatically increase optical absorption and lead to thermal runaway. Other high-temperature risks include weakening, changes to stress distribution, activation of traps or defects, or crystallization of the material. Further experimental measurements are necessary to accurately determine the limiting temperatures and power densities for Si3N4 lightsails.
Optical design for passive stabilization of flat lightsails
Passive stabilization of lightsail dynamics requires the presence of restoring forces and torques. The previously discussed concave curved shapes achieve this via their shape alone, but flat specular lightsails cannot achieve beam-riding stability because specular reflection only produces forces normal to the surface. One approach to obtain beam-riding designs for flat lightsails is to make use of engineered optical anisotropy. In diffractive gratings with symmetric unit cells, such optical anisotropy can be achieved with nematic liquid crystals (55). Alternatively, optical anisotropy can be created by designing asymmetric diffractive metagratings, e.g., with the unit cells comprising two resonators of dissimilar widths (24,33). In such structures, anisotropic scattering of incident light into the grating diffraction orders manifests in optical forces transverse to the membrane. Moreover, optical metasurfaces comprising subwavelength scatterers in the form of disks (18), blocks (25), or spheres (29) can be used to shape the wavefronts of scattered light, redirecting incident photon momentum in anomalous ways to produce beam-riding stability.
We describe stable designs for flat lightsails based on asymmetric diffractive metagratings, patterned from Si3N4 as shown in Fig. 5. A specifically designed pair of mirror-symmetrically arranged metagratings can passively stabilize translations and rotations along one axis (24,33). Consequently, we employ two distinct and perpendicularly arranged metagrating designs to enable stabilization of translations along both x and y, and rotations θ about yBF (pitch) and ϕ about xBF (roll). As shown in Fig. 5A, a circular lightsail is partitioned into four sectors, forming two orthogonal pairs of symmetrically opposed wedges. We assume a linearly polarized incident beam, with its electric field aligned with the body-frame y-axis yBF. Thus the blue sectors (1/6 of the lightsail area each) experience transverse-electric (TE) polarization, the brown sectors (1/3 of the lightsail area each) experience transverse-magnetic (TM) polarization, and the specific asymmetric metagratings for each sector (Fig. 5B) provide stabilizing forces and torques for their respective design planes and polarization. For spin-stabilized lightsails, we assume that the beam polarization rotates synchronously with the spinning lightsail. Electromagnetic simulations were performed to determine the optical response of the metagrating unit cells.
For a laser propulsion wavelength of λ = 1064 nm, we identified self-stabilizing metagrating designs using linearized stability analysis. While non-spinning designs are marginally stable if the eigenvalues of the Jacobian matrix derived from the lightsail equations of motion are purely imaginary, for spinning lightsails as linear time-periodic systems, we must employ Floquet theory to assess stability of the designs (56,57). Specifically, our chosen unit cell designs for lightsails spinning at 120 Hz produce eigenvalues whose absolute values are equal to 1, i.e., |λi| = 1, which is a sufficient and necessary condition for marginal stability. We study the initial acceleration of our marginally stable lightsail designs, subject to an initial alignment error, which allows us to verify the beam-riding stability predicted by linearized rigid-body Floquet analysis, and importantly, to investigate whether these spinning sails retain their beam-riding stability when the assumption of rigidity is removed.
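The Floquet test itself amounts to integrating the linearized, time-periodic equations of motion over one spin period to obtain the monodromy matrix and checking that all of its eigenvalues lie on or inside the unit circle. The toy system below (a Mathieu-type oscillator, not the lightsail Jacobian) shows only the mechanics of that check.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy Floquet/monodromy check: integrate the fundamental matrix of a time-periodic
# linear system over one period, then inspect the magnitudes of its eigenvalues.
# The 2x2 A(t) below is a stand-in Mathieu-type oscillator, not the lightsail model.
f_spin = 120.0
T = 1.0 / f_spin

def A(t):
    w0 = 2 * np.pi * 50.0
    return np.array([[0.0, 1.0],
                     [-w0**2 * (1.0 + 0.1 * np.cos(2 * np.pi * t / T)), 0.0]])

def rhs(t, phi_flat):
    return (A(t) @ phi_flat.reshape(2, 2)).ravel()

sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-9, atol=1e-12)
monodromy = sol.y[:, -1].reshape(2, 2)
mags = np.abs(np.linalg.eigvals(monodromy))
print("Floquet multiplier magnitudes:", mags)
print("marginally stable" if np.all(mags <= 1 + 1e-6) else "unstable")
```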
The two metagrating designs each support m = ±1 diffraction orders in addition to the specular order in reflection and transmission. Asymmetry in the intensities of the diffracted orders provides the mechanism for lateral restoring forces, while asymmetry in the angular dependence of optical thrust provides the mechanism for restoring torques. Assuming a Gaussian beam with a width equal to 40% of the lightsail diameter, i.e., w = 0.4D, we calculated the normalized optical forces and torques induced on a rigid lightsail of the proposed design, over a range of incidence angles (θ, ϕ) and translational offsets (x, y). These induced forces do not depend on acceleration distance z and yaw tilt ψ because we neglect beam divergence and assume synchronous rotation of the polarization. Stabilizing behavior is evident from the negative slopes of Fx and Fy versus x and y, respectively, with zero crossings (equilibrium positions, indicated by gray isolines) present near the beam center (x, y = 0) over the full ±10° range of plotted tilt angles θ and over a ~±5° range of roll angles ϕ, respectively (Fig. 5C, 5D). The relative insensitivity of the lateral equilibrium position to tilt angle appears to be beneficial for improving stability in the spinning case.
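The origin of the lateral restoring force can be illustrated with a one-line momentum budget: at normal incidence, light scattered into order m leaves with a transverse momentum fraction sinθ_m = mλ/d, so unequal +1/-1 efficiencies produce a net transverse recoil on the sail. The efficiencies below are made-up illustrative numbers, not the simulated values for the designs in Fig. 5.

```python
# Net transverse recoil from asymmetric diffraction at normal incidence.
# Scattering efficiencies are made-up illustrative numbers.
c = 2.998e8
wavelength, period = 1064e-9, 1350e-9          # m (period comparable to the TM design)
I0, area = 1e9, 1.0                            # W/m^2 and illuminated area in m^2

orders = {-1: 0.10, 0: 0.55, +1: 0.25}         # assumed efficiencies of scattered orders
Fx = 0.0
for m, eta in orders.items():
    sin_th = m * wavelength / period
    if abs(sin_th) <= 1.0:                     # evanescent orders carry no transverse momentum
        Fx += -(I0 * area / c) * eta * sin_th
print(f"net transverse force ~ {Fx:.2f} N per m^2 of illuminated grating")
```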
Restoring torques limit angular rotation relative to the optical axis, although the situation is less straightforward for the spinning case. Beam-center optical torques about x and y are shown in Fig. 5E, exhibiting stabilizing polarity and derivative over a ±6.5° range of pitch and roll. While the TE metagrating provides a larger torque about y, the TM metagrating yields slightly stronger optical forces along y. We note that τx(ϕ) is markedly nonlinear beyond ~±1.5°, which restricts conclusions drawn from linear stability analysis to this angular range. Rotations beyond ±1.5° will give rise to nonlinear dynamics, resulting in possible coupling to and between distinct frequency components. Our time-domain numerical simulations allow this behavior to be studied by considering the full angle-resolved optical response of the metagratings.
Dynamics of metagrating-based lightsail
To verify our predictions about the dynamical stability of rigid lightsails patterned with the composite metagrating design reported here, we numerically solved the equations of motion. The dynamics of flexible lightsails with the same metagrating motif were also simulated using our mesh-based modeling approach. The lightsail diameter is D = 1 m, for which the chosen composite metagrating design yields a total mass of m = 0.867 g. A Gaussian propulsion beam with a peak intensity of I0 = 1 GW/m² and a width of 0.4D = 40 cm was assumed.
We present here an exemplary case of passive stabilization of a flexible metagrating lightsail, in which an initial translational offset of x = y = 5 cm in the lightsail position relative to the beam optical axis and an initial (pitch and roll) tilt of θ = ϕ = −2° was assumed (Fig. 6). In the Supplementary Information, we also present results for passive stabilization of a flexible metagrating lightsail that is only initially displaced (Fig. S3), but not tilted, relative to the beam optical axis. Snapshots of the flexible lightsail position, orientation and shape every 0.5 s are shown in Fig. 6A; an animation of the simulation is available as Supplementary Video 3. For the studied duration of t = 5 s, the lightsail oscillates about the beam axis while remaining relatively flat and level, with no visibly apparent shape distortion thanks to the sufficiently large tensioning forces arising from spin-stabilization. Due to the finite absorptivity of Si3N4, the center region of the lightsail reaches a maximum temperature of 959 K. In contrast, the peripheral area remains significantly cooler (Fig. S4a), heating up to a maximum temperature of 489 K. The slower heat-up at the edge of the lightsail can be attributed to limited heat transport from the hot center of the lightsail, owing to the low thermal conductance of the silicon nitride membrane. Thermal conduction dominates over direct absorption as a source of heating in the peripheral areas of the lightsail, due to the underfilling laser beam. The peak temperature appears sufficiently below the vacuum decomposition temperature of Si3N4 (54), although this temperature is likely too hot for most payloads. Increasing the assumed hemispherical emissivity of 0.1 for thin Si3N4 membranes would be desirable, for example with additional metasurface designs for selective thermal radiation in the mid-infrared regime, or the addition of other material layers (15,16,58). Our simulation predicts a maximum strain of 0.091% in the Si3N4 membrane (Fig. S5A). With a Young's modulus of 270 GPa, such strain translates to a tensile stress of approximately 246 MPa, which is roughly 26 times lower than the reported 6.4 GPa tensile limit of Si3N4 (Table 1). Therefore, a meter-sized flexible lightsail is expected to exhibit mechanical stability in its propulsion phase despite being subject to large thermal gradients, spin tensioning, and nonuniform beam intensity.
Examining the trajectories of the flexible and rigid lightsails indicates that their motion is bounded and thus the dynamics appear to be marginally stable, as expected (Fig. 6B). During the entire 5 s duration of simulated propulsion, the lightsails remain within 180 cm of the beam center, as they traverse triangle-like trajectories in the x-y plane. Comparing the trajectory of the flexible lightsail to that of the identically patterned rigid version, both exhibit similar behavior consistent with marginal stability. Plotting the oscillatory displacement of the lightsail centers-of-mass along x, y and the radial distance r versus time (Fig. 6C) better reveals the slight deviations in trajectory. In the beginning, both flexible and rigid lightsails follow almost indiscernible trajectories. After 0.8 seconds, differences in x and y become more visible, but do not grow continuously over the studied time duration, ruling out the accumulation of numerical errors due to insufficiently small time stepping as a possible reason. Instead, we attribute the small differences in position to the role of shape distortions in the flexible lightsail and the effect of thermal expansion.
To elucidate the influence of temperature and thermal strain in the flexible lightsail simulations, we simulated propulsion under conditions of zero absorptivity and emissivity to keep the lightsail temperature constant at 300 K (Fig. S6A). The resulting trajectory is again very similar to that of the flexible and the rigid lightsail, but does not match either perfectly. However, a closer look reveals a closer resemblance in dynamics between the thermally inactive flexible lightsail and the rigid lightsail, which suggests that thermal effects play a bigger role than shape distortions, both of which exist due to the non-uniformity of the laser and optical pressure as well as the resulting non-uniform temperature distribution and thermal strain. We also observe oscillatory motions with multiple frequency components for both translations and rotations. Examining the lightsail tilt angles θ and ϕ versus time for the rigid lightsail (Fig. 6D, 6E), we observe a fast-oscillating component at 240 Hz for both tilt angles, associated with the assumed 120 Hz spin speed due to the design's two-fold cyclic symmetry (see insets), superimposed upon multiple slower nutation/precession frequencies. Throughout the simulation, although the pitch and roll angles grow larger than the initial tilt offset, both θ and ϕ remain bounded between ±7°. For the flexible lightsail tilt, we present the distribution of pitch and roll angles for all mesh triangles across the lightsail surface as normalized time-domain histograms in Fig. 6F and Fig. 6G. We observe overwhelmingly similar and bounded rotation dynamics for the flexible lightsail, proving again the effectiveness of spin stabilization. A closer look at shorter time scales reveals subtle differences in the time evolution of pitch and roll angles, as indicated by an angular spread of tilt angles of ~1°.
The simulations provide a high-fidelity numerical approximation of the initial lightsail trajectory, stress distribution, and shape evolution, which is sufficient to characterize the general beam-riding and structural behavior of stable lightsail designs, and to definitively identify unstable designs. The specific design presented here appears marginally stable for the chosen initial conditions throughout the 5 s duration of acceleration. However, substantial deviations from this design and set of chosen parameters can produce unstable behavior. Decreasing the spin frequency from 120 Hz to 80 Hz, increasing the beam diameter from 0.4D to 0.5D, or increasing the gap between resonators by 20% for both TE and TM unit cells all result in unstable dynamics (Fig. S7), which highlights the importance of judiciously choosing the beam width, spin frequency and optical design for passive stabilization.
Conclusions
We have presented time-domain multiphysics simulations of flexible lightsail membranes undergoing the initial stages of acceleration toward relativistic velocities due to radiation pressure propulsion. In this work we have explored both the lightsail beam-riding stability and dynamic structural stability. Specifically, we have shown proof-of-concept examples of flexible, meter-scale lightsails, spin-stabilized to tension the lightsail, that exhibit a stable shape without any stiffening elements. We have observed that certain concave specularly reflecting lightsail shapes such as paraboloids can enable both beam-riding stability and shape stability, and have also demonstrated passively stabilized flat lightsail designs based on Si3N4 metagratings. The latter is of particular interest for experimental lightsail development, owing to the favorable mechanical strength and low optical absorption of Si3N4, and its ability to be fabricated in planar thin-film form at the wafer scale. Specifically, we have demonstrated that high-speed spin stabilization at 120 Hz is largely effective in rigidifying a flexible metagrating-based lightsail to exhibit similar dynamics compared to its rigid counterpart, while the subtle differences between flexible and rigid metagrating lightsails can be explained by both structural deformations and thermal effects.
We note that the size and average illumination intensity for the designs reported here fall below the nominal design targets proposed by the Breakthrough Starshot program for interstellar missions. Furthermore, the dimensions of our metagratings cause them to be heavier than the nominal target of ~0.1 g/m². Further optimization of the metagratings and lightsail structure, potentially including the addition of other materials, will be necessary to produce a full-scale Starshot lightsail design. Our present design represents an important first step towards this goal, and the simulation tools reported here will likely be useful in achieving this goal. Future work should be directed towards modelling the temperature dependence of optical reflectivity, absorptivity, and emissivity, in order to better understand the upper limits of achievable acceleration, a key factor in determining the viability of interstellar exploration via laser-propelled lightsails. For many materials, experimental efforts may be needed to probe high-temperature properties. Other second-order effects may also be worthy of investigation, such as the effects of strain on optical properties. Our simulation approach may also be useful in addressing other challenges for interstellar lightsail development, such as payload integration and codesign of the propulsive laser system. Despite numerous simplifications, we have addressed the most relevant physics for flexible lightsail acceleration and flight, including first-order linear elastic behavior, heat flow, and optical scattering. We have presented time-domain simulations of stable lightsail structures undergoing up to five seconds of acceleration. Future work may allow longer simulation durations, but regardless of the chosen simulation duration, it is difficult to infer absolute stability from time-domain simulations of marginally stable lightsails, so a more useful future application of our approach might be the improvement and optimization of lightsail designs. Our present lightsail patterning was selected based on parametric optimization under rigid-body Floquet theory, but the complexity of flexible lightsail dynamics suggests that a more advanced optimization approach based on numerical time-domain simulations may yield more favorable designs, particularly as increasingly complex building blocks and physical behaviors are modelled. Future refinements, such as implementing temperature-dependent optical properties, or improving numerical time-stepping with implicit and higher-order methods, may allow for studies of acceleration over a longer period. Nevertheless, study of the initial seconds of lightsail acceleration provides considerable insight into flexible lightsail design. In connection with the work reported here, we have published an open-source version of our simulation code (59) to further expand efforts by the lightsail community to develop new and improved designs for interstellar propulsion, optical levitation, and long-range optical manipulation of macroscopic objects.
Materials and Methods
The electromagnetic responses of the TE and TM metagrating designs were simulated in COMSOL Multiphysics assuming periodic Floquet boundary conditions. For high-stress stoichiometric silicon nitride, we assumed a refractive index of Re(n) = 2 and an extinction coefficient of Im(n) = 2 × 10⁻⁶ at λ = 1064 nm (59). The TE and TM metagrating unit cells shown in Fig. 5B are defined by w1(TE) = 600 nm, w1(TM) = 520 nm, w2(TE/TM) = 200 nm, d(TE) = 1600 nm, d(TM) = 1350 nm, and a gap of 190 nm and 200 nm between resonators for the TE and TM unit cells, respectively. The resonator height and substrate thickness were chosen to be 400 nm and 200 nm, respectively. The process of identifying these self-stabilizing unit cell designs, which was based on Floquet theory, i.e., evaluation of the absolute values of the eigenvalues of the monodromy (state transition) matrix, is described in more detail in the Supplementary Information. Except for the resonator height and substrate thickness, all geometrical parameters were varied systematically to select and compare suitable metagrating designs. By sweeping the incidence angle between ±25° for both pitch (θ) and roll (ϕ) tilt, angle-dependent optical pressures can be obtained via integration of the Maxwell stress tensor around the respective unit cell. We used the exported look-up tables of optical pressures as inputs to our rigid and flexible membrane dynamics simulations. In the former case, optically induced forces and torques can be derived assuming a Gaussian beam characterized by its peak intensity I0 and beam width w. For a given set of initial conditions (position, velocity, angular orientation, and angular frequency), the coupled equations of motion were evolved numerically using MATLAB's ode45 solver to obtain the trajectory and time-dependent displacement and tilt of propelled rigid lightsails described by their centers-of-mass. Normalized relevant quantities can be converted to real-life values by specifying I0, the lightsail diameter D, and calculating the normalized time constant t0 = (mc/I0)^(1/2), where m is the total mass of the lightsail.
For a more detailed description of the modeling and dynamical simulation of flexible curved and flat lightsails, we refer the reader to the Supplementary Information. The MATLAB code has been made available on GitHub.
Tabulation of Material Properties
We have collected a number of candidate material property values from the literature for the purpose of simulating the structural dynamics of lightsails. These appear in Table S1 below. This is not intended as an exhaustive list or ranking of candidate materials for the interstellar lightsail, and importantly, it should be noted that the published properties of these materials can vary greatly depending on the method of fabrication, as well as the test geometry and method of measurement. Furthermore, most properties are reported based on room-temperature measurements, whereas during acceleration, lightsails will operate at elevated temperatures where material properties have been less comprehensively studied. We did not attempt to model temperature-dependent mechanical properties in the present study, although it would be straightforward to add this capability in the future.
Ultimately, further characterization of the lightsail material(s), as fabricated and over their intended operating temperature range, will be required to draw conclusions about the viability of any specific lightsail design.
Aluminum and polyimide are typical materials used for solar sails; we include them as a point of comparison. Note that the stringent requirements of ultralow optical absorption preclude the use of even the most reflective of metals for the interstellar lightsail application. It also seems unlikely that polymers could be used structurally in this application, owing to their low strength and limited temperature range. Other materials offering exceptional mechanical strength such as graphene and carbon nanotubes can also likely be ruled out owing to their high optical absorption. However, there are likely a wide range of dielectrics and wide-bandgap semiconductors which may prove useful in lightsail applications, in addition to those shown in Table S1.
For materials such as crystalline silicon, SiO2 and diamond, the highest recorded strengths have been achieved by small (< 50 μm diameter) filaments of high-purity materials with pristine surfaces, tested in bending over a small mandrel to further limit the stressed surface area and thus the chances of encountering a surface defect. It is uncertain if such high strengths could be achieved in a membrane geometry. Furthermore, crystalline materials, whether bulk or 2D, may exhibit reduced strength if used to fabricate arbitrarily curved lightsail surfaces such as spheres, cones, or paraboloids, owing to relative weakness of certain crystal planes, or the inability to perfectly join crystal surfaces at domain boundaries.
Mesh-based simulator for flexible lightsails
We have developed a time-domain simulation code for studying the dynamic behavior of lightsails under acceleration. This is facilitated by modelling the lightsails as a discrete mesh, wherein the nodes represent mass, inertia, temperature, and shape; the edges represent the stiffness and thermal conductivity of the material; and enclosed triangles represent the surface area through which light interacts with the lightsail. Nodes of the mesh are assigned positions along the desired surface, with their spacing chosen to yield approximately uniform edge length and aspect ratio among the triangles. An example simulation mesh for a paraboloid lightsail is plotted in Fig. S1. This code has been open-sourced at: https://github.com/Starshot-Lightsail. Simulations begin with generation of the mesh. Currently, supported meshes must have two-dimensional topology, but can represent any three-dimensional surface so long as the surface is not self-shading at any time. The provided mesh generator script provides parametric options to generate round, square, or hexagonal lightsails, with either flat, spherical, parabolic, or cone/pyramid vertical profiles. Non-round footprints can specify either smooth or faceted vertical profiling. Region and texture mapping is also performed in the mesh generator. This assigns varying mechanical and optical properties to various regions of the lightsail. Regions can be defined in either Cartesian or polar mapping schemes.
The simulation process is outlined in Fig. S2. Briefly, evolution of the shape and position of the sail is calculated iteratively in the time domain, using a fixed time step calculated to be substantially smaller than the period of any vibrational mode of the mesh (typically, 1/20th to 1/10th of the period of the highest resonant frequency). Modeled physics include: mechanical response based on linear elastic theory, tensile failure detection, radiative cooling, thermal conduction, thermal expansion, ray-tracing for specular surfaces, and calculation of optical forces and absorption using either fixed values of reflectance and absorption, a 1D look-up table (LUT) representing calculations of specular behavior using the transfer matrix method, or a 2D LUT representing the angle-dependent response of nanophotonic or metagrating surfaces.
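The time-step rule of thumb can be made concrete by estimating the highest vibrational frequency of the mass-spring mesh from its stiffest edge and lightest node and taking one twentieth of the corresponding period; the numbers below are assumed examples, not values taken from our meshes.

```python
import math

# Rough time-step selection from the stiffest edge and lightest node (assumed values).
k_max = 5.0e3        # stiffest edge spring constant, N/m
m_min = 2.0e-7       # lightest nodal mass, kg

f_highest = math.sqrt(k_max / m_min) / (2 * math.pi)
dt = 1.0 / (20.0 * f_highest)
print(f"highest mode ~ {f_highest / 1e3:.1f} kHz -> dt ~ {dt * 1e6:.2f} us")
```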
Upon mechanical or thermal failure of the membrane, the simulation can either be terminated, or allowed to proceed to determine the margin by which the chosen conditions exceed the material capabilities. Alternately, to enable cursory depictions of the progression of such failures, we can delete the affected elements from the ongoing simulation mesh at the moment of failure, but this is not intended to accurately model the dynamics of tensile or thermal failures. Specifically, our simulator does not model collisions between the collapsed lightsail elements, neglects beam occlusion effects for inverted shapes, and treats tensile failure simplistically; thus the fully collapsed and tattered shapes are not simulated accurately. The images of mechanical failure are included to better show the general progression of the shape instabilities. In Fig. 3 of the main text, we chose a hexagonal perimeter shape for the flat membranes to better illustrate the collapse.
The present approach cannot be used to study the scenario of polarization mismatch, which is necessary to evaluate whether our spinning lightsails are stable in non-rotating beams, or whether lightsails can self-synchronize their rotation to that of the beam during acceleration. Moreover, due to the accumulation of numerical errors introduced by explicit time stepping in our code, simulations cannot be performed over indefinite timescales to definitively prove marginal stability. In the linearized stability analysis below, we adopted a shorthand notation for the partial derivatives appearing in the Jacobian matrix.
In our case, with only pitch- and roll-restoring behavior and translational stability, many of the matrix elements are either zero, very small and thus approximately zero, or can be calculated analytically, leaving us with a full-rank Jacobian matrix of reduced dimension. By numerically evaluating the remaining nonzero matrix elements of the Jacobian matrix, the presence of real parts in any of its eigenvalues indicates exponential growth of the respective solution to the equations of motion and thus instability of the laser-propelled system. Due to the lack of damping terms in the system's equations of motion, eigenvalues with nonzero real parts will always come in pairs with positive and negative real part.
The case of spinning rigid lightsails requires a more careful stability analysis, where the absolute values of the complex eigenvalues of the monodromy matrix, which can be obtained from numerical integration involving the system's Jacobian matrix, determine whether spinning lightsails are stable or not. Importantly, the spin rate is no longer assumed to be zero (or close to zero), but instead takes on a finite value, i.e., 2π times our desired spinning frequency. Similarly, the yaw angle will vary between 0 and 2π during one spin period and thus be time-dependent. The equations of motion can be linearly expanded around the nominal equilibrium, which, owing to the finite spin rate, is not a true equilibrium; nevertheless, evaluating the resulting time-periodic linearized system permits the Floquet analysis described above. As for the first case, we observe multiple frequency components within the simulated trajectories and tilt angles (Fig. S3D-S3G), with the most noticeable one being the fast frequency component at 240 Hz superimposed upon slower frequencies of approximately 2.5 Hz and 0.6 Hz. The observation that the displacement along x and y is more tightly confined can also be made for the pitch and roll angles of the rigid lightsail, as they remain bounded within ±1.3° during the simulated timespan, suggesting a lesser degree of deformation and vibration in the membrane. The temporal evolution of the pitch and roll angle distributions of the flexible lightsail again follows closely the θ and ϕ of the rigid lightsail, confirming that spin stabilization at 120 Hz is sufficiently fast to treat our flexible lightsail as quasi-rigid. Nevertheless, we note that a finite angular spread of pitch and roll angles of ~1° can be observed across the mesh elements constituting the flexible lightsail. Finally, due to the discretized surface of the flexible lightsail, signs of mesh elements on the perimeter experiencing larger rotations remain visible in the insets of Fig. S3F and S3G, despite truncating histogram bins containing only a few elements (fewer than 10 within bins of width 0.05°).
Temperature & strain analysis of passively stabilized flexible metagrating-based lightsails
As mentioned in the main text, our flexible lightsail simulator stores several variables of interest for post-processing and analysis, including the peak and average temperature of the lightsail during propulsion and the maximum strain on the lightsail due to mechanical forces and thermal expansion, downsampled by a factor of 8 for memory management. Due to the underfilling beam width of w = 0.4D, regardless of whether the lightsail is initially only translated or also tilted, the difference between the peak, average and minimum temperatures of points on the lightsail can reach several hundred Kelvin (Fig. S4). While the center of the lightsail heats up to a peak temperature of just below 1000 K during propulsion, its perimeter or edge points experience a temperature rise of less than 200 K, with the average temperature falling between these two extremes. Including an initial tilt in the simulated trajectories induces more variation in both the peak (center) and minimum (edge) temperatures of the accelerated lightsail.
Figure 1. Conceptual illustrations of design approaches. Designs for achieving (a) beam-riding stability, and (b) structural stability, in lightsail membranes. In panel (a), the red arrow depicts the accelerating beam position, the orange arrows indicate the direction of reflected light, and the blue arrows indicate the force of radiation pressure.
Figure 2. Modeling flexible lightsails and light-matter interaction with a mesh-based time-domain simulator. (A) Ultrathin and meter-scale lightsails and their deformations can be modeled by a mesh comprising masses m (nodes) connected by springs with stiffnesses k (edges), enclosing triangles of area A. Light-matter interactions are calculated for each mesh triangle based on discretization of the incident light as a localized beam I0. Modeled behaviors include (B) absorption of light and thermal emission, which heat and cool the structure, driving heat flow, thermal expansion, and changes in material properties; (C) specular reflection and transmission of light, producing photon pressure, and in some cases causing reflected light to impinge on other triangles; (D) optical diffraction from periodic wavelength-scale surface patterning, producing transverse directional forces from photon pressure; and (E) optical wavefront shaping such as beam steering with subwavelength optical metasurfaces.
Figure 3. Simulation results for flat versus curved specular lightsails, with and without spin stabilization. Illumination is in the +z direction starting shortly after t = 0, with a Gaussian profile (I0 = 4 GW/m², Rwaist = 0.5 m, λ = 1.55 μm), offset by 80 mm from the initial lightsail centers. Left: surface renderings show temperature, shape, and lateral position of each lightsail at the indicated times during simulation. Surface shading was applied to enhance depiction of shape. The vertical magenta lines show the beam centerlines. All lightsail images appear at the same scale; however, their vertical positions have been shifted for presentation. Right plots: the distance between the lightsail center of mass and the beam centerline (above), and the lightsail z velocity (below), plotted versus time. Animations of all five simulations are available as Supplementary Video 1.
Figure 4. Effects of multiple internal light reflections within a spin-stabilized flexible paraboloid lightsail. Simulated shape (a), acceleration (b), peak temperature (c), and trajectory (d) of a 1-m diameter paraboloid lightsail, with and without the effects of internal light reflection within the lightsail. Paraboloid lightsails and acceleration conditions are similar to those in Fig. 3. Animations of these and other raytracing-based simulations are available as Supplementary Video 2.
Figure 6. Acceleration dynamics of a flexible and a rigid spinning lightsail based on the same composite metagrating pattern. Lightsails are initially offset by x = y = 50 mm from the beam center and rotated by θ = ϕ = −2°. (A) Snapshots of the beam-riding flexible lightsail's position, angular orientation, temperature and shape at different times. (B) Lightsail trajectory throughout the 5 s simulation duration. (C) Lightsail x- and y-position and radial distance r from the beam center versus time, exhibiting bounded oscillation around the equilibrium at x, y = 0. (D), (E) Evolution of pitch θ and roll ϕ, respectively, of the rigid lightsail versus time, showing multi-frequency oscillation around the equilibrium at θ, ϕ = 0°. (F), (G) Distribution of θ and ϕ angles, respectively, of all mesh elements comprising the flexible lightsail versus time, showing both bounded oscillations and limited angular spread, with minor shape distortion observed through the range of surface tilt angles at any given time. For (D)-(G), insets show fast-frequency oscillations within a reduced time window (0.1 s). An animation of this simulation is available as Supplementary Video 3.
Fig. S1. Three-dimensional surfaces can be constructed using Delaunay triangulation to model the simulation mesh of a paraboloid lightsail.
Table 1. Figures of merit for mechanical strength of candidate lightsail materials.
Table S1. Summary of published mechanical properties of candidate lightsail materials
Supplementary Information: Dynamically Stable Radiation Pressure Propulsion of Flexible Lightsails for Interstellar Exploration | 13,528.2 | 2023-01-21T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
CoeViz: A Web-Based Integrative Platform for Interactive Visualization of Large Similarity and Distance Matrices
Similarity and distance matrices are general data structures that describe reciprocal relationships between the objects within a given dataset. Commonly used methods for representation of these matrices include heatmaps, hierarchical trees, dimensionality reduction, and various types of networks. However, despite a well-developed foundation for the visualization of such representations, the challenge of creating an interactive view that would allow for quick data navigation and interpretation remains largely unaddressed. This problem becomes especially evident for large matrices with hundreds or thousands of objects. In this work, we present a web-based platform for the interactive analysis of large (dis-)similarity matrices. It consists of four major interconnected and synchronized components: a zoomable heatmap, an interactive hierarchical tree, a scalable circular relationship diagram, and a 3D multi-dimensional scaling (MDS) scatterplot. We demonstrate the use of the platform for the analysis of amino acid covariance data in proteins as part of our previously developed CoeViz tool. The web platform enables quick and focused analysis of protein features, such as structural domains and functional sites.
Introduction
Similarity and distance matrices (SMs and DMs) are common data structures for representing interrelationships within a given set of objects. These matrices can be used for identifying clusters of objects, inferring networks and communities, estimating distribution density, and other applications requiring quantitative measures of relatedness between objects. While the field of analyzing and visualizing these matrices is well established, challenges remain in presenting large datasets and providing interactive means for data browsing and analysis.
Heatmaps, dendrograms, circular relationship diagrams, networks, and dimensionality reduction scatterplots are popular methods for visualizing similarity and distance matrices. Heatmaps resemble a grid, with each cell colored according to the distance between a given pair of objects. The colors are normally a gradient of shades to represent the min-max range of all distances in the matrix. The main diagonal can be left blank or contain additional information pertaining to a given single object (e.g., size, weight, or any other individual quantitative property).
The hierarchical tree is now interactive: it responds to clicks on tree leaves by refocusing other visual components to a newly selected residue, and auto-scrolls to a tree leaf when the user selects a residue in other views. We also added a new component, a 3D MDS scatterplot, that allows the user to view a distance matrix interactively in 3D and identify groupings of residues. All visual components are now synchronized and automatically update all views upon changing focus in one component.
The developed web-based platform for interactive visualization of similarity and distance matrices consists of four major interconnected components: heatmap, dendrogram, circular diagram, and three-dimensional MDS scatterplot. Figure 1 illustrates the interaction of these components in CoeViz. Their specific implementation is described below.
Figure 1. A diagram of the interaction of the visualization components in CoeViz. Each arrow indicates how the user can navigate through the visualizations from one method to another. One-directional arrows indicate that the viewed data can be updated in the targeted module only upon the change in focus of another module. Two-directional arrows indicate that the visualized data is synchronized both ways. External viewers are dedicated to protein sequence/structure view and include POLYVIEW-2D [2], POLYVIEW-3D [3], and Jmol [4].
The heatmap presents covariance data (or similarity data, in general) based on one of the implemented covariance metrics [1]. The color gradient spans from white (representing no covariance, 0), through blue (moderate, 0.5), to red (high covariance, 1). The main diagonal contains frequencies of given amino acids observed at the individual positions in a given MSA. Heatmaps are zoomable from a single pixel per position to large grid cells presenting detailed information, such as row and column indexes, corresponding residue labels, and covariance scores. For quick navigation, heatmaps can be dragged with a mouse to pan to another part of the grid or be refocused using either a navigation pane or another visualization component. Figure 2 shows the same heatmap at different zoom levels.
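The exact interpolation CoeViz uses for this gradient is not spelled out here; as a minimal sketch (in Python, with hypothetical helper names), the described white-blue-red mapping could be realized by simple piecewise-linear blending:

    # Hypothetical sketch: map a covariance score in [0, 1] to an RGB color,
    # following the white (0) -> blue (0.5) -> red (1) scheme described above.
    def score_to_rgb(score):
        white, blue, red = (255, 255, 255), (0, 0, 255), (255, 0, 0)
        if score <= 0.5:
            t, lo, hi = score / 0.5, white, blue
        else:
            t, lo, hi = (score - 0.5) / 0.5, blue, red
        return tuple(round(a + t * (b - a)) for a, b in zip(lo, hi))

    assert score_to_rgb(0.0) == (255, 255, 255)   # no covariance -> white
    assert score_to_rgb(1.0) == (255, 0, 0)       # high covariance -> red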
The dendrogram presents results of hierarchical clustering of covariance data transformed into a distance matrix. In the context of protein data, leaves of the tree dendrogram are colored according to physico-chemical properties of amino acids (Figure 3). The added interactivity of the dendrogram greatly improves navigation through the data and synchronization of visualization. When a leaf in the dendrogram is clicked, it highlights the cell in the main diagonal of the heatmap and opens (or refocuses) the circular relationship diagram for that residue. To account for large proteins, the tree view is scrollable and automatically refocuses on a residue when it is chosen by the user in another interactive visualization component.
Figure 3. A fragment of the dendrogram derived from hierarchical clustering of co-varying residues (leaves). Colors reflect physico-chemical properties of amino acids. The color notation is as previously defined [2].
The circular relationship diagram (CD) is automatically updated for each newly chosen residue and, by default, displays the top 5% of residues most co-varying with the chosen residue. The number of residues shown can be altered by changing the cutoff on covariance scores (Figure 4). The diagram can be interactively expanded to show the same data in table format. One can refocus the view to any residue in the diagram to reveal its own set of top co-varying residues. Such a refocus invokes an instant update of the three other visual components to reflect the change in focus. The CD also enables external visualization of the residues displayed in the diagram using the POLYVIEW web-based platform: POLYVIEW-2D [2], POLYVIEW-3D [3], and Jmol [4]. The latter two options are available only when a protein 3D structure was used as input for the CoeViz analysis. The Jmol view enables interactive analysis of the structural arrangement of the selected co-varying residues, facilitating the inference of their structural and/or functional relationships.
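As a minimal sketch (not CoeViz's actual code) of the default top-5% selection described above, assuming a symmetric covariance matrix `cov` with one row and column per residue:

    import numpy as np

    def top_covarying(cov, residue, top_fraction=0.05):
        """Indices and scores of the residues most co-varying with `residue`,
        keeping the top `top_fraction` by covariance score (self excluded)."""
        scores = cov[residue].astype(float).copy()
        scores[residue] = -np.inf                    # exclude the residue itself
        k = max(1, int(round(top_fraction * (len(scores) - 1))))
        order = np.argsort(scores)[::-1][:k]         # descending by score
        return order, scores[order]

    rng = np.random.default_rng(0)                   # toy symmetric matrix, 100 residues
    m = rng.random((100, 100))
    idx, vals = top_covarying((m + m.T) / 2, residue=42)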
The three-dimensional view of MDS allows for a global yet compact presentation of relationships between the residues (Figure 5). From covariance data projected into 3D by MDS, one can identify domains of the protein, some small clusters of functionally relevant residues, and residues standing away from the rest. The 3D view pane provides interactive zoom-in and rotation capabilities, as well as the labeling of selected residues. The current implementation of the MDS view does not allow for the interactive selection of individual residues on the scatterplot to be used for refocusing views in other CoeViz components, due to limitations of the R library used.
Analysis of Human ESR1
Human estrogen receptor alpha (ESR1) is a multi-domain protein that belongs to the family of nuclear receptors. It is an interesting object for amino acid covariance analysis and visualization because its domains, while all serving the function of a transcription factor, perform distinct molecular roles, as detailed below. The domains also contain additional functional regions, such as the zinc-coordinating residues (zinc fingers) in the DNA-binding domain and the ligand-binding residues in the transactivation domain AF2.
The full protein sequence of ESR1 (595 amino acids) was submitted for analysis by CoeViz using the χ2 covariance metric adjusted for phylogenetic bias in the MSA. Figure 6 shows a heatmap of covariance scores for residues across the entire protein. As can be seen from the figure, the boundaries of the patterns of co-varying residues by and large coincide with the known domains and functional regions of the protein.
We further investigated whether residues involved in distinct functions, such as metal coordination, DNA- and ligand-binding, or protein-protein interaction, can be identified as separate clusters, and with what other residues they cluster.
As was mentioned earlier, ESR1 comprises two Zn fingers in its DNA-binding domain. From each Zn finger, we picked the first residue that is known to coordinate a Zn2+ ion: C185 and C221 from ZF1 and ZF2, respectively. Figure 7 shows that these residues were clustered with their partners, metal-coordinating residues C188, C202, and C205 and C227, C237, and C240, respectively. The same two clusters also contain residues directly binding DNA: H196, K206, R211, R234, and R241. Other DNA-binding residues-Y195, Y197, E203, G204, A207, K210, K235, and Q238-did not form a distinct cluster.
Residues involved in direct ligand (estradiol) binding or in protein dimerization and interaction with a co-activator were not clustered together by hierarchical clustering. Still, one can analyze their mutual covariance-based distances using an interactive 3D MDS scatterplot (Figure 8).
Comparison with Other Existing Tools
The presented tool is meant to illustrate the general concept of the visualization of large (dis-)similarity matrices via synchronized orthogonal views. However, since the examples presented here pertain to covariance data in proteins, a number of existing servers for coevolution analysis in proteins were evaluated. Based on the original publications, in which some visualization means for the results were presented, we tried ConEVA [5], EVcouplings [6], and GREMLIN [7] using the same human ESR1 protein.
The ConEVA web-server was not responsive after multiple attempts, so it may no longer be supported. EVcouplings accepted the protein input with the remaining parameters left at their defaults.
No results were returned two days after submission. It is possible that the server is not meant for large or multi-domain proteins. GREMLIN accepted the input with the warning "Note, due to limited resources, your submission may take forever to complete (Jobs Running: 0)." Nevertheless, the server found an identical query protein submitted previously by another user and returned results with the input parameters as specified by that user. Figure 9 contains the output provided by GREMLIN, in which the covariance analysis is overlaid with pairwise residue contact information collected from Protein Databank entries containing homologous protein chains.
Figure 9. Results of the GREMLIN server [7] for human ESR1 overlaid with known residue contacts found in Protein Databank (PDB). Blue filled circles are GREMLIN results (scaled score > 1). The grey/red filled circles underneath are PDB residue contacts (minimal distance < 5 Å). The shade of the circles is based on 10 HHsearch results. Inter-oligomeric contacts in the PDB are in shades of red.
The contact map for ESR1 from GREMLIN is static, with no interactive functionality or mouse hover information provided, which makes it difficult to locate what pair of residues a given pixel/shade represents. It should be noted that GREMLIN does provide an interactive analysis for generated covariance data when a 3D structure is available for a given protein sequence. Collectively, other existing servers either do not provide as versatile visualization techniques as CoeViz does or are not capable of processing large and/or multi-domain proteins in a reasonable time frame.
Discussion
Similarity or distance matrices are a natural way of presenting relationships between objects. However, analysis and visualization of such matrices for large datasets remain challenging. Different clustering algorithms and visualization methods usually have various strengths and weaknesses. To improve the process of visualization and navigation through the data, we have implemented an online platform for interactive visualization that combines a zoomable heatmap, an auto-scrolling hierarchical clustering tree, a scalable circular relationship diagram, and an interactive 3D multidimensional scaling scatterplot. All components are interconnected and synchronized, which greatly facilitates the large data analysis.
The purpose of this work is to demonstrate the concept of interactive multi-faceted analysis of large SMs and DMs. The analysis of covariance data in proteins was used as an illustration of the platform's utility; by using the different approaches in combination, one can easily browse the data and infer related objects from sparse, noisy data. None of the individual methods alone would allow for such efficient data navigation and analysis.
Web Implementation
The client side of the CoeViz interface is based on JavaScript libraries, including D3 and WebGL. The server side runs on Perl, Python, and R scripts.
The heatmap and circular diagram were implemented using the D3 library [8]. D3 is used for manipulating the document object model (DOM), processing the data, providing interactivity, and efficiently rendering the graphics on the HTML canvas.
For the dendrogram, a JSON file from the output of the R hclust function is generated using the jsonlite library [9]. Residues are clustered using the complete linkage hierarchical clustering algorithm. The JSON file is then loaded into the CoeViz web page to render an interactive dendrogram with animations using SVG elements.
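CoeViz performs this step in R (hclust plus jsonlite); purely as an illustrative Python equivalent of the same complete-linkage-clustering-to-JSON conversion (the labels and toy matrix below are made up):

    import json
    import numpy as np
    from scipy.cluster.hierarchy import linkage, to_tree
    from scipy.spatial.distance import squareform

    def dendrogram_json(dist_matrix, labels):
        """Complete-linkage clustering of a square distance matrix,
        exported as a nested JSON tree suitable for a web viewer."""
        condensed = squareform(dist_matrix, checks=False)   # condensed 1-D form
        root = to_tree(linkage(condensed, method="complete"))

        def as_dict(node):
            if node.is_leaf():
                return {"name": labels[node.id]}
            return {"height": node.dist,
                    "children": [as_dict(node.left), as_dict(node.right)]}

        return json.dumps(as_dict(root))

    d = np.array([[0, 2, 6, 10], [2, 0, 5, 9], [6, 5, 0, 4], [10, 9, 4, 0]], float)
    print(dendrogram_json(d, ["A10", "C11", "G12", "U13"]))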
The MDS scatterplot is generated using the RGL R library [10]. The R cmdscale function reduces the distance matrix to three dimensions, and RGL then generates WebGL code for the interactive HTML visualization.
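R's cmdscale implements classical (Torgerson) MDS; the short Python sketch below reproduces the same computation and is included only to make the step explicit (it is not part of the CoeViz code base):

    import numpy as np

    def classical_mds(dist, k=3):
        """Classical (Torgerson) MDS of an n x n distance matrix,
        returning n points in k dimensions (the quantity cmdscale computes)."""
        n = dist.shape[0]
        j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        b = -0.5 * j @ (dist ** 2) @ j               # double-centered Gram matrix
        eigvals, eigvecs = np.linalg.eigh(b)
        order = np.argsort(eigvals)[::-1][:k]        # k largest eigenvalues
        pos = np.clip(eigvals[order], 0, None)
        return eigvecs[:, order] * np.sqrt(pos)

    # coords = classical_mds(distance_matrix, k=3)   # one 3-D point per residue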
Heatmaps and MDS plots can be exported as images in PNG format, whereas circular diagrams and dendrograms are exported in SVG format.
The CoeViz web application is available via http://polyview.cchmc.org/. Documentation with interactive examples can be found at http://polyview.cchmc.org/coeviz_doc.html. The JavaScript and R code for the integrated web application is available from http://github.com/frazierbaker/coeviz. The interactive dendrogram component is available standalone at http://github.com/frazierbaker/d3ndro or as an NPM package under the name "d3ndro." Details on computing MSA and covariance scores can be found in the original CoeViz publication [1] as well as in the documentation web-page specified above.
Annotation of Protein Structure and Function
The protein sequence of human ESR1 was retrieved from the UniProt database (ID: P03372). The same UniProt entry was used to retrieve information about the boundaries of structural domains and functional regions. Resolved parts of the protein structure were retrieved from the Protein Databank [11]: PDB ID 1hcq for the DNA-binding domain, and PDB ID 3uud for the ligand-binding domain co-crystallized with its natural ligand estradiol and protein interaction partners. The following tools were used to retrieve additional information about specific residues based on the resolved structures: POLYVIEW-2D [2] for the identification of metal- and DNA-binding residues and SPPIDER [12] for the analysis of protein-protein interaction sites. | 6,193.6 | 2018-01-13T00:00:00.000 | [
"Computer Science"
] |
The expected number of critical percolation clusters intersecting a line segment
We study critical percolation on a regular planar lattice. Let $E_G(n)$ be the expected number of open clusters intersecting or hitting the line segment $[0,n]$. (For the subscript $G$ we either take $\mathbb{H}$, when we restrict to the upper halfplane, or $\mathbb{C}$, when we consider the full lattice). Cardy (2001) (see also Yu, Saleur and Haas (2008)) derived heuristically that $E_{\mathbb{H}}(n) = An + \frac{\sqrt{3}}{4\pi}\log(n) + o(\log(n))$, where $A$ is some constant. Recently Kov\'{a}cs, Igl\'{o}i and Cardy (2012) derived heuristically (as a special case of a more general formula) that a similar result holds for $E_{\mathbb{C}}(n)$ with the constant $\frac{\sqrt{3}}{4\pi}$ replaced by $\frac{5\sqrt{3}}{32\pi}$. In this paper we give, for site percolation on the triangular lattice, a rigorous proof for the formula of $E_{\mathbb{H}}(n)$ above, and a rigorous upper bound for the prefactor of the logarithm in the formula of $E_{\mathbb{C}}(n)$.
Background and statement of the main result
Consider critical bond percolation on $\mathbb{Z}^2$. Kovács, Iglói and Cardy [KIC12] studied the expected number of clusters which intersect the boundary of a polygon. The leading order is the size $n$ of the boundary. The prefactor of this term is lattice dependent. Their main interest is in the first correction term (of order $\log n$). Their motivation came from relations with entanglement entropy in a diluted quantum Ising model. Using indirect and non-rigorous methods from conformal field theory and the $q$-state Potts model (letting $q \to 1$), they derived a (universal) formula for the prefactor of the logarithmic term.
A special case of their result is that of a line segment (treated in Section F of their paper). In their setup the line segment was placed in the full plane, and they claim that the prefactor is equal to $\frac{5\sqrt{3}}{32\pi}$. Furthermore, they refer to an earlier result obtained by Cardy in [Car01] (see also Yu, Saleur and Haas [YSH08]), where the line segment was placed on the boundary of the half-plane. In the latter case the claim is that the prefactor equals $\frac{\sqrt{3}}{4\pi}$. This latter result was also obtained by non-rigorous arguments using $q$-state Potts models.
This motivated us to try to find rigorous and more direct proofs of these results (starting with the case of line segments). Since the prefactors are believed to be universal it is natural to consider the most well-studied percolation model, site percolation on the triangular lattice with $p = p_c = 1/2$.
Because conformal invariance plays a role, it is convenient to identify the plane with the set $\mathbb{C}$ of complex numbers. We embed the triangular lattice $\mathbb{T}$ in the half-plane $\mathbb{H} = \{z : \Im z \ge 0\}$ or in the full plane $\mathbb{C}$, with vertex set $\{m + nj : m \in \mathbb{Z}, n \in \mathbb{N} \cup \{0\}\}$ (resp. $\{m + nj : m, n \in \mathbb{Z}\}$), where $j = e^{\pi i/3}$. We denote the probability measure by $P_{\mathbb{H}}$ (resp. $P_{\mathbb{C}}$) and the expectation by $E_{\mathbb{H}}$ (resp. $E_{\mathbb{C}}$). For subsets $A, B \subset \mathbb{C}$ we denote by $A \leftrightarrow B$ the event that there are open vertices $x, y$ on the triangular lattice, with $x \in A$, $y \in B$, which are connected by a path of open vertices. With some abuse of notation we denote, for any $x \in \mathbb{C}$, the set $\{x\}$ by $x$. A cluster is a maximal collection of connected vertices. Consider the line segment $[1, n]$ on $\mathbb{R}$, containing $n$ vertices. We are interested in the quantity below, where $\mathcal{C}_G$ is the collection of clusters of the triangular lattice in $G = \mathbb{H}, \mathbb{C}$.
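Restating the abstract's definition in the present notation (so this display is a paraphrase rather than the paper's own equation):
$$E_G(n) \;=\; E_G\Bigl[\,\#\bigl\{C \in \mathcal{C}_G \,:\, C \cap [1,n] \neq \emptyset\bigr\}\Bigr], \qquad G \in \{\mathbb{H}, \mathbb{C}\}.$$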
It is easy to derive the leading (order $n$) term: see the Remark in Section 1.2. In the case of the half-plane we could obtain a rigorous proof for the earlier mentioned logarithmic correction term. In the case of the full plane we only obtained a logarithmic upper bound for the correction term. (We do not see a method to prove the precise prefactor $\frac{5\sqrt{3}}{32\pi}$ given in [KIC12]; even finding a non-trivial lower bound is, in our opinion, a challenging problem.)
More precisely, our main contribution is a rigorous proof of the following:
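As summarized in the abstract, the statement in question is, for some constant $A$,
$$\text{(a)}\quad E_{\mathbb{H}}(n) \;=\; A\,n \;+\; \frac{\sqrt{3}}{4\pi}\,\log(n) \;+\; o(\log(n)),$$
together with (b) a rigorous upper bound on the prefactor of $\log(n)$ in the analogous expansion of $E_{\mathbb{C}}(n)$ (the heuristic value from [KIC12] being $\frac{5\sqrt{3}}{32\pi}$).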
Some introductory computations
We now describe the first steps of the strategy to derive the result above. This will also give some insight into where the log comes from. First we rewrite the number of clusters. Remark: It is known that there is no infinite cluster almost surely, hence $P_G(k \leftrightarrow (-\infty, 0]) \to 0$ as $k \to \infty$. Therefore the above computation implies that the leading term of $E_G(n)$ is $n\bigl(P_G(1 \leftrightarrow (-\infty, 0]) - \frac{1}{2}\bigr)$. Let us introduce the notation $L_G(n)$ for the correction term normalized by $\log(n)$; Theorem 1 is then equivalent to (a) $\lim_{n\to\infty} L_{\mathbb{H}}(n) = \frac{\sqrt{3}}{4\pi}$. Take $\varepsilon > 0$. We will introduce $M = M(n, \varepsilon) \in \mathbb{N}$ and a sequence $a(i) = a(i, n, \varepsilon)$ for $1 \le i \le M + 1$, such that $a(M+1) = n$.
With these values we split the sum in $L_G(n)$ into the following terms, for all $1 \le i \le M$. The idea is now, roughly speaking, to choose $a(i, n, \varepsilon)$ so that the ratio of two consecutive values equals $1 + \varepsilon$, and to choose $M$ such that $a(1, n, \varepsilon)$ goes to infinity as $n \to \infty$ but is of smaller order than $\log(n)$. Then the term $f_0 / \log(n)$ is obviously negligible. We will see that $M$ is roughly of order $\log(n)/\varepsilon$. The existence of the limit $\lim_{n\to\infty} L_G(n)$ would follow if we can show that, for $\varepsilon$ close to zero, $f_i$ is approximately a constant times $\varepsilon$ as $n \to \infty$.
In the case that $G = \mathbb{H}$, we will see in Section 3.1 that this strategy indeed leads to the existence, and even the value, of the limit of $L_{\mathbb{H}}(n)$ as $n \to \infty$. Unfortunately, in the full plane it only leads to the upper bound stated in Theorem 1 (b), as we will see in Section 3.2. Now we make the above choices precise. We define $a(i, n, \varepsilon)$ and $M(n, \varepsilon)$ accordingly, and note that $a(1, n, \varepsilon)$ is then of order $\log(n)$. To examine $f_i$ it is useful to rewrite it in terms of an expectation; this leads to the definition (5) of the random variables $T(i)$.
Ingredients from the literature
In this section we state some results which we will use in Section 3 to prove Theorem 1. First some additional notation. We use the following notation for the probabilities of so-called arm events. For $m < n \in \mathbb{N}$, let $\pi_1(m, n)$ be the one-arm probability, and let $\pi_3(m, n)$ be the probability of having two disjoint closed paths, and an open path, from $[-m, m]^2$ to $\mathbb{H} \setminus [-n, n]^2$. The following lemma is well known (see, for example, Theorem 11, Proposition 14, and Theorem 24 in [Nol08]).
Lemma 2. There exist constants $C_1, C_2 > 0$ and $\alpha \le 1/2$ such that, for all $m < n$, the corresponding bounds on the arm probabilities hold. In fact, much more precise results for these probabilities are known, but they will not be used in this paper.
In the rest of this section, for a simply connected domain $D \subsetneq \mathbb{C}$ and $n \in \mathbb{N}$, the notation $nD$ denotes the set $\{n \cdot u : u \in D\}$. For points $a_1, a_2$ on the boundary of $D$ we denote by $[a_1, a_2]$ the part of the boundary of $D$ between $a_1$ and $a_2$ in the counterclockwise direction. Furthermore, we generalize the notation slightly: by $P_D$ (and $E_D$) we denote the probability measure (and expectation) for percolation restricted to the triangular lattice on $D$. In this setting two intervals $[a_1, a_2]$ and $[a_3, a_4]$ on the boundary are said to be connected if there are vertices $x, y$ of the lattice inside $D$ which are connected by an open path and are such that $x$ has an edge crossing $[a_1, a_2]$ and $y$ has an edge crossing $[a_3, a_4]$.
The first theorem is the famous Cardy's formula (proposed in [Car92]), which was proved by Smirnov in [Smi01].
Theorem 3 (Cardy's formula, [Smi01]). Let $D \subsetneq \mathbb{C}$ be a simply connected domain and $\varphi : D \to \mathbb{H}$ a conformal map. Let $a_1, a_2, a_3, a_4$ be ordered points on the boundary of $D$. Then the limiting crossing probability is given in terms of the cross-ratio $\lambda$ of (8). This theorem concerns crossing probabilities of generalized rectangles in one 'direction'. The following theorem gives a formula for probabilities of crossings in two directions. It is named after Watts, who proposed the formula in [Wat96]. The first rigorous proof was by Dubédat [Dub06]. An alternative proof was obtained by Schramm (see [SW11]).
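For reference, the limiting crossing probability in Theorem 3 is the standard (Carleson) form of Cardy's formula; in one common convention for the cross-ratio $\lambda$ (the paper's own convention is fixed in its equation (8)), it reads
$$\lim_{n\to\infty} P_{nD}\bigl(n[a_1,a_2] \leftrightarrow n[a_3,a_4]\bigr) \;=\; \frac{3\,\Gamma(2/3)}{\Gamma(1/3)^{2}}\;\lambda^{1/3}\;{}_{2}F_{1}\!\Bigl(\tfrac{1}{3},\tfrac{2}{3};\tfrac{4}{3};\lambda\Bigr).$$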
Theorem 4 (Watts' formula, [Dub06, SW11]). Let $D \subsetneq \mathbb{C}$ be a simply connected domain and $\varphi : D \to \mathbb{H}$ a conformal map. Let $a_1, a_2, a_3, a_4$ be ordered points on the boundary of $D$. Then the probability of crossings in both directions is given in terms of the cross-ratio $\lambda$ of (8).
The last theorem we state here (Theorem 5) concerns the expected number of crossing clusters of a rectangle. It was predicted by Cardy [Car01] and by Simmons, Kleban and Ziff [SKZ07]. A proof was given by Hongler and Smirnov in [HS11]. Here $N(nD, a_1, a_2, a_3, a_4)$ denotes the number of crossing clusters, and its limiting expectation is again expressed in terms of the cross-ratio $\lambda$ of (8).
Proof of Theorem 1
Recall from the introduction that Theorem 1 is equivalent to the convergence statements for $L_{\mathbb{H}}(n)$ and $L_{\mathbb{C}}(n)$, with limit $\frac{\sqrt{3}}{4\pi}$ in the half-plane case. Recall the definition (5) of $T(i)$. We begin this section with a lemma which says that, to prove the convergence of $L_G(n)$ as $n \to \infty$, it is sufficient to prove the convergence of $\varepsilon^{-1} E_G[T(i)]$.
Lemma 6
The following inequalities hold. Thus it is enough to prove the corresponding lim sup bound. To this end, note that it is also easy to see from the definition of $M$ that the required estimate holds for fixed $\varepsilon > 0$; for all $\varepsilon > 0$ we have the analogous bound, and the lim sup can be controlled accordingly. This, together with (12), implies (11) and completes the proof of (9). The inequality in (10) follows in a similar way and we omit it.
Proof of Theorem 1 (a)
First note that it is easy to see that $E_{\mathbb{H}}[T(i)]$ can be split into a main term and an error term involving the events $\{T(i) \ge 1\}$. It is well known from standard RSW arguments that $P_{\mathbb{H}}(T(i) \ge 1)$ goes, uniformly in $i$ and $n$, to $0$ as $\varepsilon \to 0$, since the ratio between two consecutive $a(i)$'s goes to $1$ as $\varepsilon \to 0$. Hence the 'error term' (i.e. the second term on the r.h.s. of the equation array above) is negligible w.r.t. the main term (i.e. the first term on the r.h.s.). By this, Lemma 6, the fact that $a(1) \to \infty$ as $n \to \infty$, and the ratio between consecutive $a(i)$'s, it is sufficient to prove the following convergence, where $W_k$ denotes the event that there is an open and a closed path from $(-\infty, 1]$ to $[k, k(1 + \varepsilon)]$ and the closed path is below the open path.
Let $W_k'$ be the event that there is an open and a closed path from $(-\infty, 1]$ to $[k, k(1 + \varepsilon)]$. (So, informally speaking, $W_k'$ is the same as $W_k$ but without the condition on which path is above or below.) Using that, by duality, there is either an open path from $[1, k]$ to $[k(1 + \varepsilon), \infty)$ or a closed path from $(-\infty, 1]$ to $[k, k(1 + \varepsilon)]$, we obtain a relation between these probabilities. The limits as $k \to \infty$ of the first probability on the r.h.s. and of the probability on the l.h.s. are obtained from Theorem 3 and Theorem 4, respectively. Finally, let $\widetilde{W}_k$ denote the event obtained from $W_k$ by replacing 'open' by 'closed' and vice versa. Since $W_k$ and $\widetilde{W}_k$ have the same probability and $W_k' = \widetilde{W}_k \cup W_k$, we obtain a further identity. Since $W_k \cap \widetilde{W}_k$ is contained in the disjoint occurrence of $W_k$ and the event that there is an open or closed path from $(-\infty, 1]$ to $[k, k(1 + \varepsilon)]$, its probability is negligible (as $k \to \infty$ and $\varepsilon \to 0$) w.r.t. that of $W_k$, and we get from (17) and (18) that the probability in question is a constant times $\varepsilon$ plus $o(\varepsilon)$.
As we saw (see the argument above (15)) this proves Theorem 1 (a).
Proof of Theorem 1 (b)
We will bound the relevant probabilities (concerning the full plane) by the probabilities of certain connection events in the half-plane. We do this by cutting along the real line from $-\infty$ up to $a(i+1)$. Let us make the cutting precise.
To bound $P_{\mathbb{C}}(B(i))$ we use the first inequality of Lemma 2 for those $k$ in the definition of $B(i)$ that are 'close to' $1$ or $a(i)$, and the other inequality in that lemma for the other $k$'s. More precisely, we fix a constant $\beta \in (0, 1)$ and let $r(a(i)) := a(i)^{\beta}$. Then $P_{\mathbb{C}}(B(i))$ is at most $4\pi_1(r(a(i)), a(i))$ plus four times a sum over $k = r(a(i)) + 1, \ldots, \tfrac{1}{2}a(i)$ of arm probabilities, where the factor $4$ comes from symmetry considerations. Hence, there exist constants $C_3, C_4 > 0$ such that the resulting bound holds. Note that, since $a(1)$ (the smallest of the $a(i)$'s) tends to $\infty$ as $n \to \infty$, and the bound tends to $0$ as its argument tends to $\infty$, the contribution of $P_{\mathbb{C}}(B(i))$ to the r.h.s. of (9) is $0$.
(…and one preventing it from touching the 'lower piece'; see e.g. Figure 3.) So in this case we have $S(i) = 2$. More generally, a similar observation gives the corresponding bound on $S(i)$, and thus it follows immediately that the desired inequality holds. To complete the proof we will use Theorem 5. Therefore we consider the domain $\mathbb{C} \setminus L(i)$ and scale it by $a(i)$. (As noted before, $a(1)$ goes to $\infty$ as $n \to \infty$.) This gives the conformal rectangle $\mathbb{C} \setminus (-\infty, 1 + \varepsilon)$ with 'corners' $a_1 = 1^+$, $a_2 = 1^-$, $a_3 = 0^-$, $a_4 = 0^+$ (where, for $x < 1 + \varepsilon$, $x^+$ and $x^-$ denote the 'copy' of $x$ in the upper and the lower half-plane, respectively). To apply Theorem 5 we need the cross-ratio, which can be computed as follows. Consider the conformal map $\varphi(z) := i\sqrt{z - 1 - \varepsilon}$, which maps $\mathbb{C} \setminus (-\infty, 1 + \varepsilon)$ onto the upper half-plane. The cross-ratio is
$$\lambda(\varepsilon) = \frac{\bigl(\varphi(1^+) - \varphi(1^-)\bigr)\bigl(\varphi(0^+) - \varphi(0^-)\bigr)}{\bigl(\varphi(1^+) - \varphi(0^-)\bigr)\bigl(\varphi(0^+) - \varphi(1^-)\bigr)}.$$ | 3,514 | 2015-05-29T00:00:00.000 | [
"Mathematics"
] |
The distributions, mechanisms, and structures of metabolite-binding riboswitches
Phylogenetic analyses revealed insights into the distribution of riboswitch classes in different microbial groups, and structural analyses led to updated aptamer structure models and insights into the mechanism of these non-coding RNA structures.
Background
Riboswitches are autonomous noncoding RNA elements that monitor the cellular environment and control gene expression [1][2][3][4]. More than a dozen classes of riboswitches that respond to changes in the concentrations of specific small molecule ligands ranging from amino acids to coenzymes are currently known. These metabolite-binding riboswitches are classified according to the architectures of their conserved aptamer domains, which fold into complex three-dimensional structures to serve as precise receptors for their target molecules. Riboswitches have been identified in the genomes of archaea, fungi, and plants; but most examples have been found in bacteria.
Regulation by riboswitches does not require any macromolecular factors other than an organism's basal gene expression machinery. Metabolite binding to riboswitch aptamers typically causes an allosteric rearrangement in nearby mRNA structures that results in a gene control response. For example, bacterial riboswitches located in the 5' untranslated regions (UTRs) of messenger RNAs can influence the formation of an intrinsic terminator hairpin that prematurely ends transcription or the formation of an RNA structure that blocks ribosome binding. Most riboswitches inhibit the production of unnecessary biosynthetic enzymes or transporters when a compound is already present at sufficient levels. However, some riboswitches activate the expression of salvage or degradation pathways when their target molecules are present in excess. Certain riboswitches also employ more sophisticated mechanisms involving self-cleavage [5], cooperative ligand binding [6], or tandem aptamer arrangements [7].
Many aspects of riboswitch regulation have not yet been critically and quantitatively surveyed. To advance this goal, we have compiled a comparative genomics data set from systematic database searches for representatives of ten metabolite-binding riboswitch classes (Table 1). The results define the overall taxonomic distributions of each riboswitch class and outline trends in the mechanisms of riboswitch-mediated gene control preferred by different bacterial groups. The expanded riboswitch sequence alignments resulting from these searches include newly identified variants that provide valuable information about their conserved aptamer structures. Using this information, we have re-evaluated the consensus secondary structure models of these ten riboswitch classes. The updated structures reveal that certain riboswitch aptamers utilize previously unrecognized examples of common RNA structure motifs as components of their conserved architectures. They also highlight new base-base interactions predicted with a procedure that estimates the statistical significance of mutual information scores between alignment columns.
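The significance-estimation procedure itself is not detailed in this overview; as a minimal sketch of the underlying quantity, the mutual information between two alignment columns (given as equal-length strings, one character per sequence) can be computed as follows. The column strings and the numeric comment are illustrative only.

    from collections import Counter
    from math import log2

    def column_mi(col_i, col_j):
        """Mutual information (bits) between two alignment columns."""
        n = len(col_i)
        assert n == len(col_j) and n > 0
        p_i, p_j = Counter(col_i), Counter(col_j)
        p_ij = Counter(zip(col_i, col_j))
        mi = 0.0
        for (a, b), c in p_ij.items():
            mi += (c / n) * log2(c * n / (p_i[a] * p_j[b]))
        return mi

    # Two perfectly covarying columns: MI equals the column entropy (~1.92 bits here).
    print(column_mi("GGCCAU", "CCGGUA"))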
Riboswitch identification overview
Metabolite-binding riboswitch aptamers are typical of complex functional RNAs that must adopt precise three-dimensional shapes to perform their molecular functions. A conserved scaffold of base-paired helices organizes the overall fold of each aptamer. The identities of bases within most helices vary during evolution, but changes usually preserve base pairing to maintain the same architecture. In contrast, the base identities of nucleotides that directly contact the
target molecule or stabilize tertiary interactions necessary to assemble a precise binding pocket are highly conserved even in distantly related organisms. Additionally, many riboswitches tolerate long nonconserved insertions at specific sites within their structures. These 'variable insertions' typically adopt stable RNA stem-loops that do not interfere with folding of the aptamer core.
Table 1. Sources of riboswitch sequence alignments and molecular structures.
Nearly all of the riboswitches discovered to date are cis-regulatory elements. For example, bacterial riboswitches are almost always located upstream of protein-coding genes related to the metabolism of their target molecules. Therefore, the genomic contexts of putative hits returned by an RNA homology search can be used to recognize legitimate riboswitches even when a search algorithm returns many false positives. Using this tactic, one can iteratively refine the description of a riboswitch aptamer by incorporating authentic low-scoring hits into a new structure model and then re-searching the sequence database.
Several riboswitches were first identified as widespread RNA elements based on the presence of a highly conserved 'box' sequence within their structures. BLAST searches for the B12 box [8], S box [9], and THI box [10] sequences are effective for discovering many examples of the adenosylcobalamin (AdoCbl), S-adenosylmethionine (SAM)-I, and thiamin pyrophosphate (TPP) riboswitches, respectively. Other search techniques score how well a sequence matches a template of conserved bases and base-paired helices that the user manually devises from known examples of the riboswitch aptamer. The RNAmotif program performs this sort of generalized pattern matching [11]. A third strategy computationally defines and then searches for ungapped blocks of sequence conservation that are characteristic of a given riboswitch and spaced throughout its structure [12]. While these methods can be effective, they generally do not fully exploit the information contained in multiple sequence alignments of functional RNA families to efficiently identify highly diverged members.
Covariance models (CMs) are generalized probabilistic descriptions of RNA structures that offer several advantages over other homology search methods [13]. CMs can be directly trained on an input sequence alignment without time-consuming manual intervention. They also provide a more complete model of the sequence and structure conservation observed in functional RNA families that incorporates: first-order sequence consensus information; second-order covariation, where the probability of observing a base in one alignment column depends on the identity of the base in another column; insert states that allow variable-length insertions; and deletion states that allow omission of consensus nucleotides. This complexity comes at a computational cost, but several filtering techniques have recently been developed that make CM searches of large databases practical [14][15][16]. For example, CMs have been used to find divergent homologs of Escherichia coli 6S RNA [17] and define a variety of regulatory RNA motifs in α-proteobacteria [18]. The Rfam database [19] maintains hundreds of covariance models for identifying a wide variety of functional RNAs, including riboswitches.
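The searches in this study predate current tool versions, so the following is only an illustrative sketch of how a CM-based train/calibrate/search loop can be driven today with the Infernal suite (cmbuild, cmcalibrate, cmsearch); the file names are placeholders.

    import subprocess

    def build_and_search(seed_alignment, seq_db, cm_file="model.cm", hits="hits.tbl"):
        # Train a covariance model on a Stockholm-format seed alignment.
        subprocess.run(["cmbuild", cm_file, seed_alignment], check=True)
        # Calibrate E-value statistics (slow, but needed for database searches).
        subprocess.run(["cmcalibrate", cm_file], check=True)
        # Search a sequence database; --tblout writes a parseable hit table.
        subprocess.run(["cmsearch", "--tblout", hits, cm_file, seq_db], check=True)

    # build_and_search("riboswitch_seed.sto", "microbial_genomes.fa")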
In the present study, we used covariance models to systematically search for ten classes of metabolite-binding riboswitches in microbial genomes, environmental sequences, and selected eukaryotic organisms. The riboswitch sequence alignments used to train these CMs were derived from a variety of published and unpublished sources ( Table 1). The genomic contexts of prospective riboswitch hits were examined to confirm that each was appropriately positioned to function as a regulatory element. In general, CMs trained on the input alignments were able to discriminate valid riboswitch sequences from false positive hits on the basis of CM scores alone. The most common exceptions were spuriously high-scoring AU-rich matches to the smaller riboswitch models (for example, the purine riboswitch) and bona fide lowscoring hits with variable insertions at unusual positions in the more structurally complex riboswitch classes.
Prospective riboswitch matches were also examined to ensure that they conformed to known aptamer structure constraints. In certain cases, it was necessary to manually correct portions of the automated sequence alignments defined by the maximally scoring path of each hit through the states of the CM.
For example, CMs model only hierarchically nested base pairs for algorithmic speed [13]. Consequently, the pseudoknotted helices and pairings present in several riboswitches were aligned by hand to achieve the desired accuracy. The automated CM alignments also tend to incorrectly shift nucleotides when deletions of consensus positions result in ambiguity concerning the optimal placement of remaining sequences. The alignments of new RNA structure motifs and base-base interactions described later that were not present in the seed alignments used to train the covariance models were also manually adjusted. Multiple sequence alignments of the resulting curated riboswitch hits are available as Additional data files 1 and 2.
Riboswitch distributions
The phylogenetic distributions of the ten riboswitch classes were mapped from these search results (Figure 1). Members of the TPP riboswitch class are the only metabolite-binding RNAs known to occur outside of eubacteria. TPP riboswitch representatives are found in euryarchaeal, fungal, and plant species. The AdoCbl riboswitch is the most widespread class in bacteria, but TPP, flavin mononucleotide (FMN), and SAM-I riboswitches are also common in many groups. Glycine and lysine riboswitches have more fragmented distributions. They are widespread in certain bacterial groups, but appear to be missing from others. Finally, the glucosamine-6-phosphate (GlcN6P), purine, 7-aminomethyl-7-deazaguanine (preQ1), and SAM-II riboswitches were identified in only a few groups of bacteria.
Figure 1. Riboswitch distributions. The dimensions of each square are proportional to the frequency with which a given riboswitch occurs in the corresponding taxonomic group. A phylogenetic tree with the standard accepted branching order for each group of organisms is shown on the left. For bacteria, this tree is adapted from [92] with the addition of Fusobacteria [93]. On the right is a graph depicting the total number of nucleotides from each taxonomic division in the sequence databases that were searched.
Interestingly, the SAM-I and SAM-II [20]. In contrast, no evidence of recent horizontal transfer was observed in phylogenetic trees of lysine riboswitch aptamers, despite their disjointed distribution across different taxonomic groups [21].
Firmicutes (low G+C Gram-positive bacteria) appear to make the most extensive use of the riboswitch classes examined in this study. Every riboswitch except SAM-II is widespread in this clade, and most aptamer classes occur multiple times per genome. For example, Bacillus subtilis carries at least 29 riboswitches (5 TPP, 1 AdoCbl, 2 FMN, 1 glycine, 11 SAM-I, 2 lysine, 1 GlcN6P, 4 guanine, 1 adenine, and 1 preQ 1 ) controlling approximately 73 genes. Experimental and computational efforts to identify riboswitches have been focused specifically on B. subtilis [22,23], so it is possible that the overrepresentation of these ten riboswitch classes in Firmicutes reflects a discovery bias. Indeed, new computational searches are beginning to identify riboswitch classes that are predominantly used by other groups of bacteria [18,24].
As a whole, γ-Proteobacteria employ a mixture of these ten riboswitch classes that is comparable to the diversity found in Firmicute species. However, individual species usually carry fewer riboswitch classes overall and fewer representatives of each class. For example, E. coli has six riboswitches (three TPP, one AdoCbl, one FMN, and one lysine) from the ten classes examined, which regulate a total of sixteen genes.
Deeply branched bacteria such as Deinococcus/Thermus and Thermotoga species also appear to utilize a variety of riboswitches. However, no riboswitch sequences have yet been identified in Aquifex species, and riboswitches also seem to occur only rarely in Chlamydia species, Cyanobacteria, and Spirochetes. However, the sequence database sizes for many of these bacterial groups are relatively small so the observed frequencies will probably need to be revised as more genomic sequences become available.
As expected, representatives of almost all ten riboswitch classes are found in sequences from shotgun cloning projects that target environments supporting diverse bacterial communities. These sources of additional sequences have been helpful in some cases for defining consensus structure models and adding statistical merit to mutual information calculations (see below). It is notable that glycine and SAM-II riboswitches are unusually common in Sargasso Sea metagenomic sequences [25]. This data set appears to be contaminated with some non-native Shewanella and Burkholderia sequences [26], but the large number of SAM-II matches probably accurately reflects the abundance of α-Proteobacteria in this environment.
Riboswitch mechanism overview
GlcN6P riboswitches are ribozymes that harness a self-cleavage event to repress expression of downstream glmS genes [5]. Members of this class are unique compared to other riboswitches because they adopt a preformed binding pocket for glucosamine-6-phosphate [27,28] and use the metabolite target as a cofactor to accelerate RNA cleavage [28][29][30]. The nine other riboswitch classes studied here utilize ligand-induced changes in 'expression platform' sequences to control a variety of gene expression processes [1]. The architectures of riboswitch expression platforms can be used to predict their gene control mechanisms on a genomic scale, as described below.
Riboswitches typically contain disordered regions in their conserved aptamer cores that become structured upon metabolite binding. These changes may trigger rearrangements in additional expression platform structures located outside of the aptamer, such that two alternative conformations with mutually exclusive base-paired architectures exist for the entire riboswitch. Some riboswitches operate at thermodynamic equilibrium [31]. They are able to interconvert between these ligand-bound and ligand-free structures in the context of the full-length RNA. Regulation by other riboswitches is kinetically controlled [32][33][34][35]. The relative speeds of transcription and co-transcriptional ligand binding dominate a one-time decision as to which folding pathway to follow. The active and inactive conformations of these riboswitches are trapped in the final RNA molecule and do not readily interconvert on a time scale that is relevant to the gene control system.
In most riboswitches, bases from the aptamer's outermost P1 'switching' helix, which is enforced in the ligand-bound conformation, pair to expression platform sequences to form an alternative structure in the absence of ligand (for example, [36,37]). However, some riboswitches harness shape changes elsewhere in their aptamers to regulate gene expression. AdoCbl riboswitches usually rely on the ligand-dependent formation of a pseudoknot between a specific C-rich loop and sequences outside the aptamer core to exert gene control [20,38,39]. SAM-II aptamers enforce a distal pseudoknot to interface with their expression platforms [18], and preQ1 riboswitches sequester conserved 3' tail sequences upon metabolite binding [40].
Riboswitches can use ligand-induced structure changes to control gene expression in a variety of contexts. For example, the TPP riboswitches found in eukaryotes reside in introns located near the 5' ends of fungal pre-mRNAs [41][42][43] or in the 3' UTRs of plant pre-mRNAs [41]. Ligand binding modulates splicing of these introns, generating alternatively processed mRNAs that are expressed at different levels. In each example studied, a portion of the P4-P5 stem region pairs near a 5' splice site, and this pairing is displaced when TPP is bound [43] (A Wachter, M Tunc-Ozdemir, BC Grove, PJ Green, DK Shintani, RRB, unpublished data). In contrast, almost all bacterial riboswitches occur in the 5' UTRs of mRNAs. Metabolite binding to these riboswitches generally regulates either transcription or translation of the encoded genes.
Bacterial riboswitches that regulate transcription usually control the formation of intrinsic terminator stems located within the same 5' UTR. Intrinsic terminators are stable GC-rich stem-loops followed by polyuridine tracts that cause RNA polymerase to stall and release the nascent RNA with some probability [44,45]. Certain glycine [6], adenine [46], and lysine [21] riboswitches with ON genetic logic use structural rearrangements triggered by metabolite binding to bury pieces of terminator stems in alternative pairing interactions. However, most riboswitches controlling transcription are OFF switches that add an extra folding element to reverse this logic. Metabolite binding to these riboswitches disrupts an antiterminator, which normally sequesters bases required to form the terminator stem, allowing the terminator to form and repress gene expression. Similar antiterminator/terminator trade-offs occur in bacterial RNAs regulated by protein- or ribosome-mediated transcription attenuation mechanisms [47].
Bacterial riboswitches that regulate translation typically use ligand-induced structure changes to block translation initiation. Unlike riboswitches with transcription control mechanisms, which require very specific terminator structures in their expression platforms, the RNA structures that prevent translation initiation may be more varied. Sometimes, they rely on simple hairpins that sequester the ribosome binding site (RBS) of the downstream gene in a base-paired helix. In these cases, a riboswitch with OFF genetic logic can harness metabolite binding to disrupt a mutually exclusive antisequestor pairing, allowing the sequestor hairpin to form and attenuate translation. More convoluted base-pairing tradeoffs and shape changes may operate in other expression platforms to alter the efficiency of translation initiation in response to ligand binding.
Two variants of these mechanisms that dispense with or combine the elements of a typical bacterial riboswitch expression platform are worth noting. Some riboswitches bury the RBS of the downstream gene within their conserved aptamer cores [48,49]. Thus, ligand binding directly attenuates translation without the involvement of any additional expression platform sequences. Other riboswitches regulate the formation of a transcription terminator located so close to the adjacent open reading frame that its RBS resides within the 3' side of the terminator hairpin [48]. Riboswitches with these dual expression platforms could attenuate transcription and, if termination does not occur, could also inhibit translation.
Metabolite-dependent inhibition of ribosome binding has been proven in vitro for the E. coli AdoCbl riboswitch located upstream of the btuB gene [50]. In addition, in vivo expression assays using translational fusions between AdoCbl riboswitches and reporter genes indicate that control of translation is occurring [38]. However, other co-or post-transcription mechanisms might also contribute to the observed gene expression changes. For example, AdoCbl riboswitches from E. coli and B. subtilis can be cleaved by RNase P [51]. Such findings raise the interesting possibility that differential RNA processing or degradation caused by ligand-induced conformational changes might be the primary mechanism by which some riboswitches regulate gene expression.
There is one interesting instance where a Clostridium acetobutylicum SAM-I riboswitch appears to regulate protein expression through an antisense RNA intermediate [52]. This riboswitch is located immediately downstream of, and in the opposite orientation to, an operon encoding a putative salvage pathway for converting methionine to cysteine. It has an expression platform, consisting of a typical terminator/antiterminator arrangement, with OFF genetic logic. Presumably, when SAM (and consequently methionine) pools are low, transcription of the full-length antisense RNA causes inhibition and degradation of the sense mRNA, as is observed in some bacterial regulatory systems that employ small RNAs [53]. When SAM levels are high, the SAM-I riboswitch will prematurely terminate the antisense transcript, allowing expression of this operon to recycle excess methionine.
In some instances, riboswitches or their components are found in tandem arrangements. Almost all glycine riboswitches consist of two aptamers that regulate a single downstream expression platform [6]. In the genomic sequences searched here, 88% of the mRNA leaders containing one glycine aptamer also carry a second aptamer. Cooperative binding of two ligand molecules by these glycine riboswitches yields a genetic switch that is more 'digital', that is, more responsive to smaller changes in ligand concentration, than a single aptamer.
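To illustrate why cooperativity produces a more digital response, consider the generic Hill relationship for fractional aptamer occupancy (this is only a textbook illustration, not a model fitted in this study):
\[ f([\mathrm{L}]) \;=\; \frac{[\mathrm{L}]^{n}}{K^{n} + [\mathrm{L}]^{n}} \]
For a non-cooperative aptamer (n = 1), raising occupancy from 10% to 90% requires an 81-fold increase in ligand concentration, whereas with n = 2 a 9-fold increase suffices, so the switch responds over a much narrower concentration window.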
Far less common are tandem arrangements of other riboswitch classes such as TPP [7,54,55] or AdoCbl [55]. Fewer than 1% of the UTRs regulated by these riboswitch classes contain multiple aptamers. In these cases, each aptamer appears to function as an independent riboswitch that regulates its own expression platform to yield a more digital, compound genetic switch [7]. Also rare are tandem arrangements wherein representatives of two different riboswitches are in the same UTR. In the metE mRNA leader from Bacillus clausii, a SAM-I and an AdoCbl riboswitch independently control transcription termination to combinatorially regulate expression of this gene in response to two different metabolite inputs [55].
Riboswitch mechanisms
A decision tree was established for computationally classifying the gene control mechanisms of microbial riboswitches ( Figure 2). The five categories assigned are: transcription attenuation; dual transcription and translation attenuation; translation attenuation; direct translation attenuation; and antisense regulation. The same mechanisms have been predicted for TPP [48], AdoCbl [20], FMN [56], and lysine [21] riboswitches in previous comparative studies. The use of the term attenuation here does not imply that a switch operates with OFF genetic logic, that is, gene expression may be attenuated in the ligand-free state and relieved by metabolite binding. Overall, computational assignments by this procedure have an accuracy of 88% when compared to expert predictions of TPP riboswitch mechanisms [48].
It is important to note that the decision tree does not explicitly predict RBS-hiding structures in expression platforms. Rather, it assumes that control of translation initiation is the most likely mechanism for riboswitches not classified into the other categories. It is possible that these riboswitches could operate by mechanisms other than the five assigned by this procedure (as described above). Another caveat is that this prediction scheme considers only intrinsic terminator structures consisting of RNA stem-loops followed by polyuridine tails. These are currently the only structures that riboswitches with transcription attenuation mechanisms are known to regulate. However, some bacteria appear to be able to utilize other structures that may lack a canonical U-tail or consist of tandem hairpins to terminate transcription [57].
Mapping riboswitch mechanism predictions onto a phylogenetic tree (Figure 3) reveals that transcription attenuation dominates in Firmicutes and that translation attenuation is most common in other bacterial groups. The phylogenetic distribution of SAM-II riboswitch mechanisms is an exception. It is the only riboswitch aptamer that appears to be most often associated with regulatory transcription terminators in α- and β-Proteobacteria, although the mechanisms by which SAM-II aptamers control gene expression have not yet been experimentally established [18]. Transcription attenuation mechanisms may also be generally overrepresented in Fusobacteria, δ/ε-Proteobacteria, Thermotogae, and Chloroflexi species, although smaller sample sizes make these conclusions less certain.
Mechanisms that rely on sequestering the RBS within the conserved aptamer core are most common for the TPP, preQ1, and SAM-I riboswitches. In the first two cases, purine-rich conserved regions near the 3' ends of these riboswitches substitute for RBS sequences. In SAM-I riboswitches, the RBS is incorporated into the 3' side of the P1 stem. Other riboswitch classes also have purine-rich conserved regions near their 3' ends with consensus sequences close to ribosome binding sites. It is not clear why direct regulation of translation attenuation is not more common in these other classes. Perhaps access to the RBS-like sequences in these aptamers is not modulated by ligand binding. Riboswitch regulation by direct translation attenuation appears to be most frequent in Actinobacteria and Cyanobacteria, except in the case of the preQ1 riboswitch, for which this mechanism is unusually prevalent even in Firmicutes and Proteobacteria.
There do not appear to be any additional examples of riboswitches positioned for antisense regulation in this data set. An antisense arrangement may be rare because it inverts the gene control logic of the riboswitch and requires the evolutionary maintenance of a second promoter. A handful of high-scoring hits were found that appear to be functional aptamers even though they are not located upstream of genes related to the cognate metabolite. It is possible that these riboswitches affect their target genes by regulating the production or function of trans-acting antisense RNAs or that they have been recently orphaned by genomic rearrangements and are now pseudo-regulatory sequences.
Evaluating structure models
Constructing an RNA secondary structure model using phylogenetic sequence data requires identifying possible base-paired stems and adjusting a sequence alignment to determine whether each proposed stem appears reasonable for all representatives. This recursive refinement process has been used to create detailed comparative models of many functional RNA structures that accurately reflect later genetic, biochemical and biophysical data. However, the presence of stretches of unvarying nucleotides within an RNA structure, the tolerance of stems to some non-canonical base pairs or mismatches, and the non-negligible frequency of sequencing errors in biological databases can introduce enough uncertainty that multiple structures may seem to agree with a sequence alignment and incorrect base-paired elements may be proposed. This problem is compounded if the multiple sequence alignment is incomplete and does not yet capture all of the variation that truly exists at each nucleotide position.
Inconsistencies and ambiguities in some riboswitch aptamer models motivated us to evaluate the statistical support for base pairs in their proposed structures. We chose to use mutual information (MI) scores [58] to mathematically formalize the interdependence between sequence alignment columns that is indicative of base interactions. MI is a normalized version of covariance that represents the amount of information (in bits) gained about what base occurs at a given position from knowing the identity of a base at another position. The prediction of RNA secondary structures and tertiary interactions from covariation in sequence alignments has a long history, and the nuances of calculating and interpreting MI scores have been comprehensively covered elsewhere [59,60].
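In outline, the standard per-column-pair MI score referenced here takes the form
\[ \mathrm{MI}(i,j) \;=\; \sum_{x,y} f_{ij}(x,y)\,\log_{2}\!\frac{f_{ij}(x,y)}{f_{i}(x)\,f_{j}(y)} , \]
where f_i(x) is the frequency of base x in column i and f_ij(x,y) is the joint frequency of observing x and y together in columns i and j. The score is zero when the two columns vary independently and increases as their base identities become correlated.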
Fundamentally, columns of interacting bases must be correctly aligned and there must be variation within each column (that is, it cannot be completely conserved) in order to detect mutual information. Even when these preconditions are met, there are two difficulties with directly comparing MI scores to determine which columns in a sequence alignment truly covary. First, sequence conservation derived from the shared evolutionary histories of sequence subsets in an alignment may result in a high residual background MI score between many columns whether or not they are functionally linked. Second, alignments with fewer sequences will have more column pairs with elevated MI scores simply by chance. Simulations addressing the expected magnitudes of these two sources of error in different data sets have been explored recently in the context of protein sequence alignments [61].
In order to better gauge whether MI scores support proposed base interactions in an RNA alignment, we developed a procedure for empirically estimating their statistical significance ( Figure 4). First, a phylogenetic tree is inferred from the observed RNA sequence alignment according to a model that assumes independent evolution at each position and allows for varying per-column mutation rates. Then, resampled alignments with the same topology, branch lengths, and evolutionary rates are generated. MI scores between columns in these test alignments reflect the null hypothesis that there is no covariation between positions. They implicitly correct for the evolutionary history and sample size of the real sequence alignment. Therefore, the p value significance for an observed MI score in the real alignment is the fraction of test alignments with higher MI scores between these two columns.
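The following is a minimal sketch of this empirical test. The `simulate_null_alignment` and `mutual_information` callables are hypothetical placeholders for the tree-based simulator and MI calculation described later in the Mutual information significance section; the actual pipeline used Rate4Site and in-house Perl scripts.

```python
def empirical_mi_pvalue(alignment, col_i, col_j,
                        simulate_null_alignment, mutual_information,
                        n_resamples=10000):
    """Estimate the significance of the MI between two alignment columns.

    simulate_null_alignment() must return one resampled alignment generated
    under the null model: independent evolution at every position on the
    same tree, branch lengths, and per-column rates as the real data.
    """
    observed = mutual_information(alignment, col_i, col_j)
    higher = 0
    for _ in range(n_resamples):
        null_alignment = simulate_null_alignment()   # no covariation by construction
        if mutual_information(null_alignment, col_i, col_j) > observed:
            higher += 1
    # p value = fraction of null alignments whose MI exceeds the observed MI
    return higher / n_resamples
```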
Riboswitch structures
The consensus secondary structure models of the ten riboswitch classes (Figure 5) have been updated to reflect information from newly identified aptamer variants. The purine, TPP, SAM-I, and GlcN6P riboswitch consensus structures have been drawn in accordance with their molecular structures (references in Table 1). Other riboswitch structures have been revised to be consistent with the new predictions of structure motifs and base-base interactions explained below. In all cases, previous numbering schemes for the paired helical elements (designated P1, P2, P3, and so on, beginning at the 5' end of each aptamer) have been maintained, even when these stems do not occur in a majority of the sequences in the updated alignment. Newly discovered paired elements that do not appear in most examples of a riboswitch aptamer have not been assigned numbers.
The results of the mutual information analysis are shown superimposed on the consensus riboswitch structures. Most base-paired helices are supported by at least one contiguous base pair with a highly significant MI (p < 0.001), and almost all contain a base pair with at least a marginal MI significance (p < 0.01). No significant MI scores are present within the P2.1 and P2.2 stems observed in the crystal structures of the GlcN6P-dependent ribozyme [28,30]. However, most of the predicted base pairs in the P2.1 and P2.2 helices are between highly conserved bases that may not vary enough to produce significant covariation with their pairing partners. The MI analysis also does not support an alternative P1.1 pseudoknot (not shown) proposed on the basis of biochemical experiments where the register of the regions involved in making the P2.1 pairing is slightly shifted [29,62,63].
MI significance scores do resolve a conflict between two pairing models that have been proposed for the highly conserved B12 box of the AdoCbl riboswitch (Figure 6). One model posits that a 'facultative stem loop' forms by pairing nucleotides within the B12 box [20]. The other model proposes long-range pairings between portions of the B12 box and nucleotides more distant in RNA sequence [39]. There is only a single, marginally significant MI score that supports the formation of the 'facultative stem loop', even though this region was correctly aligned to optimally discover such interactions. The MI analysis strongly supports several base pairs in the alternative proposed structure wherein portions of the conserved B12 box form the 3' sides of the short P3 and P6 helical stems.
RNA structure motifs
Several riboswitches contain common RNA structure motifs that are recognizable from their consensus features. A GNRA tetraloop [64] that favors a pyrimidine at its second position caps P4a of most GlcN6P ribozymes. A K-turn [65,66] between P2 and P2a is conserved in SAM-I riboswitch aptamers [66]. The asymmetric bulge between helices P2a and P2b in the lysine riboswitch also fits a K-turn consensus in most sequences [67], but a number of variants appear to lack this motif. A sarcin-ricin motif [68] (a specific type of loop E motif) in the asymmetric bulge between the P2 and P2a helices of the lysine riboswitch is more highly conserved [37,67].
We also find examples of other RNA structure motifs that have not previously been reported in these riboswitch classes. The consensus features of the three terminal loops capping P2, P3, and P5 in the FMN riboswitch and the P4 loop and P6-P7 bulge in the AdoCbl riboswitch are remarkably similar. Each has two closing G-C base pairs with a strand bias, a possible U-A pair separated from the helical stem by two bulged nucleotides on the 3' side, and a terminal GNR triloop sequence that is sometimes interrupted at a specific position by an intervening base-paired helix. These characteristics strongly suggest that they adopt T-loop structures (named for the T-loop of tRNA) where the U-A forms a key trans Watson-Crick/Hoogsteen pair [69].
Sequence conservation in the UNR loop that closes the P5 stem in the TPP aptamer suggests that it forms a conserved U-turn [70]. As expected, there is a sharp reversal of backbone direction following this uridine, subsequent bases stack on the 3' side of the loop, and the uracil base can hydrogen bond with the phosphate group 3' of the third U-turn nucleotide in the X-ray crystal structures of E. coli [71,72] and Arabidopsis thaliana [73] riboswitches. Also, in the TPP aptamer, the conserved UGAGA sequence 3' of the P3 helix fits the UGNRA consensus for a type R1 lone-pair triloop [74]. The crystal structures confirm that this motif is present with the characteristic trans Watson-Crick/Hoogsteen U-A closing pair around the triloop. Commonly, a tertiary interaction between the triloop G base and an outside A leads to a composite GNRA tetraloop structure. However, in this case, the pyrimidine ring from the TPP ligand intercalates into the triloop at an equivalent position.
New base-base interaction predictions
In addition to supporting almost all of the helical elements in the riboswitch structure models, the MI analysis predicts eleven additional base-pairing interactions (Figures 5 and 7). Significant MI scores between two alignment columns should be interpreted with caution. They represent a statistical correlation and do not necessarily imply hydrogen bonding between nucleobases. We ignored correlations between adjacent nucleotides, which probably represent favored base-stacking patterns in helices, as well as column pairs with many gaps, where MI scores can be dominated by the presence or absence of nucleotides rather than their base identities. It is also possible to observe high mutual information between two bases that do not interact if several separate structure motifs with their own specific sequence requirements can substitute for each other in a functional RNA, as is seen for GNRA, UNCG, and CUUG tetraloops in 16S rRNA [59].
Furthermore, the estimates of MI significance rely on a phylogenetic tree reconstruction method that may not adequately capture the evolution of these RNA sequences, especially for the shorter riboswitch alignments. Even assuming that the estimated p values are completely accurate, there are 4,950 possible combinations of columns in an alignment with 100 columns (100 × 99/2 = 4,950), which implies that, on average, about 5 pairs with an MI significance of ≤0.001 will be observed by chance. Some columns that are known to be base paired do not have MI scores this significant. In light of this noisy background, we manually screened MI predictions and concentrated on interacting columns that seem to have structural relevance.
The identities of interacting bases in a functional RNA are constrained during evolution. They can mutate only to other base pairs that preserve the local geometry of the sugar-phosphate backbone and any hydrogen bonds that are important for maintaining structure and function. Generally, only one of the three planar edges of a nucleobase participates in any given interaction: the Watson-Crick face (WC), Hoogsteen face (H), or sugar edge (SE). A systematic study of RNA structures has produced isostericity matrices [75] that tabulate which of the possible 16 base pairs should be interchangeable (in terms of C1'-C1' distances) when two nucleobases are interacting between different combinations of these three base edges and when the glycosidic bonds on both sides of the pair are cis or trans with respect to each other. The pairs of bases conserved at some of the new correlated positions in riboswitches suggest unusual non-Watson-Crick interactions, and this isostericity framework can be used to tentatively assign possible geometries to the newly predicted base pairs (Figure 7).
In the TPP riboswitch, there is significant MI between the two bases directly 5' of P3 and 3' of P3a that could bridge this helical junction. This correlation was highly significant (p = 0.0002) in an alignment of all TPP riboswitch sequences. However, re-examination of the alignment showed that the predominant A-G and U-A pairs mainly occurred in the 552 sequences that have the optional P3a stem-loop. In fact, there is no correlation between these columns in the remaining 355 sequences that lack P3a. Exchange of U-A and A-G pairs is most consistent with a cis H/WC edge interaction between these two bases. These pairs are also isosteric in a trans H/H geometry, but this configuration involves only a single hydrogen bond, and there are four other isosteric nucleobase combinations that are not observed. Both pair geometries imply that either the sugar-phosphate backbones of the interacting bases are in a parallel orientation or that they are anti-parallel, with one of the bases adopting a rare syn glycosidic bond rotation. It may be necessary for these bases to assume an unusual geometry to accommodate the P3a helix at this location.
The molecular resolution structures of TPP riboswitches do not bear on this prediction, as each of these constructs lacks P3a [71][72][73]. On the basis of the consensus structure, it is possible to further predict that when the P3a helix is present it will coaxially stack on the P2 helix as part of a type C three-way helical junction [76] wherein P3a, P2, and P3 are assigned P1, P2, and P3 roles, respectively. The molecular structures show a diagnostic feature of this configuration even in the absence of P3a: the J13 motif sequence (corresponding to the conserved UGAGA) forms a pseudohairpin that makes adenine base contacts to the minor groove of the motif's P1 helix (P2 of the riboswitch). Furthermore, there is space in the crystal structure to accommodate P3a cohelically stacking on P2, and this would place P3a parallel to and offset from P3, as is expected for this common three-way junction geometry.
Three new base interactions are predicted in AdoCbl riboswitch aptamers. A lone WC base pair (p < 0.0001) seems to enclose the conserved A-rich sequence between the P2 and P3 helices. A highly significant MI score (p < 0.0001) also supports a WC pair with purine/pyrimidine strand bias between the nucleotide directly 3' of the P4 helix and a position within the two-nucleotide 3' bulge of the P6-P7 T-loop motif. The adjacent nucleotides in this strand and the T-loop bulge could form a highly conserved, cohelical C-G base pair. Similar long-range Watson-Crick base-pairing interactions to these two bulged nucleotides are common with 'type-II' T-loops [69]. The final new prediction in the AdoCbl riboswitch is a non-canonical G-A or A-G pair (p = 0.0001) that probably assumes a cis WC/WC geometry to continue base stacking with the P6 helix. These pairs are also isosteric in a cis H/H geometry, but this geometry seems less likely to be conserved because it involves only a single hydrogen bond.
Riboswitch aptamer structures
The FMN riboswitch may contain a strikingly similar T-loop interaction. The nucleotide directly 3' of its P5 helix can form a Watson-Crick pair (p = 0.009) with a pyrimidine/purine strand bias to the 3' bulge of the T-loop motif that caps P3. An adjacent G-C base pair is also possible here between highly conserved nucleotides in the strand and T-loop bulge. In both the AdoCbl and FMN riboswitches, the stem-loops adjacent to this predicted interaction have exactly five paired nucleotides and are capped by a second T-loop motif. Although the second T-loop does not seem to be directly relevant to this predicted pairing interaction, the double T-loop substructure that these riboswitches have in common suggests that significant similarity exists between their overall tertiary folds even though they recognize very different ligand molecules.
The MI analysis suggests two new base-base interactions in the glycine riboswitch. The first is a WC pair (p = 0.005) with purine/pyrimidine strand bias at the base of the P2 stem of the first aptamer. If this pair cohelically stacks with the P2 stem, then it would often require a bulged nucleotide on the 5' side of the composite helix. The second interaction is a predicted G-G or A-A homopurine pair (p = 0.002) that might adopt a cis bifurcated geometry within the central bulge of the second aptamer. Bifurcated pairs hydrogen bond between an exocyclic functional group on one base and the edge of the other base, and they are consequently intermediate between two edge geometries (possibly cis WC/WC and trans WC/H in this case). If this pair forms, it suggests that the two bases on each strand between it and the P1 stem may form G-A and A-G pairs. Both of these putative interactions are maintained in the opposite aptamer of the glycine riboswitch. However, the nucleotides at the corresponding positions are less variable, which may explain why they were not detected a second time by the MI analysis.
Two new base-pairing contacts are predicted for SAM-I riboswitches. The first occurs at the end of the P2 helix adjacent to the conserved G-A and A-G pairs of the K-turn motif. This pair has a highly significant MI score (p = 0.0006) and mainly varies from G-A to C-C, which is most compatible with a trans SE/H base interaction within this cohelical stacking context. Noncanonical pairs with this configuration are known to occur frequently adjacent to K-turns in other functional RNA structures [77]. The second predicted interaction (p = 0.0003) is an unexpected long-range cis WC/WC base pair between the base directly upstream of the 5' side of the P2b pseudoknot and the base directly upstream of the P1 3' strand.
After originally discovering these new interactions from sequence analysis, we were able to verify that both interactions occur with the predicted configurations in the X-ray crystal structure of a minimized version of the Thermoanaerobacter tengcongensis metF SAM-I riboswitch [78].
Figure 6. Comparison of B12 box structure models. In addition to the model of the AdoCbl riboswitch aptamer structure presented here [39], an alternative model that folds the highly conserved B12 box sequence (highlighted in red) into a 'facultative stem-loop' has been proposed [20]. The core of the AdoCbl riboswitch aptamer is shown with abbreviated peripheral helices and without the optional P8-P10-P11 domain for comparison with the alternative secondary structure model. The upper model is supported by multiple base pairs with significant MI scores between B12 box bases and remote positions. In it, a portion of the B12 box also forms part of an internal T-loop motif between P6 and P7. Each diagram uses the symbols described in the legend to Figure 5.
The MI analysis predicts two new base-base interactions in the SAM-II riboswitch. A homopurine G-G or A-A pair (p = 0.0002) could form between two positions in the bulge between P1 and the 5' strand of the P2 pseudoknot. This pair may adopt a cis bifurcated geometry. A Watson-Crick base pair (p < 0.0001) may also exist between the last nucleotide in the central loop that is contained within the P1 stem and a downstream position. This pair could be extended into a short helical element (P1a) if the adjacent, conserved C-G and G-C base pairs also form canonical WC pairs and an intervening base is bulged out.
Conclusion
The ten metabolite-sensing riboswitch classes surveyed here are widespread and versatile gene control elements. The conserved secondary structure models of these riboswitch aptamers have been revised to include information from additional sequence variants. These models incorporate newly recognized RNA structure motifs, including a double T-loop substructure that is conserved in AdoCbl and FMN aptamers, and specify new sites where the insertion of unconserved RNA domains is possible. Furthermore, an analysis of mutual information scores using an evolutionarily informed background model has enabled the prediction of new base-base interactions in several riboswitch aptamers. These refinements should improve the accuracy of future computational searches for riboswitches as the automated annotation of functional RNAs in genomic sequences becomes more routine [19]. They will also inform and validate ongoing efforts to determine the molecular resolution structures of riboswitch aptamers.
It is believed that some metabolite-binding riboswitch classes may be descended from the RNA World [79] and that others may be more recent evolutionary innovations [80], but the exact provenance of each riboswitch class is unclear. Significant uncertainty also remains about what physiological and evolutionary forces affect riboswitch use by modern organisms. Particularly, there are unexplained differences in the distributions and preferred regulatory mechanisms of riboswitches across contemporary bacteria. Riboswitches found in Firmicutes (low G+C Gram-positive bacteria) predominantly regulate transcription attenuation, whereas translation attenuation mechanisms are most prevalent in other groups. Overall, riboswitches also appear to be more common in Firmicutes than other bacterial groups.
One of the more interesting aspects of the riboswitch phylogenetic profile is that it outlines gaps and holes in the known distributions of riboswitch classes. Some of these apparently vacant regulatory niches may be occupied by regulatory proteins that fulfill the same role or by extreme structural variants of these riboswitch classes that are not detectable with current RNA homology search techniques. Other gaps could harbor new aptamer classes that recognize the same metabolite as a known riboswitch class. The discovery of SAM-II riboswitches in α-Proteobacteria [18], which are almost devoid of SAM-I riboswitches, sets a precedent for this latter scenario. The existence of a third SAM riboswitch in some lactic acid bacteria species [81], a subdivision of the Firmicutes, suggests that new riboswitch classes may occupy empty regulatory niches that exist at an even finer taxonomic resolution.
Computational analysis
In-house Perl scripts were used to organize the execution of other software tools, compute various statistics, and maintain local relational databases of genome and gene information.
Many of these scripts rely on Bioperl [82], and the Bio::Graphics module was particularly useful for visualizing the genomic contexts of riboswitch matches.
Riboswitch identification
Covariance models were trained on sequence alignments adapted from various sources (Table 1) using the Infernal software package (version 0.55) [83]. Heuristic filtering techniques [16] were used to accelerate CM searches of microbial sequences in the RefSeq database (version 12) [84] and environmental shotgun sequences from an acid mine drainage community [85], the Sargasso Sea [25], and Minnesota soil and whale fall sites [86]. CM searches for TPP riboswitches were also conducted against the plant and fungal portions of the RefSeq database (version 13).
The regulatory potentials of putative riboswitch aptamers were assessed by examining their genomic contexts. To uniformly predict gene functions, protein domains were assigned to COGs (orthologous gene clusters) [87] using RPS-BLAST and scoring matrices from the Conserved Domain Database (CDD) [88]. The plausibility of putative aptamer structures was assessed by computationally aligning hits to the original CM with Infernal and manually examining divergent RNA structures. Using these two complementary criteria, we established trusted CM score cutoffs. All hits in the microbial RefSeq database above these thresholds were judged to be functional riboswitches. Since gene context information is not available for most environmental sequences, hits from these data sets were included only if they had CM scores above the trusted threshold. Additional low-scoring sequences from the RefSeq database were also included when their genomic contexts and alignments strongly indicated that they were functional riboswitches.
Figure 7. New base-base interaction predictions. For each numbered and asterisked prediction in Figure 5, the statistical significance (p value) of the mutual information between the two alignment columns is shown, followed by the relative frequencies with which specific combinations of bases are observed in those columns. Base pair geometries and isostericity groups compatible with the asterisked pairs are described in more detail elsewhere [75]. These descriptions include the relative orientations of the glycosidic bonds across the pair (cis or trans), the edges of each base that interact (WC, Watson-Crick; H, Hoogsteen; SE, sugar edge; bifurcated, intermediate between two edges), and the relative backbone strand geometry (parallel or anti-parallel), assuming both glycosidic bonds are in default anti conformations.
To verify that this approach efficiently recovers known riboswitches, the final results were compared to a list of TPP riboswitches compiled in a comparative genomics analysis of thiamin metabolic genes and this regulatory RNA element [48]. The new searches successfully found all TPP riboswitches that had been previously identified in the set of complete microbial genomes analyzed in both studies. They also discovered a small number of TPP riboswitches upstream of thiamin-related genes (for example, a pnuC homolog in Helicobacter pylori and thiM in Lactococcus lactis) in genomes examined by the former study that had not yet been reported.
For the glycine riboswitch, a single aptamer covariance model and a tandem model containing both the first and second aptamers were used to separately identify matches. Every aptamer that is part of a tandem configuration was found by the single aptamer CM search, and cases of lone aptamers were noted. For consensus structure and MI calculations only the tandem glycine aptamer alignment was considered, but the complete set of lone and tandem aptamer glycine riboswitches were included in the expression platform analysis. Expression platform counts for other riboswitch classes that rarely occur in tandem were not corrected.
Mechanism classification
Expression platforms were classified according to the scheme in Figure 2 for a subset of the riboswitch matches found in complete and unfinished microbial genomes. Aptamer sequences with more than 95% pairwise identity at reference columns (positions where ≥50% of the weighted sequences in the alignment do not contain a gap) were omitted to avoid biasing statistics with duplicate sequences. Riboswitches with suspect gene annotations where >60 nucleotides (nt) of an open reading frame (ORF) on the same strand overlapped the aptamer or >700 nt separated the aptamer and the nearest downstream ORF were also screened out. Most of these cases appear to result from incorrect start codon choices, overpredictions of hypothetical ORFs, or missing annotation of real genes. The remaining sequences constituted the expression platform data set, and sequences beginning at the 5' end of each aptamer and continuing through the first 120 nt of the downstream ORF were extracted for further analysis.
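As an illustration of these screening criteria, the sketch below applies the two gene-context filters described above; the argument names are hypothetical and invented for clarity, and near-duplicate aptamers (>95% pairwise identity at reference columns) are assumed to have been removed beforehand.

```python
def passes_context_filters(orf_overlap_nt, aptamer_to_orf_nt):
    """Screen a riboswitch hit for suspect gene annotations (sketch).

    orf_overlap_nt    : nucleotides of a same-strand ORF that overlap the aptamer
    aptamer_to_orf_nt : distance from the aptamer to the nearest downstream ORF
    """
    if orf_overlap_nt > 60:       # aptamer buried in an annotated ORF: likely a bad start codon
        return False
    if aptamer_to_orf_nt > 700:   # aptamer too far from any annotated downstream gene
        return False
    return True

# Sequences passing the filters are truncated to the aptamer plus the first
# 120 nt of the downstream ORF before expression platform analysis.
```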
Riboswitches where the downstream gene was on the opposite strand were examined as candidates for antisense regulation. Other riboswitches were classified as directly regulating translation initiation when the downstream gene's start codon was within 15 nt of the end of the conserved aptamer core structure (usually the P1 paired element). The remaining expression platforms were scanned with the local RNA secondary structure prediction program Rnall (version 1.1) [89] for intrinsic transcription terminators with a scanning window of 50 nt, a U-tail weight threshold of 4.0, a U-tail pairing stability cutoff of -8.3 kcal/mol, and default settings for other parameters. Riboswitches with a terminator predicted in their expression platform sequence were assigned transcription attenuation mechanisms. These riboswitches were classified as also regulating translation if the distance between the terminator hairpin and the gene's start codon was no more than 10 nt. Expression platforms that did not match any of the above criteria were assumed to employ translation attenuation mechanisms.
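A minimal sketch of this classification logic follows; the argument names are invented for illustration, and terminator prediction is assumed to have been performed separately (with Rnall in the study).

```python
def classify_mechanism(same_strand, start_codon_offset_nt,
                       has_terminator, terminator_to_start_nt):
    """Assign one of the five mechanism categories described in the text (sketch).

    same_strand            : True if the nearest downstream gene is on the riboswitch strand
    start_codon_offset_nt  : distance from the end of the conserved aptamer core to the start codon
    has_terminator         : True if an intrinsic terminator is predicted in the expression platform
    terminator_to_start_nt : distance from the terminator hairpin to the start codon
    """
    if not same_strand:
        return "antisense regulation"
    if start_codon_offset_nt <= 15:
        return "direct translation attenuation"   # RBS lies within the aptamer core
    if has_terminator:
        if terminator_to_start_nt <= 10:
            return "dual transcription and translation attenuation"
        return "transcription attenuation"
    return "translation attenuation"              # default category
```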
Rnall and distance parameters were calibrated by comparing expression platform predictions to expert predictions for a large and phylogenetically diverse collection of TPP riboswitches [48]. Rnall correctly predicts 46 out of 52 terminators in this data set with only 3 predictions of terminators in sequences not manually evaluated as containing a terminator (a sensitivity of 88% and an accuracy of 94%). The three false positives resemble terminators and may be functional, whereas the terminators that Rnall misses usually have large hairpins with poor thermodynamic stabilities. Overall, the decision tree classifies 159 out of 180 TPP riboswitch expression platforms (88%) correctly into the category assigned in the control set.
Consensus secondary structures
We manually adjusted the covariance model alignments of riboswitch aptamers while refining their consensus secondary structures. In particular, bases taking part in pseudoknotted pairings that cannot be represented by CMs were shifted to accurately represent these interactions. Bases flanking gapped consensus columns, which are sometimes ambiguously spread out across many possible positions by the alignment algorithm, were also systematically condensed into a minimum number of overall consensus columns. As new structure motifs and base-base interactions became evident, the alignments were adjusted to reflect these new constraints. Riboswitch sequences in the final alignments were weighted using Infernal's internal implementation of the GSC algorithm [90] to reduce biases from duplicate and similar sequences before calculating consensus structure statistics.
Mutual information significance
Duplicate sequences were purged and columns with >50% gaps were removed from riboswitch alignments prior to the MI analysis, and, if necessary, alignments were further pruned to the 300 most diverse sequences (as judged by pairwise base differences). A customized version of the program Rate4Site (version 2.01) [91] with modified output options was used to simultaneously estimate distances and per-column rates of evolution according to a gamma distributed model with at least 16 rate categories and a phylogenetic tree created with Jukes-Cantor distances that treated gaps as missing information. The resulting trees, rates, and distances were used to simulate 10,000 resampled alignments starting from an arbitrary ancestral sequence. Then, gaps and sequence weights were re-inserted into each of these derivative alignments at the same positions that they occupied in the original alignment.
Mutual information was calculated between column pairs for all alignments according to standard formulas [60], taking into account sequence weights and treating gaps as a fifth character state. The resampled alignments were used to estimate what the MI score distribution would have been if the bases present in each column had evolved independently, without covariation constraints. The p value significance of the actual MI between two columns is the fraction of the resampled alignments that have a greater MI score than the value observed between those two columns in the real alignment.
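As a sketch of this calculation, the function below computes the weighted MI between two columns with gaps treated as a fifth character state; the exact weighting scheme and formula details used in the study may differ slightly.

```python
from collections import Counter
from math import log2

def weighted_mi(column_i, column_j, weights):
    """Mutual information (bits) between two alignment columns.

    column_i, column_j : characters (A, C, G, U, or '-') for each sequence
    weights            : per-sequence weights (e.g., GSC weights)
    Gaps are treated as a fifth character state, as described in the text.
    """
    total = sum(weights)
    f_i, f_j, f_ij = Counter(), Counter(), Counter()
    for a, b, w in zip(column_i, column_j, weights):
        f_i[a] += w
        f_j[b] += w
        f_ij[(a, b)] += w
    mi = 0.0
    for (a, b), w_ab in f_ij.items():
        p_ab = w_ab / total
        mi += p_ab * log2(p_ab / ((f_i[a] / total) * (f_j[b] / total)))
    return mi
```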
Authors' contributions
JEB designed the computational analyses, carried out the comparative studies, and created the figures. JEB and RRB interpreted the results and wrote the manuscript.
Additional data files
The following additional data files are available with the online version of this article. Additional data file 1 contains sequence alignments of the riboswitch aptamer data sets annotated with new base-base interactions in Stockholm format. Additional data file 2 contains sequence alignments of the riboswitch aptamer data sets annotated with new base-base interactions in HTML format.
Label-Free Detection of CA19-9 Using a BSA/Graphene-Based Antifouling Electrochemical Immunosensor
Evaluating the levels of the biomarker carbohydrate antigen 19-9 (CA19-9) is crucial in early cancer diagnosis and prognosis assessment. In this study, an antifouling electrochemical immunosensor was developed for the label-free detection of CA19-9, in which bovine serum albumin (BSA) and graphene were cross-linked with the aid of glutaraldehyde to form a 3D conductive porous network on the surface of an electrode. The electrochemical immunosensor was characterized through the use of transmission electron microscopy (TEM), scanning electron microscopy (SEM), atomic force microscopy (AFM), UV spectroscopy, and electrochemical methods. The level of CA19-9 was determined through the use of label-free electrochemical impedance spectroscopy (EIS) measurements. The electron transfer at the interface of the electrode was well preserved in human serum samples, demonstrating that this electrochemical immunosensor has excellent antifouling performance. CA19-9 could be detected in a wide range from 13.5 U/mL to 1000 U/mL, with a detection limit of 13.5 U/mL in human serum samples. This immunosensor also exhibited good selectivity and stability. The detection results of this immunosensor were further validated and compared using an enzyme-linked immunosorbent assay (ELISA). All the results confirmed that this immunosensor has a good sensing performance for CA19-9, suggesting promising prospects for clinical application.
Introduction
Carbohydrate antigen 19-9 (CA19-9) is a serum Lewis (a) carbohydrate antigen and one of the best-validated biomarkers in pancreatic cancer, with 80% sensitivity [1]. The level of CA19-9 may also be elevated in patients who suffer from other gastrointestinal malignancies, such as colon cancers, liver cancers, or gastric cancers, while it is usually lower than 37 U/mL in healthy people. Therefore, the level of CA19-9 can be used to evaluate the cancer stage and to predict long-term survival. In addition, it is much more effective to combine CA19-9 with other cancer biomarkers in the screening of serum samples from cancer patients [2]. Therefore, the development of a rapid, simple, and sensitive method for the detection of CA19-9 has great application prospects.
Currently, the detection of CA19-9 is mainly based on immunoassay-related strategies, including enzyme-linked immunosorbent assays (ELISAs) [3], electrochemical biosensors [4][5][6][7], photoelectrochemical biosensors [8], electrochemiluminescence [9], fluorescent or colorimetric assays [2], and giant magneto-resistance biosensors [10]. Among these various methods, electrochemical biosensors have great potential to be miniaturized and automated for point-of-care applications in clinical diagnostics [11][12][13], and they can also be cost-effective and portable, with low-power instrumentation. In particular, electrochemical impedance spectroscopy (EIS), as a simple label-free analytical method, can be utilized for the development of immunosensors and has great prospects in terms of point-of-care applications for various biomarkers [14].
The complex components in biological samples can result in the biological fouling of electrochemical electrode surfaces via non-specific adsorption and adhesion, which decrease the electrochemical current and the analytical performance of sensors. This is still a major obstacle to the successful commercial development of various electrochemical immunosensors [12,15]. Therefore, the surface treatment of electrochemical electrodes is crucial for the antifouling performance of electrochemical sensors. Currently, antifouling strategies mainly include physical antifouling (such as nanoporous surfaces) and chemical antifouling (such as bovine serum albumin (BSA), antifouling polymer layers, and hydrogel) [15][16][17]. For the physical antifouling strategy, various micro/nanostructures can be engineered on the transducer surface in order to fabricate a size-selective diffusional barrier of large non-specific molecules, which can also permit the diffusion of small analytes to the underlying transducer [4,6,16]. For the chemical antifouling strategy, a wide range of antifouling molecules can be incorporated into electrochemical interfaces via self-assembly, electro-grafting, or polymerization methods. BSA is a common and cheap protein that has been broadly used in biological antifouling due to its abundance and superior biostability in the body. Antifouling polymers contain various chemicals, such as the commonly used polyethylene glycol (PEG), oligoethylene glycol (OEG), poly(vinyl alcohol) (PVA), zwitterionic polymers, and peptides with a mixed charge [18][19][20][21][22]. However, for most antifouling molecules, the formed antifouling layer may hinder electron transfer and decrease the current at the electrochemical interface, resulting in a significant decrease in electrochemical sensing performance [23]. Therefore, the balance of the antifouling and electrochemical performance of interfaces is an important issue. Incorporating antifouling molecules and conducting nanomaterials is an effective strategy to form an antifouling layer with 3D conductive nanocomposites, and it has been demonstrated to be suitable for the development of electrochemical immunosensors with excellent performance [24][25][26].
Here, we proposed an effective BSA/graphene-based antifouling electrochemical immunosensor, in which BSA and graphene were cross-linked with glutaraldehyde (GA) to form a 3D conducting antifouling layer on the surface of an electrochemical electrode. Graphene is a thin, two-dimensional carbonaceous nanomaterial with superior conducting performance. The cross-linking of BSA and graphene with GA can build a stable 3D porous conducting network, which can maintain the antifouling behavior of BSA and the conducting performance of graphene at the same time. In addition, this BSA/graphene-based antifouling film can provide covalent reaction sites for the further effective immobilization of antibodies. By employing electrochemical impedance spectroscopy (EIS), a label-free electrochemical immunosensor was developed for the rapid and sensitive detection of CA19-9 in complex human serum samples. All the obtained results confirmed the good performance of this electrochemical immunosensor for the label-free detection of CA19-9. It has promising prospects for the detection of CA19-9 in clinical samples to aid in cancer diagnosis. It is worth noting that the electrodes and antibodies used in this study were only used to demonstrate the technical feasibility of this novel approach to the label-free detection of CA19-9.
Preparation of BSA/Graphene Nanocomposites
BSA/graphene nanocomposites were prepared by mixing 1.0 mg graphene and 15.0 mg BSA into 1 mL phosphate-buffered saline (PBS). The mixture was sonicated with a tip sonicator (125 W and 20 kHz) for 30 min with on/off intervals of 1 s at 50% amplitude, yielding an opaque black solution. After centrifugation at 3500 revolutions per minute (rpm) for 15 min, a semi-transparent solution was recovered, and the black precipitate was discarded. The obtained BSA/graphene nanocomposite was stored at 4 °C for further application. The topography of the BSA/graphene nanocomposite was characterized through the use of TEM.
Preparation and Functionalization of BSA/Graphene/GA/Antibody-Modified Electrodes
A gold electrode (AuE, with a diameter of 3 mm) was first polished with 1.5 µm and 0.5 µm alumina slurries and rinsed with ultrapure water. The polished gold electrode was chemically cleaned with a freshly prepared piranha solution (the volume ratio of H2SO4 to H2O2 was 3:1) and then thoroughly rinsed with ultrapure water for further modification.
The BSA/graphene nanocomposite was directly mixed with 70% GA at a volume ratio of 69:1. The well-cleaned gold electrode was immersed into the BSA/graphene/GA solution and maintained in a water-saturated atmosphere for 24 h at room temperature. After that, the modified gold electrode was thoroughly rinsed with PBS in a shaker for 30 min. The BSA/graphene/GA-modified electrodes were further functionalized with antibodies using carbodiimide chemistry. Briefly, the BSA/graphene/GA electrode was incubated with 400 mM EDC and 200 mM NHS in 0.1 M MES buffer (pH 6.0) for 30 min, rinsed with ultrapure water, and dried at room temperature. Then, 20 µg/mL of the anti-CA19-9 antibody was prepared in PBS with 0.5% glycerol. The activated gold electrode was reacted with the anti-CA19-9 antibody solution in a water-saturated atmosphere overnight at 4 °C, and it was rinsed and washed with PBS in a shaker for 30 min. Next, 1 M MEA solution was prepared in PBS and adjusted to pH 7.4 with HCl. The electrode was incubated with 1 M MEA at room temperature for 30 min to quench the unreacted active groups and then further blocked with 1% BSA solution for 1 h at room temperature. Finally, the electrode was thoroughly rinsed in ultrapure water for further application.
The diluted BSA/graphene nanocomposites were characterized using UV spectroscopy before and after the addition of GA to elucidate the conjugation-induced changes in the absorption spectra. The morphology of the BSA/graphene/GA-modified gold electrode was characterized through the use of SEM and atomic force microscopy (AFM). Each functionalization step of the electrodes was electrochemically characterized in a three-electrode electrochemical cell via cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) measurements. The redox aqueous solution in the electrochemical cell was 5 mM K4Fe(CN)6/K3Fe(CN)6 with 1 M KCl. The functionalized gold electrode was used as the working electrode, Ag/AgCl was used as the reference electrode, and a Pt wire was used as the counter electrode. CV was performed in a voltage range from 0 V to 0.7 V, with a scan rate of 100 mV s−1. EIS was conducted with a 5 mV amplitude at open-circuit potential in a frequency range from 1 MHz to 1 Hz.
Detection of CA19-9 in PBS and Human Serum
To detect CA19-9 proteins using the well-prepared electrochemical immunosensors, standard CA19-9 proteins were diluted with PBS or normal human serum to a series of concentrations (6.25 U/mL, 12.5 U/mL, 25 U/mL, 50 U/mL, 100 U/mL, 200 U/mL, 300 U/mL, 500 U/mL, and 1000 U/mL). The target CA19-9 solution was incubated on the BSA/graphene/GA/antibody-modified electrodes for 1 h. After being rinsed with PBS solution, the electrode was measured by EIS in 5 mM K4Fe(CN)6/K3Fe(CN)6 with 1 M KCl. The electrochemical impedance was fitted with a modified Randles equivalent circuit model. CA19-9 (25 U/mL and 50 U/mL in human serum) was also tested with an ELISA kit according to the instructions.
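For readers unfamiliar with such fits, the sketch below computes the impedance of one common modified Randles circuit (solution resistance in series with a charge-transfer resistance in parallel with a constant phase element); the exact circuit elements used in this study are not specified here, so this particular form is an assumption.

```python
import numpy as np

def randles_impedance(freq_hz, Rs, Rct, Q, alpha):
    """Impedance of Rs in series with (Rct parallel to a CPE), an illustrative
    modified Randles circuit. Z_CPE = 1 / (Q * (j*omega)**alpha)."""
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z_cpe = 1.0 / (Q * (1j * omega) ** alpha)
    return Rs + 1.0 / (1.0 / Rct + 1.0 / z_cpe)

# In label-free EIS immunosensing, the fitted charge-transfer resistance (Rct)
# typically increases as captured antigen blocks electron transfer, and this
# change serves as the analytical signal.
```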
Statistical Analysis
All the electrochemical measurements were performed at least 3 times. The data are shown as mean ± standard deviation (SD). The limit of detection was calculated as the corresponding concentration value of the calibration curve according to the principle of 3δ.
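For illustration, a common way to apply the 3δ principle is sketched below: the mean blank response plus three standard deviations defines a signal threshold, and the LOD is the concentration on the (assumed linear) calibration curve that produces this threshold. Whether the authors used exactly this form is an assumption.

```python
from statistics import mean, stdev

def limit_of_detection(blank_signals, slope, intercept):
    """3-delta LOD estimate from a linear calibration: signal = slope*conc + intercept (sketch)."""
    threshold = mean(blank_signals) + 3 * stdev(blank_signals)
    return (threshold - intercept) / slope
```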
Working Principle of the Antifouling Electrochemical Immunosensor
To enhance the performance of the electrochemical immunosensor for detecting the protein biomarker in complex clinical samples, we proposed an antifouling label-free electrochemical impedance biosensor based on a 3D porous BSA/graphene/GA matrix (Scheme 1). The BSA and graphene nanocomposites were cross-linked via GA to modify the surface of the gold electrode and form a conducting antifouling interface, where BSA acted as a major component of the antifouling effect, and the conducting 2D nanomaterial (graphene) was used to sustain the electron transfer of the electrode. The antibody of the target protein can be covalently immobilized onto the layer of BSA/graphene/GA with the aid of EDC and NHS. After being further blocked with MEA and BSA, this modified electrochemical electrode can be employed to capture the target protein via specific immune recognition between antibodies and antigens. The specific capture of target proteins will hinder the electron transfer at the interface from the electrolyte to the electrode, which can be monitored via electrochemical impedance spectroscopy in a label-free manner.
Characterization of BSA/Graphene-Modified Immunosensor
In general, it is difficult to disperse graphene powder in PBS directly. However, when graphene is dispersed in a BSA solution, BSA proteins can adsorb onto the surface of the graphene sheet through hydrophobic and electrostatic interactions, resulting in the formation of a homogeneous dispersion of the BSA/graphene nanocomposite. During the preparation of the BSA/graphene nanocomposite solution, the concentration of BSA in PBS was optimized from 1 mg/mL to 15 mg/mL. It was found that the nanocomposite solution became more stable and homogeneous only when the concentration of BSA was up to 10 mg/mL (Figure S1). TEM was performed to better reveal the topography of the BSA/graphene nanocomposite. The two-dimensional single-layered and stacked graphene can be observed in Figure 1a. After the BSA/graphene nanocomposites were cross-linked with GA, a 3D sponge-like conducting protein matrix was generated. This was confirmed via an SEM analysis (Figure 1b), revealing that a relatively densely packed nanocomposite film formed on the surface of the gold electrode. The GA-assisted cross-linking reaction produced 3D molecular networks of BSA/graphene. The detailed mechanism has been well discussed in a previous report [25]. In brief, GA can react quickly to yield polymers of pyridine in the presence of the primary amines of BSA, resulting in the rapid formation of structural 3D glue molecular networks. This reaction can also increase the UV absorbance at 265-270 nm (Figure 1c). In addition, the surface morphology of the modified electrodes was further characterized with atomic force microscopy (AFM), and the result showed that the roughness of the gold electrode after the modification of the BSA/graphene/GA nanocomposites was Ra = 2.43 ± 0.15 nm (Figure 1d). All the results confirmed that the BSA/graphene nanocomposite formed a stable 3D porous molecular network in the presence of GA on the surface of the electrode.
Scheme 1. Schematic of the fabrication process of BSA/graphene/GA-modified electrochemical impedance immunosensor for the detection of target protein.
The electrochemical performance of different BSA/graphene/GA-modified gold electrodes was evaluated for the BSA and graphene nanocomposite at different ratios. Considering that the nanocomposite solution was less stable when the concentration of BSA was less than 10 mg/mL, here, we only tested the concentrations of 10 mg/mL and 15 mg/mL (named as BSA10 and BSA15). The final concentrations of graphene and GA were kept at 1 mg/mL and 1.4% (v/v), respectively. To assess the overall electrochemical quality and state of the solid-liquid interface, the potential separation between the oxidation peak and the reduction peak (∆Ep) and current changes were calculated according to cyclic voltammetry. Typical cyclic voltammetry showed that different modifications resulted in obviously different electron transfer kinetics (Figure 2a). The gold electrodes showed partial passivation after different modifications due to BSA hindering the electron transfer. ∆Ep and current changes were calculated and compared between the groups of BSA10 and BSA15. The statistical results (Figure S2) showed that BSA15/graphene1/GA displayed a relatively low ∆Ep (102 ± 16 mV) and a high current (90 ± 6%), indicating that this BSA15/graphene1/GA coating maintained good electron transfer characteristics and exhibited better performance than the other coating protocols. Voltammograms at different scan rates were also tested and compared with those of the bare gold electrode in order to evaluate the mass transport process on the BSA15/graphene1/GA-modified electrode (Figure 2b,d). It was found that the peak current was linear to the square root of the scan rate with the increase in the scan rates in both the bare electrode and the BSA15/graphene1/GA-modified electrode (Figure 2c), indicating a diffusion-limited process of electroactive species at the interface. All the results demonstrate that the incorporation of conducting graphene into the BSA network improved the overall migration of the electroactive species at the interface of the electrode and electrolyte solution, which can be attributed to the formation of a porous coating membrane with the GA-assisted cross-linking of BSA and nanomaterials [24,25].
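For background, the linear dependence of the peak current on the square root of the scan rate is the behavior expected from the Randles-Sevcik equation for a diffusion-controlled, reversible redox couple at 25 °C; the expression below is the standard textbook form and is included only as a reference, since none of its parameter values are reported in this work.

i_p = 2.69 \times 10^{5}\, n^{3/2} A C D^{1/2} v^{1/2}

where i_p is the peak current (A), n the number of electrons transferred, A the electrode area (cm^2), C the bulk concentration of the redox probe (mol cm^-3), D its diffusion coefficient (cm^2 s^-1), and v the scan rate (V s^-1).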
During the process of sensor preparation, the electrochemical characteristics of the electrodes were monitored step by step using CV and EIS (Figure 3). The voltammograms and Nyquist diagrams changed substantially after the covalent immobilization of the CA19-9 antibody and the BSA blocking. Figure 3a shows clear redox peaks on the bare gold electrode and BSA15/graphene/GA-modified electrode. Figure 3b shows that the electrochemical impedance decreased slightly after the modification of BSA15/graphene/GA. After the CA19-9 antibody was linked covalently on the BSA15/graphene/GA-modified electrode, the formation of a protein layer at the interface of the electrode and electrolyte resulted in a dramatic decrease in redox peak currents and an increase in impedance, which confirmed the successful immobilization of the CA19-9 antibody on the BSA15/graphene/GA electrode. In addition, ethanolamine and BSA were incubated onto the electrode surface to block the unreacted active sites, resulting in a further slight decrease in both the correlated redox currents and impedance. Additionally, the surface morphologies of the modified electrodes were also characterized with AFM after the capture of the CA19-9 antibody and antigen; the results are shown in Figure S3. It was found that the capture of the antibody and antigen onto the electrode did not result in significant changes in the surface roughness, which indicates that AFM surface roughness alone cannot characterize the capture of a single layer of protein molecules. The electrochemical sensor was well characterized using CV and EIS, and it is suitable for the further detection of the target protein CA19-9 via specific binding with the antibody.
Performance of Electrochemical Immunosensor for the Detection of CA19-9
To examine the performance of the developed electrochemical immunosensor, different concentrations of the CA19-9 protein were diluted with a standard PBS buffer or human serum. The responses of the electrochemical immunosensor were recorded by monitoring the changes in electrochemical impedance (Figure 4). The results demonstrate that the electrochemical impedance increased with the increase in the target CA19-9 concentrations, and the immunosensor exhibited similar responses regardless of whether the dilution buffer was the PBS buffer or human serum (Figure 4a,b). The responsive results were quantified by fitting the electrochemical impedance data with a modified Randles circuit model (Rs(CPE(RctZw))) (Figure S3), in which Rs represents the resistance of the solution; the constant phase element (CPE) corresponds to a capacitor of the electrochemical interface with a constant phase; the charge transfer resistance (Rct) represents the difficulty of electron transfer at the electrode interface and corresponds to the diameter of the high-frequency semicircle in the Nyquist plot; and the Warburg resistance (Zw) represents the diffusion process of redox probes from the electrolyte to the electrode surface [27]. The specific capture of target antigen proteins mainly resulted in changes in Rct by interfering with the electrode/electrolyte interface. Therefore, the relative change in Rct was used to quantify the target protein-induced impedance changes, which was defined as ∆Rct = Rct (after target capture) − Rct (after blocking). The concentration-dependent calibration curves are shown in Figure 4c,d. The detection range was from 6.25 U/mL to 1000 U/mL. The detection limit of CA19-9 was calculated to be as low as 13.5 U/mL in serum according to the principle of S/N = 3.
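As a reference for the fitting model mentioned above, the total impedance of a modified Randles circuit of the form Rs(CPE(RctZw)) is conventionally written as shown below, with Q and n the constant-phase-element parameters and σ the Warburg coefficient; this is the generic textbook expression, and the fitted parameter values are not given in the text.

Z(\omega) = R_s + \left[ Q\,(j\omega)^{n} + \frac{1}{R_{ct} + \sigma\,\omega^{-1/2}(1-j)} \right]^{-1}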
Due to the complex components in human serum, the selectivity of this electrochemical biosensor was evaluated by comparing the results of the response to CA19-9 using the same concentrations of the PBS buffer and normal human serum. Figure 4c,d and Figure 5a show the comparable responses in both solution backgrounds, indicating that this electrochemical immunosensor had good selectivity to the target protein CA19-9. When the BSA15/graphene/GA-modified electrodes were kept in PBS, 1% BSA, and human serum for 2 days, the currents remained comparable to the original states (Figure 5b, 98.9% in PBS; 91.9% in 1% BSA; 83.8% in serum). The BSA15/graphene/GA antibody-modified electrodes showed good selectivity towards CA19-9, even though there are various other background proteins with different concentrations in normal human serum, which could be attributed to the good antifouling performance of the BSA15/graphene/GA modification. The GA-assisted cross-linking between BSA and graphene formed a homogeneous 3D conductive coating film, which balanced the BSA-based antifouling and graphene-assisted electron transfer. All the results prove that this kind of surface modification had good antifouling performance and reduced the non-specific binding in the complex clinical samples.
The stability of the modified electrodes was also evaluated at 4 °C for 10 days either in a dry state or in a PBS solution by testing the current changes and electrochemical impedance changes. The results show that the BSA15/graphene/GA-modified electrodes demonstrated good stability under both conditions, and the Rct remained stable for 10 days (Figure 5c, 104.4% in dry; 96.4% in PBS), indicating that these BSA15/graphene/GA-modified electrodes exhibited good stability performance. Furthermore, the reproducibility performance was tested with three independent BSA/graphene/GA-modified electrodes. The relative standard deviation (RSD) was 8.6%, indicating acceptable reproducibility and the possibility of developing disposable immunosensors.
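For clarity, the relative standard deviation quoted above is simply the standard deviation s of the responses of the independent electrodes divided by their mean, expressed as a percentage; this is the standard definition rather than anything specific to this work.

\mathrm{RSD} = \frac{s}{\bar{x}} \times 100\%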
In addition, the performance of this electrochemical immunosensor was further validated using an ELISA kit. Standard CA19-9 was spiked into normal human serum at the concentrations of 25 U/mL and 50 U/mL, which were tested with the electrochemical immunosensor and the ELISA kit, respectively. The results (Figure 5d) show that the detected value of the electrochemical immunosensor matched well with that of ELISA. All the results indicate that this electrochemical immunosensor has good sensing performance and great potential for the development of disposable biosensors for real clinical application.
Several electrochemical immunosensors have been reported for the detection of CA19-9. For most electrochemical immunosensors, the sensitive material (antibody) is usually immobilized onto the electrode surface covalently [21]. Various nanomaterials have been employed to enhance the sensing performance in combination with corresponding sensing techniques [28][29][30]. The performance of the immunosensor in this paper is comparable to that of previously reported biosensors (Table S1). In addition, considering that the level of CA19-9 was elevated significantly, the large detection range is one of the main advantages of the electrochemical immunosensor developed here, which can ensure the quantification of CA19-9 in various patients. The cross-linking of BSA and graphene with GA ensures good antifouling performance in the detection of serum samples.
Conclusions
A label-free antifouling electrochemical immunosensor was successfully developed by cross-linking BSA and graphene with the aid of GA. The obtained nanocomposites formed a stable porous 3D conductive antifouling layer on the surface of an electrode, which maintained both the good electron transfer and antifouling performance at the electrochemical interface at the same time. After the antibody was further immobilized covalently onto the BSA/graphene/GA film, the target protein CA19-9 could be quantified directly via the electrochemical impedance spectroscopy technique in a label-free manner. This electrochemical immunosensor exhibited good sensing performance towards CA19-9 proteins in both a buffer and human serum. The responsive range was from 13.5 U/mL
Figure 2. Electrochemical characterization of gold electrodes before and after modification of BSA/graphene/GA. (a) Typical voltammograms of gold electrodes modified with different nanocomposites at a scan rate of 0.1 V/s. Voltammograms of bare gold electrode (AuE) (b) and BSA15/graphene1/GA-modified gold electrode (d) at different scan rates ranging from 0.1 V/s to 1.0 V/s. (c) Plots of extracted oxidation/reduction peak current from the voltammograms shown in (b,d) versus the square root of the scan rate.
Figure 3. Electrochemical characterization of the functionalization process. Comparison of CV curves (a) and electrochemical impedance spectra (b) in each process of sensor fabrication.
Figure 4. Nyquist plots and delta Rct calibration curves of BSA15/graphene/GA/anti-CA19-9 antibody-modified electrodes for the detection of CA19-9 in PBS solution (a,c) and human serum (b,d). Orange dashed lines denote the limit of detection.
Figure 5. (a) Selectivity assessment, (b) antifouling performance evaluation, (c) stability evaluation under PBS and dry storage conditions, (d) results comparison of CA19-9 samples using the electrochemical immunosensor and ELISA. | 7,571.2 | 2023-12-01T00:00:00.000 | [
"Medicine",
"Chemistry",
"Materials Science"
] |
The Role of Fast-Cycling Atypical RHO GTPases in Cancer
Simple Summary For many years, cancer-associated mutations in RHO GTPases were not identified and observations suggesting roles for RHO GTPases in cancer were sparse. Instead, RHO GTPases were considered primarily to regulate cell morphology and cell migration, processes that rely on the dynamic behavior of the cytoskeleton. This notion is in contrast to the RAS proteins, which are famous oncogenes and found to be mutated at high incidence in human cancers. Recent advancements in the tools for large-scale genome analysis have resulted in a paradigm shift and RHO GTPases are today found altered in many cancer types. This review article deals with the recent views on the roles of RHO GTPases in cancer, with a focus on the so-called fast-cycling RHO GTPases. Abstract The RHO GTPases comprise a subfamily within the RAS superfamily of small GTP-hydrolyzing enzymes and have primarily been ascribed roles in regulation of cytoskeletal dynamics in eukaryotic cells. An oncogenic role for the RHO GTPases has been disregarded, as no activating point mutations were found for genes encoding RHO GTPases. Instead, dysregulated expression of RHO GTPases and their regulators have been identified in cancer, often in the context of increased tumor cell migration and invasion. In the new landscape of cancer genomics, activating point mutations in members of the RHO GTPases have been identified, in particular in RAC1, RHOA, and CDC42, which has suggested that RHO GTPases can indeed serve as oncogenes in certain cancer types. This review describes the current knowledge of these cancer-associated mutant RHO GTPases, with a focus on how their altered kinetics can contribute to cancer progression.
Introduction
The RHO GTPases consist of a group of GTP-hydrolyzing enzymes that belong to the RAS superfamily of small GTPases. In human cells, there are 20 different members of the RHO GTPases that can be further divided into eight subgroups: RAC, RHO, CDC42, RND, RHOD/F, RHOU/V, RHOBTB, and RHOH (Table 1) [1,2]. Ever since the discovery of the first RHO gene, RHOB, in 1985, the RHO GTPases have been considered to have low oncogenic potential in vivo [3]. Mutant RHO variants have been shown to transform cells in various in vitro models, but these studies have relied mainly on laboratory-generated activating mutants of RHO GTPases, rather than on mutant proteins found in tumors. Instead, the prevailing view has been that the link between RHO GTPases and cancer is of a more indirect nature.
This view originated from a number of studies that demonstrated that certain members of the RHO GTPases, as well as their regulators and the components of their downstream signaling pathways, are differently expressed in a wide range of tumors, which suggests a role in cancer [4]. These views defined the scientific climate for a long time until, in 2012, cancer-associated activating mutants were identified in malignant melanoma [5,6]. In this article, I will describe the recent developments in the field of cancer-associated mutant RHO GTPases. I will describe the underlying mechanisms for their oncogenic properties, and in the process, I will describe the concept of atypical fast-cycling RHO GTPases. Finally, I will discuss the current views on how fast-cycling GTPases can contribute to cancer [7]. (Of note, RHOV has not been confirmed to function as a fast-cycling RHO GTPase; however, its similarity to RHOU suggests that it is.)
The Origin of the RHO GTPases
As already mentioned, the first RHO gene was discovered in 1985. This finding is a good example of the serendipitous nature of scientific progress. The scientist responsible for the discovery, Pascal Madaule, was at the time a post-doctoral fellow in Richard Axel's research group at Columbia University in New York (USA). The project that Pascal was involved in aimed to clone an ortholog of the α subunit of human chorionic gonadotropin in the sea slug Aplysia californica. Unexpectedly, he instead identified a gene related to the human RAS genes, hence the name RAS homologous (RHO) [3]. The identification of the RHO genes was soon followed by the discoveries of a number of RHO-related genes, RHO, RAC, and CDC42 [8][9][10][11][12][13]. At this time, during the mid-1980s, the three RAS genes H-RAS, K-RAS, and N-RAS had been identified and demonstrated to be prominent oncogenes in several human cancers [14]. In contrast, similar oncogenic properties were not apparent for the RHO GTPases. Instead, several, by now classical, papers from the research group of Alan Hall during the early 1990s demonstrated that the RHO GTPases are key regulators in the signaling cascades that control actin dynamics in eukaryotic cells [15][16][17]. These studies paved the way for a paradigm, stating that only three RHO members, RHOA, RAC1, and CDC42, were sufficient to regulate the organization and dynamics of the actin filament system, and thereby complex cellular processes such as cell morphogenesis and cell migration [18]. Even if this model seems beautiful in its simplicity, it is a clear oversimplification; all 20 members of the RHO GTPases have both unique and overlapping roles in the regulation of a multitude of cellular processes [1,2].
According to the standard model, small GTPases alternate between the inactive, GDP-bound and the active, GTP-bound conformations. This conformational change results in the exposure of structural elements at the surface of the active GTPase, which allows it to interact with other proteins, so-called effectors, that serve as downstream recipients of a signaling cue [14]. Thus, small GTPases serve as binary molecular switches that are either in an OFF or an ON mode. Two categories of regulatory proteins, the guanine nucleotide exchange factors (or GEFs) and the GTPase activating proteins (or GAPs), tightly regulate the cycling between these two modes. The GEFs catalyze the nucleotide exchange in the active site from GDP to GTP, thereby serving as positive regulators. The GAPs catalyze the hydrolysis of GTP to GDP, thereby serving as negative regulators [14]. This standard model is also applicable to the RHO family of small GTPases. The human genome harbors around 70 RHOGEFs and 80 RHOGAPs [18,19]. In addition, there is yet another group of RHO regulators: the RHO GDP dissociation inhibitors (or RHOGDIs; there are three members of this protein family) [20]. This group of proteins sequesters RHO GTPases in the inactive GDP-bound conformation. Various activating stimuli result in separation of the two proteins, allowing the activation of RHO GTPases by the GEFs [20]. This control mechanism of RHO activity is an elegant construction by nature; however, it only applies to 10 of the RHO GTPases: the members of the classical subfamilies RHO, RAC, and CDC42. Importantly, the other 10 members do not follow this simple scheme of activation and are therefore referred to as atypical RHO GTPases [7].
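One simple way to picture this binary switch is as a two-state system in which an effective GEF-driven activation rate k_on competes with an effective GAP-driven inactivation rate k_off, so that the steady-state fraction of active (GTP-bound) GTPase is given by the expression below; this is a textbook simplification offered only as an illustration, not a result from the cited references.

f_{\mathrm{active}} = \frac{k_{\mathrm{on}}}{k_{\mathrm{on}} + k_{\mathrm{off}}}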
Most RAS-like small GTPases contain a so-called CAAX box at their extreme C-terminus. This is a tetrapeptide motif with the consensus sequence: cysteine, followed by two aliphatic amino-acid residues, and a less defined amino-acid residue at the ultimate position [14]. The CAAX box undergoes posttranslational modifications through covalent attachment of an isoprenoid moiety at the cysteine residue, followed by cleavage of the AAX peptides and carboxymethylation of the resulting ultimate prenylated cysteine. The RAS GTPases are modified by a 15-carbon farnesyl moiety, and the farnesylated RAS proteins can thereby be targeted to the plasma membrane or to other intracellular lipid bilayers. In contrast, the RHO GTPases are most often modified by a 20-carbon geranylgeranyl moiety [21]. This modification is important for membrane targeting of RHO GTPases, but it is also required for their control by RHOGDIs. Again, this concept of activity control only applies to the classical 10 RHO GTPases; the atypical RHO GTPases do not undergo prenylation and do not bind RHOGDIs.
The Concept of Atypical RHO GTPases
The first indication of the existence of RHO GTPases with atypical properties came from the discovery of RHOE (also known as RND3) [22]. RHOE was shown to lack intrinsic GTPase activity and to reside exclusively in a GTP-bound conformation inside cells. In addition, RHOE was resistant to the influence of RHOGAPs. The GTPase deficiency turned out to be caused by differences in the amino-acid sequence at three key positions in the nucleotide-binding pocket: 12, 59, and 61 (following RAS numbering of the codons). At position 12, RHOE has a serine instead of glycine; at position 59, a serine instead of alanine; and at position 61, an aspartic acid instead of a glutamine. These amino-acid substitutions are known to be oncogenic in RAS because they render RAS GTPase-deficient [14]. Similar types of amino-acid substitutions can be found in all three RND proteins, as well as in RHOH and RHOBTB. These RHO subfamilies are therefore classified as GTPase-deficient RHO GTPases [23][24][25]. The GTPase-deficient RHO GTPases are not only resistant to RHOGAPs, they are also refractory to regulation by RHOGEFs and they do not bind RHOGDIs. This means that they are regulated by other mechanisms, for instance by posttranslational modifications, such as phosphorylation, or by regulation at the level of transcription [25].
A second group of atypical RHO GTPases comprises the fast-cycling atypical RHO GTPases, which include RHOD, RHOF, RHOU, and RHOV [6]. The GTPase activity is more or less intact in the fast-cycling RHO GTPases, but the intrinsic GDP/GTP exchange activity is greatly elevated, meaning that the exchange of the nucleotides at the active site occurs without the involvement of RHOGEFs [26][27][28]. Due to the roughly 10-fold higher intracellular levels of GTP over GDP in most cell types, the fast-cycling RHO GTPases will reside predominantly in an active conformation [29]. In contrast to the GTPase-deficient RHO GTPases, the amino-acid residues at positions 12, 59, and 61 in the fast-cycling RHO GTPases are the same as in the classical RHO GTPases. Similar to the GTPase-deficient RHO GTPases, no RHOGEFs, RHOGAPs, and RHOGDIs have been found for the fast-cycling RHO GTPases.
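To illustrate why fast cycling favors the active state, one can treat spontaneous nucleotide loading as a simple competition for the empty nucleotide-binding site; assuming equal intrinsic affinities for GDP and GTP and neglecting hydrolysis (both simplifying assumptions), the roughly 10-fold excess of GTP over GDP mentioned above gives

f_{\mathrm{GTP}} \approx \frac{[\mathrm{GTP}]}{[\mathrm{GTP}] + [\mathrm{GDP}]} \approx \frac{10}{10+1} \approx 0.91

so, under these assumptions, roughly 90% of a fast-cycling GTPase would be GTP-loaded at any given time.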
The Mechanisms Underlying an Increased Intrinsic Exchange Activity
RHOU (also known as WRCH1) was the first RHO member to be described with an elevated exchange activity, but with an intact GTPase activity [26,27]. Although RHOV has several characteristics that suggest that it is also fast cycling, the kinetics of RHOV have not yet been established. RHOU was originally identified as a gene responsive to Wnt-1; however, the possible role of RHOU in Wnt signaling is not clear [30,31]. RHOD and RHOF have been shown to be fast-cycling GTPases, although there are contradicting views on this for RHOF [28,32]. One common cellular response for all of these four RHO members is that when ectopically expressed in various cell models, they trigger the formation of filopodia (Figure 1) [4]. The mechanisms underlying the fast-cycling properties are not clear. RHOU and RHOV have tyrosines instead of phenylalanine residues in the positions equivalent to codon 28 in RAC1, which might lead to reduced hydrophobic interactions with the guanine base. RHOD and RHOF have phenylalanines in this position, similar to the classical RHO GTPases, but the amino-acid residues in proximity of this position differ from the classical RHO GTPases, which is likely to alter the nucleotide binding capacity of RHOD and RHOF (Figure 3) [28]. For further reading on signaling pathways and effectors downstream of RHOU and RHOV, please see the following recent review articles [7,31].
(Figure legend fragment: see Figure 2D for comparison. The Myc-tagged RHO GTPases were visualized using a mouse anti-Myc antibody, followed by an AlexaFluor 488-conjugated anti-mouse antibody. Filamentous actin was visualized with TRITC-conjugated phalloidin. Scale bar, 10 µm.)
RHOU and RHOV in Cancer
No point mutations related to cancer have been reported for RHOU or RHOV to date. Instead, increased expression of these RHO members has been linked to cancer progression. This is not so surprising, as the fast-cycling RHO GTPases have been shown to be constitutively active in their wild-type forms, so increased expression is likely to result in more active protein [6]. Elevated expression of RHOV has been shown in lung adenocarcinoma and correlated with high frequency of metastasis. Moreover, ectopic expression of RHOV in the A549 lung cancer cell line resulted in increased wound closure and focus formation [33]. Knock-down of RHOV in this same cell line resulted in reciprocal effects: slightly decreased wound closure and focus formation. Another study, which also analyzed expression levels of all 20 of the RHO members, concluded that RHOV was specifically overexpressed in lung adenocarcinoma [34]. A link to lung cancer was furthermore suggested in a study on non-small-cell lung cancer tumors in which the RHOV transcript was reported to be overexpressed in cell lines and in patient material [35]. Together, these studies have suggested a role for RHOV in lung cancer, but how RHOV functionally contributes to cancer progression is not known at present.
RHOU has a more ubiquitous expression pattern compared to RHOV [36]. Several studies have shown that RHOU is up-regulated in various cancers, such as T-cell acute lymphoblastic leukemia, prostate cancer, multiple myeloma, and breast cancer [37][38][39][40][41][42]. Knock-down of RHOU in T lymphoblastoid cells resulted in decreased cell migration and chemotaxis towards CXCL12 [37]. Increased expression of fatty-acid synthase in prostate cancer is associated with tumor progression. Fatty-acid synthase expression positively regulates cell migration in a RHOU-activation-dependent manner by regulation of the levels of RHOU palmitoylation [38]. In multiple myeloma cells, decreased RHOU expression using siRNA reduced their cell migration [41]. Additionally, in breast cancer cells, depletion of RHOU expression resulted in impaired cell migration and invasion. As atypical RHO GTPases are believed to be regulated at the level of transcription, it is relevant to postulate that increased expression of RHOU and RHOV is associated with increased activity of these proteins. However, RHOU expression has not only been positively correlated with tumor progression and increased cell migration. A recent study showed that RHOU expression is decreased in human colorectal tumor samples. Furthermore, studies in mice showed that cells in the gut of RHOU knock-out mice had an increased migratory capacity [43]. RHOU expression can therefore give rise to different cellular responses depending on the cellular context. Clearly more studies are needed to unravel the signaling capacity of RHOU.
RHOD and RHOF in Cancer
RHOD and RHOF have both been shown to function as fast-cycling RHO GTPases [28]. Another study came to a conflicting conclusion, however, as it suggested that the intrinsic GDP/GTP exchange activity of RHOF is slow, rather than fast [32]. The reason for this dichotomy is difficult to evaluate but might be because different methods were used for protein purification. Moreover, in the latter study, RHOF was not compared side by side to other RHO proteins [32].
Again, no cancer-related point mutations have been reported for RHOD or RHOF, although they have both been shown to be differently expressed (mostly overexpressed) in cancer. In acute myeloid leukemia, high expression of RHOF is associated with reduced overall survival [44]. Furthermore, RHOF was shown to be frequently up-regulated in hepatocellular carcinoma, and increased RHOF expression is associated with poor clinical outcome. Finally, RHOF up-regulation markedly increased in vivo cell migration and invasion of human hepatocyte carcinoma HepG2 cells [45].
Classical RHO GTPases as Proto-Oncogenes
The oncogenic properties of RHO GTPases have been debated since their respective discoveries. The general view has stipulated that RHO GTPases per se are not oncogenic, but that the RHOGEFs might serve as bona fide oncogenes [46,47]. Several studies have shown that the constitutively active mutants of RHOA and RAC1, RHOA/G14V and RAC1/G12V, have transforming properties in several of the classical in vitro assays for cell transformation (e.g., focus formation assays, growth in soft agar) [48][49][50]. In addition, several observations have indicated that constitutively active mutants of RHOA and RAC1 can cause tumor growth in nude mice [48]. However, these results were dependent on laboratory-generated mutant RHO GTPases, and no cancer-associated mutated RHO GTPases were reported for a long time.
However, modern DNA-sequencing tools that allow large-scale sequencing of entire cancer genomes have now dramatically changed the scene, and the first step to an altered view on RHO GTPases as oncogenes came from the identification of a recurrent somatic point mutation in RAC1 (RAC1/P29S) in sun-exposed melanoma, in 2012 [5]. Already before this finding, a splice variant of RAC1, RAC1B, was reported to have transforming capacity. Importantly, these mutant RAC1 variants were shown to function as fast-cycling RHO GTPases [51][52][53] (Table 2). Another similarity to the atypical fast-cycling RHO members is that these fast-cycling mutants of RAC1 promote the formation of filopodia (Figure 2) [54]. Comparisons between the three-dimensional structures of wild-type RAC1 and RAC1B and RAC1/P29S reveal that there are clear differences in the orientations of the key amino-acid residues in their interactions with the guanine nucleotides (Figure 4).
RAC1/P29S
A point mutation in codon 29 of RAC1, or rather in the Caenorhabditis elegans ortholog CED-10, was first identified in a screening for mutant genes that confer synthetic lethality with a weak ced-10 mutant. This screening identified a P29L mutant of CED-10 [68]. Later, the occurrence of a point mutation in human RAC1 was revealed in a study of the mutational landscape in melanoma tumors. This study involved 147 tumor samples and identified mutations in BRAF and NRAS at high incidence, as expected. Interestingly, in sun-exposed melanomas, the third most common recurring somatic mutation was a serine for a proline mutation in RAC1 [5]. This RAC1/P29S mutated protein was shown to cause increased GTP-binding associated with increased interactions with two previously known RAC1 effectors, PAK1 and MLK3. Furthermore, forced expression of RAC1/P29S in melanocytes resulted in increased cell proliferation and motility in a Boyden-type migration assay. Transient transfection of EGFP-tagged RAC1/P29S in COS7 cells resulted in increased accumulation of the mutant RAC1 protein in dorsal membrane ruffle-like structures [5]. The kinetics of RAC1/P29S were further analyzed in a study by Davies et al. [53], which demonstrated that RAC1/P29S is a fast-cycling RHO GTPase. These kinetic properties were similar to those of RAC1/F28L, a laboratory-generated mutant RHO GTPase that is also fast-cycling [55]. However, detailed structural analysis indicated that the fast-cycling of RAC1/F28L was the result of reduced interactions between the guanosine ring and mutated phenylalanine 28, whereas the fast-cycling of RAC1/P29S appears to result from destabilization of the GDP-bound conformation (Figure 4) [53]. Here, Davies et al. [53] also reported that RAC1/P29S increased membrane ruffling in transiently expressed COS7 cells as well as in NIH3T3 fibroblasts stably expressing RAC1/P29S. However, a study that compared the three-dimensional structures of RAC1/P29S and RAC1/A159V came to a somewhat different conclusion [69]. The authors of this report suggested that the fast-cycling of RAC1/P29S is the result of an open conformation of the switch I motif, and that the interaction between GTP and phenylalanine 28, proline 29 (which is now a serine in the mutant protein) and glycine 30 was lost [69].
The effects of fast-cycling RAC1 mutants on membrane ruffling are in contrast to a study of the cellular effects triggered by a panel of CDC42 and RAC1 mutants [54]. In this study, it was shown that fast-cycling mutants of CDC42 and RAC1 (including RAC1/F28L and RAC1/P29S) trigger the formation of filopodia in human fibroblasts as well as in porcine aortic endothelial cells. In contrast, GTPase-deficient mutants of CDC42 and RAC1 induce the formation of lamellipodia (Figure 2) [54]. The reason for this discrepancy was not clear, but COS7 cells do not represent an ideal model system for the analysis of cytoskeletal reorganization, and the generation of NIH3T3 cells stably expressing RAC1/P29S mutants might trigger compensatory mechanisms that can confound the acute effects on actin organization. Such a cell-type-dependent response was indeed supported by a study by Mohan et al. [70], where they showed that expression of RAC1/P29S in melanoma A375 cells induced the formation of extended lamellipodia driven by dendritic actin networks. Interestingly, the RAC1/P29S-induced cell proliferation was dependent on the integrity of the actin network. Moreover, the formation of extended lamellipodia resulted in sequestration and inactivation of the tumor suppressor Merlin/NF2 in a mechanism that involved phosphorylation of serine 518 on Merlin/NF2 by PAK1 [70]. This finding is in agreement with the RAC1/P29S-dependent increase in PAK1 activity described previously [5].
The studies described thus far have mainly described the cellular functions of RAC1/P29S in vitro, so what about the in vivo functions? RAC1/P29S was shown to regulate the expression of 'Programmed death-ligand 1' (PD-L1), which is an immune regulatory molecule [71]. PD-L1 up-regulation might allow cancers to evade immune control, and it is often associated with increased tumor aggressiveness [72]. Recurrent RAC1/P29S in primary cutaneous melanomas has often been shown in conjunction with mutant BRAF, and possibly aggravates BRAF-dependent disease progression [73]. Studies in mice demonstrated that RAC1/P29S can serve as an oncogenic driver mutation. Ubiquitous expression of RAC1/P29S at endogenous levels in adult mice resulted in B-cell lymphoma [74]. RAC1/P29S expression alone in mouse melanocytes did not result in melanoma; however, in combination with mutant BRAF, RAC1/P29S expression resulted in melanoma in these mice. The most plausible mechanism for RAC1/P29S in tumor progression is through activation of the serum response factor/myocardin-related transcription factor (SRF/MTRF) transcriptional programs, which lead to mesenchymal transition of melanocytes [74].
Some additional cancer-associated mutants in RAC1 and RAC2 have been reported in common cell lines and are given in public databases [61]. This way, two new fast-cycling mutants of RAC1 were identified, RAC1/N92I and RAC1/C157Y; furthermore, they were shown to undergo in vitro transformation [61]. However, not much else is known about these mutant proteins and how they contribute to human cancer.
RAC1B
The first observation that suggested that oncogenic RHO GTPases do exist outside the test-tube came from the discovery of a splice variant of RAC1, called RAC1B. This is not a recurrent somatic mutation, but rather a splice variant present at low levels in many human tissues, and it was noted for colorectal cancer [51]. RAC1B results from an alternative splicing event that adds 57 extra nucleotides between codons 75 and 76, which results in 19 extra amino-acid residues immediately behind the switch II motif [51,52]. Importantly, studies of its kinetics revealed that RAC1B has fast-cycling properties [75]. RAC1B was shown not to bind RHOGDI, and in contrast to RAC1/P29S, not to interact with PAK1, or at least, the interaction with full-length PAK1 appears to be abolished [76,77]. The binding spectra of all of the RAC1 mutants are not completely known, but it is clear that RAC1B has a different affinity for many of the RAC1 effectors identified, and triggers other downstream pathways compared to wild-type RAC1 [6]. For instance, RAC1B does not activate NF-κB or cyclin D1 expression but can trigger the AKT signaling pathway [77].
Several studies have indicated that increased RAC1B expression is associated with cellular transformation in commonly used cell models [77,78]. RAC1B is not involved in the formation of lamellipodia but is involved in the formation of filopodia [54,78]. What is the mechanism underlying the effects on cytoskeletal organization and cell morphology? One model suggests that RAC1B interferes with RAC1 signaling, as forced expression of a GTPase-deficient mutant of RAC1B (RAC1B/G12V) results in loss of endogenous RAC1 at peripheral membranes and an increase in activated RHOA [78]. According to this model, the RAC1B-induced cellular transformation will be dependent on RHOA activity. However, RAC1B does not appear to trigger all of the RHOA-dependent cellular effects, e.g., it does not appear to trigger the formation of stress fibers. An interesting study showed that MMP-3-induced cell transformation requires RAC1B [79], where they also showed that MMP-3 activation resulted in epithelial-mesenchymal transition (EMT) in mouse breast epithelial cells. MMP-3 induction was associated with activation of RAC1B, as well as with its increased expression, which resulted in ROS formation followed by Snail1 expression and EMT [79].
What is the function of RAC1B, and is it only expressed during disease? Studies on the evolutionary aspects of RAC1B have demonstrated that the exon involved in the alternative splicing can be found exclusively in amniotes, so it is possible that RAC1B has a role in a normal cellular context [36]. Additionally, while RAC1B has cell-transforming properties at the cellular level, what is its role in cancer? This is still an open question, and one role suggested by Nimnual et al. would be its modulation of the function of the major splice variant, or to shift the balance between RAC1- and RHOA-dependent signaling [78]. There are several reports on RAC1B expression in human cancers, such as breast, colorectal, lung, thyroid, and pancreatic cancers [51,52,56-60]. However, many of these studies are based on cellular models and associations, rather than on hard data that unequivocally puts RAC1B as a causative factor in tumor progression. An important step to define a mechanism for RAC1B in progression of colorectal cancer comes from a recent study by Gudiño et al. [80]. They showed that high RAC1B expression correlates with high WNT activity and poor prognosis. In a mouse model for colorectal cancer, it was shown that abrogating Rac1b resulted in significantly decreased tumor burden and increased overall survival of the mice. This study also demonstrated the presence of a hitherto undiscovered role of RAC1B in EGF receptor trafficking. Deletion of Rac1b resulted in decreased EGFR internalization and increased receptor degradation through lysosomal sorting. Thus, increased Rac1b expression was associated with increased EGFR signaling. Interestingly, in studies using patient-derived organoids, it was shown that tumors resistant to EGFR inhibitors (e.g., cetuximab) were sensitive to RAC1B depletion, which suggests that this strategy might be used in a clinical setting [80].
The studies discussed thus far indicate the tumor-promoting role of RAC1B. However, there are indications that RAC1B might serve as a tumor suppressor, in particular in the context of TGFβ signaling [81,82]. Studies in pancreatic cancer cells showed increased TGFβ-dependent cellular responses, such as activation of the MKK6/p38 and MEK/ERK signaling pathways in cells where RAC1B was ablated. In addition, TGFβ-induced expression of EMT marker genes and the morphological cell alterations associated with EMT were much more pronounced in cells lacking RAC1B [83]. Thus, there is support for RAC1B as a tumor-promoting as well as a tumor-suppressing factor, and the reason for this apparent dichotomy is not clear at the moment. RAC1B has been shown to modulate RHOA signaling; however, the effect on the activity of the additional 18 RHO GTPases expressed in human tissues is not known. More studies are clearly needed to clarify the picture and reveal the molecular functions of RAC1B in more detail.
RHO Mutants in Cancer
Research strategies similar to those that identified the fast-cycling mutants of RAC1 have also led to the identification of somatic mutations in RHOA (Table 2). For instance, mutations in RHOA were identified in angioimmunoblastic T-cell lymphoma, peripheral T-cell lymphoma, adult T-cell leukemia/lymphoma, and diffuse-type gastric carcinoma [62][63][64][65][66]. The most common RHOA mutation in these datasets was substitution of a valine for a glycine at codon 17 (RHOA/G17V), which appeared at high incidence, although other mutations were also identified [6,62,63,84].
RHOA/G17V was not defined as a fast-cycling RHO GTPase; instead, it has kinetics similar to those of the classical dominant-negative RHOA variant RHOA/T19N. This mutant protein has decreased affinity for guanosine nucleotides and higher affinity for RHOGEFs, and it is thought that dominant-negative RHO GTPases sequester RHOGEFs, and thereby block the downstream signaling. Expression of RHOA/G17V in Jurkat T cells resulted in increased cell proliferation and invasion, properties that are indicative of cell transformation. The underlying mechanism is not entirely clear here, but presumably this down-regulation of RHO-dependent signaling results in increased RAC1 signaling. Interestingly, expression of RHOA/G17V in NIH3T3 fibroblasts or HeLa cells resulted in loss of stress fibers and formation of filopodia-like protrusions, a response that resembles the phenotype induced by the expression of fast-cycling RHO GTPases [63,65]. In addition to this panel of dominant-negative variants of RHOA, two mutations, RHOA/C16R and RHOA/A161P, have been described as fast-cycling variants of RHOA [64], with little known of the roles of these two mutant proteins in a clinical setting.
Additional Cancer-Associated Mutations in RHO GTPases
There are sporadic observations of cancer-associated point mutations in other RHO GTPases (Table 2). Somatic mutations in CDC42 have been identified in well-differentiated papillary mesothelioma, where two different CDC42 mutants were detected: CDC42/P34Q and CDC42/Q61R [67]. The nucleotide-binding characteristics of these mutants have, to date, not been analyzed, but the CDC42/Q61L mutant is known to have defective GTPase activity. The proline at codon 34 resides in the so-called effector-binding loop, and it is therefore expected to result in altered binding to effector proteins, and thereby to altered downstream signaling [85].
Summary
Although oncogenic mutations in RHO GTPases are rare, there clearly remain many more to be characterized (e.g., see Catalogue of Somatic Mutations in Cancer database; https://cancer.sanger.ac.uk/cosmic). In addition, although oncogenic mutations in RHO GTPases are not common in a global setting, they might still have major impact on certain tumor types and might thus serve as 'druggable' targets in this context. A promising example is the combinatory treatment of malignant melanoma with B-RAF inhibitors and SRF/MTRF inhibitors in a mouse model of malignant melanoma [73]. Another example was the finding that tumors resistant to EGFR inhibitors are sensitive to RAC1B depletion. Hopefully, we will see more examples of this type in the future, and that this type of treatment regimen will eventually reach the clinic setting.
Conclusions
Research during the last couple of years has resulted in a paradigm shift and has bestowed RHO GTPases, in particular RHO GTPases with fast-cycling properties, with key roles in human cancer. A substantial number of cancer-associated point mutations in predominantly RAC1, RHOA, and CDC42 have been found and characterized. The kinetic properties of the oncogenic mutations in RHO GTPases differ from those of mutations in RAS; the former are most often fast-cycling and the latter GTPase deficient. These properties could make the oncogenic RHO GTPases, and/or signaling pathways regulated by this category of RHO GTPases, potential targets for future cancer treatments.
Acknowledgments: I am grateful for the help from Stefan Knight, Uppsala University, Sweden, with the ChimeraX software.
Conflicts of Interest:
The author declares that there are no competing financial interests. | 6,948.2 | 2022-04-01T00:00:00.000 | ["Biology"] |
Aqueous Extract of Leaves and Flowers of Acmella caulirhiza Reduces the Proliferation of Cancer Cells by Underexpressing Some Genes and Activating Caspase-3
The increasing prevalence of cancers and the multiple side effects of cancer treatments have led researchers to constantly search for plants containing bioactive compounds with cell death properties. This work aimed at evaluating the antiproliferative effect of an Acmella caulirhiza extract. After evaluation of the in vitro antioxidant potential of the three extracts of Acmella caulirhiza (aqueous (AE-Ac), hydroethanolic (HEE-Ac), and ethanolic (EE-Ac)) through the scavenging of DPPH● and NO● radicals, the extract with the best antioxidant activity was selected for bioactive compound assessment and antiproliferative tests. Subsequently, the cytotoxic activity was evaluated on the viability of breast (MCF-7), brain (CT2A, SB-28, and GL-261), colon (MC-38), and skin (YUMM 1.7 and B16-F1) cancer cell lines using the MTT method. Then, the line on which the extract was most active was selected to evaluate the expression of certain genes involved in carcinogenesis by RT-PCR and the expression of cleaved caspase-3, involved in the cell death mechanism, by western blot. The AE-Ac showed the best scavenging activity, with IC50s of 0.52 and 0.02 for DPPH● and NO●, respectively. This AE-Ac was found to contain alkaloids, flavonoids, and tannins and was most active on YUMM 1.7 cells (IC50 = 149.42 and 31.99 μg/mL for 24 and 48 h, respectively). Results also showed that AE-Ac downregulated the expression of inflammation (IL-1b (p = 0.017) and IL-6 (p = 0.028)), growth factor (PDGF (p = 0.039), IGF (p = 0.034), E2F1 (p = 0.038), and E2F2 (p = 0.016)), and antiapoptotic protein genes (Bcl-2 (p = 0.028) and Bcl-6 (p = 0.039)). Cleaved caspase-3 was positively modulated by the AE-Ac, thus inducing cell death by apoptosis. AE-Ac showed inhibitory effects on the expression of genes involved in cancer progression, making it a potential health intervention agent that can be exploited in cancer therapy protocols.
Introduction
Despite various technological advances, the cancer survival rate is still very low, and cancer is associated with about 9.96 million deaths worldwide [1]. Breast cancer is the most common cancer, with an incidence of 11.7%, followed by lung (11.4%), colorectal (10%), prostate (7.3%), stomach (5.64%), and other types of cancer (53.9%) [1]. The word "cancer" refers to a group of diseases that involve abnormal cell growth and invasion of adjacent or distant cells (or tissues). Various agents, both exogenous (radiation, viruses, or toxins) and endogenous (mutations), can affect the cell at several levels (genetic, biochemical, and epigenetic), directly cause the deregulation of programmed cell death, and initiate carcinogenesis [2].
Initiation of the extrinsic apoptotic cell death pathway occurs when death receptors (Fas), tumour necrosis factor (TNF-α) receptors (TNFR1 and TNFR2), and TNF-related apoptosis-inducing ligand (TRAIL) receptors DR4 and DR5 are occupied by their respective ligands [3,4]. The intracellular portions of death receptors possess a conserved protein-protein interaction domain known as the death domain, which is a binding site for adaptor proteins, such as the TNF receptor-associated death domain (TRADD) and the Fas-associated death domain (FADD), as well as initiator caspase 8 [5]. Activated caspase-8 in turn stimulates effector caspase-7, enabling cleavage of the death agonist protein BH3, which translocates to the mitochondria and triggers cytochrome C release. In the cytosol, cytochrome C forms a multiprotein complex with apoptotic protease activating factor-1 (Apaf-1) and procaspase-9, the so-called apoptosome. The apoptosome permits the conversion of procaspase-9 to active caspase-9, which in turn contributes to the activation of effector caspase signaling, destroying the cell through apoptosis [5,6]. In contrast to antiapoptotic proteins (Bcl-2, Bcl-6, etc.), proapoptotic proteins (Bcl-2 family) act by forming pores in the mitochondrial membrane to release cytochrome C [7].
Once cancer is initiated, the cells immediately begin to secrete several factors, such as vascular endothelial growth factor (VEGF), transforming growth factor (TGF), metalloproteinase 2, and angiopoietin-1 (Ang-1), that promote the formation of new vessels, which supply the cells with nutrients, blood, and energy and thus allow them to escape chemotherapies [8].
Anticancer treatments target several mechanisms through the use of antimetabolites (raltitrexed), alkylating agents (cyclophosphamide), topoisomerase inhibitors (doxorubicin), mitotic spindle poisons (vincristine), and cytotoxic agents (bortezomib), but all are accompanied by severe side effects (relapses, severe anaemia, and weight and hair losses). Researchers believe that exploiting the mechanisms of cell death remains a better alternative for cancer management. Thus, several medicinal plants, such as Cola verticillata, Indonesian cucumber, and even those of the Amaryllidaceae family, have been shown to have beneficial properties in the management of diseases including cancers [9][10][11]. These plants exert their activities through the presence of bioactive compounds such as alkaloids and phenolic compounds. Alkaloids from the Amaryllidaceae family inhibit the p53-independent effects on the proliferation of colon cancer cells [12]. Phenolic compounds such as ampelopsin and apigenin, through their antioxidant and anti-inflammatory properties, induce cell death by apoptosis, suppressing miR-512-3p and promoting G1 arrest of the cell cycle involving the p27Kip1 protein in glioma and breast cancer cells [13][14][15][16]. Other bioactive compounds, like tomentosin, a terpenoid isolated from plants of the Asteraceae family such as Inula viscosa, and jolkinolide B (extracted from Euphorbia kansui), inhibit the proliferation and migratory activity of cancer cells by downregulating the PI3K-Akt pathway and the expression of certain proinflammatory genes [17][18][19]. Meilawati et al. [20] have shown that scopoletin, a coumarin present in most edible plants, exerts its anticancer activities through multiple mechanisms, including the modulation of cell cycle arrest, the induction of apoptosis, and the regulation of multiple signaling pathways. Acmella caulirhiza, a flowering plant belonging to the Asteraceae family, is seasonally found in humid tropical areas. It is used as an ornamental plant but is also consumed as a vegetable by the people of Madagascar and the Comoro Islands [21]. Traditionally, the whole plant is used to fight respiratory diseases (cold, asthma, and tuberculosis), baby diaper rash, dental caries [22], and cancer [23]. Studies also revealed that its crude extracts possess anti-inflammatory, antimicrobial, and antioxidant properties [24], which are commonly noted in plants with proven anticancer potential [18,25], thus making Acmella caulirhiza a potential candidate in the treatment of cancer. Hence, the present study aimed at evaluating the antiproliferative properties of the aqueous extract of Acmella caulirhiza (AE-Ac) on some cancer cell lines.
Materials
2.1.1. Cell Lines. The cancer cell lines used for this study, as described in Table 1, were obtained from the cell bank of the Rothlin-Ghosh Lab, Howard Hughes Institute, Yale School of Medicine (USA).
Cell Culture.
Cells were grown in their respective media at 37 °C in a humidified atmosphere with 5% CO2 and 95% air. The cells were trypsinized (0.1% trypsin) at 85% confluency.
Plant Material and Preparation of Extracts.
The leaves and flowers of A. caulirhiza were collected in October 2018 at Bandjoun (West Region, Cameroon) and identified at the National Herbarium of Cameroon (NHC) under the number 602, in comparison with specimen number 57420/NHC of the herbarium. The material was then sorted, cleaned, and dried until constant weight before being powdered and stored at room temperature in a tightly closed amber bottle. The extracts were prepared according to the protocol of Fiardilla et al. [26] with slight modifications. For the preparation of the aqueous, ethanolic, and hydroethanolic extracts, the same quantity of Acmella caulirhiza powder (100 g) was macerated in 1200 mL of distilled water (for 24 hours), 95% ethanol (for 72 hours), or a 1:1 mix of water and 95% ethanol (for 48 hours), respectively, at room temperature. The supernatant of each mixture was collected by filtration using Whatman paper No. 3. Each resulting filtrate was frozen and then freeze-dried using a USIFROID SMH 45 (Lagep) to obtain the different extracts, labelled AE-Ac for the aqueous extract, HEE-Ac for the hydroethanolic extract, and EE-Ac for the ethanolic extract.
The extract with the best antioxidant capacity was selected for further work.
Quantitative Phytochemical Analysis
(i) Estimation of total phenolic content: The phenol content was evaluated using the method described by Singleton and Rossi [29]. To 30 μL of extract (1 mg/mL) prepared in an ethanol solution, 1 mL of Folin-Ciocalteu (0.2 N) solution and 1 mL of sodium carbonate were added. Thirty (30) minutes after incubation at 25 °C, the absorbance was read at 750 nm. Gallic acid was used as the standard and was treated under the same conditions as the extract. The total phenolic content was expressed in micrograms of gallic acid equivalence per gram of dry matter (μg GAE/g DM).
(ii) Estimation of flavonoid content: The flavonoid content was evaluated using the method described by Bohorun et al. [30]. To one mL of the extract (1 mg/mL), 1 mL of aluminum chloride (10%), 1 mL of potassium acetate (1 M), and 5.6 mL of distilled water were added. The mixture was allowed to stand at 25 °C for 30 min. The absorbance of the reaction mixture was read at 420 nm with a spectrophotometer. Catechin was used as the standard and treated under the same conditions. The flavonoid content was expressed in micrograms of catechin equivalence per gram of dry matter (μg CaE/g DM).
(iii) Estimation of alkaloid content: Quantification of the total alkaloid content in the extract was performed according to the method described by Singh et al. [31] with slight modifications. 100 mg of extract was dissolved in 10 mL of ethanol solution (80%, v/v). The mixture was homogenized and centrifuged for 10 min at 4000 g; 1 mL of the supernatant was introduced into a test tube, followed by the addition of 1 mL of acidified FeCl3 solution (FeCl3, 0.025 M; HCl, 0.5 M) and 1 mL of an ethanolic solution of 1,10-phenanthroline (0.05 M). The mixture was then homogenized and incubated for 30 min at 100 °C in a water bath. The absorbance of the reddish complex formed was read at 510 nm against the blank. Quinine (10 μg/mL) was used as the standard, and the alkaloid content was expressed in micrograms of quinine equivalence per gram of dry matter (μg QiE/g DM).
(iv) Estimation of tannin content: The method described by Bainbridge et al. [32] was used to estimate the total tannin content of the AE-Ac. In this protocol, 1 mL of the extract (1 mg/mL) was mixed with 5 mL of working solution (50 g of vanillin + 4 mL of HCl in 100 mL of distilled water), and the mixture was incubated at 30 °C for 20 min. The absorbance was read at 500 nm against the blank. Gallic acid (0-1000 μg/mL) was used as the standard, and the calibration curve was used to compute the tannin content of the extract. The results were expressed in micrograms of gallic acid equivalence per gram of dry matter (μg GAE/g DM).
(v) Estimation of total terpenoid content: Total terpenoids were determined according to the method of Ghorai et al. [33]. To 500 mg of extract was added 3.5 mL of ice-cold 95% methanol. The mixture was homogenized before centrifugation at 4000 g for 15 min at room temperature, and the supernatant was collected. To 200 μL of supernatant, 1.5 mL of chloroform was added, and the mixture was then mixed thoroughly and left to stand for 3 minutes. Then, 100 μL of sulfuric acid was added and the whole mixture was incubated at room temperature for 2 h in the dark. The supernatant was then carefully and gently decanted without disturbing the precipitate. Then, 1.5 mL of 95% methanol was added and vortexed until the precipitate completely dissolved in the methanol, and the absorbance was read at 538 nm. Results were expressed using linalool as the reference molecule.
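Each of these assays converts a measured absorbance into standard equivalents via a linear calibration curve. The sketch below illustrates that generic calculation; the function name, the example absorbance values, and the 1 mg/mL assay concentration used for the unit conversion are illustrative assumptions, not data from this study.

```python
import numpy as np

def equivalents_from_standard_curve(std_conc, std_abs, sample_abs):
    """Fit a linear standard curve (absorbance = slope*concentration + intercept)
    and convert a sample absorbance into standard-equivalent concentration.
    Units follow those of std_conc (e.g., ug/mL gallic acid)."""
    slope, intercept = np.polyfit(std_conc, std_abs, 1)
    return (np.asarray(sample_abs) - intercept) / slope

# Illustrative numbers only (not the paper's data):
gallic_conc = np.array([0, 100, 250, 500, 750, 1000])   # ug/mL
gallic_abs  = np.array([0.02, 0.11, 0.26, 0.50, 0.74, 0.99])
extract_abs = 0.33
gae = equivalents_from_standard_curve(gallic_conc, gallic_abs, extract_abs)
# If the assayed solution contains 1 mg of extract per mL, gae ug/mL corresponds
# to gae * 1000 ug GAE per gram of dry matter.
print(f"~{gae:.1f} ug GAE/mL, i.e. ~{gae * 1000:.0f} ug GAE/g DM at 1 mg/mL extract")
```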
Cell Viability Assay.
Cell viability was assessed using the thiazolyl blue tetrazolium bromide (MTT) assay according to Ahmad et al. [34] with some modifications. About 5000 cells of B16F and YUMM 1.7; 10000 of CT2A, MC-38, and SB-28; 15000 of GL-261; and 20000 of MCF-7 were plated per well in 96-well plates overnight. They were then treated with the extract having the best antioxidant capacity at 5, 10, 50, 100, 250, or 500 μg/mL and incubated for 24 h or 48 h. MCF-7 cells were also treated for 72 h (based on the fact that their half-life is about 42 hours). At the end of the different incubation periods, the treatment media were removed, 10 μL of 5 mg/mL MTT solution was added to each well, and the plates were incubated for 4 h. Later, 100 μL of DMSO was added to each well, and the plates were protected from light and incubated overnight.
Antioxidant Potential of Extracts of Acmella caulirhiza.
The NO● and DPPH● radical scavenging potential of the different extracts of A. caulirhiza is reported in Figures 1(a) and 1(b). Results showed that the extracts scavenged these radicals in a concentration-dependent manner. The AE-Ac presented the highest scavenging capacity towards NO● and DPPH●, with SC50 values of 0.02 and 0.52, respectively, followed by the HEE-Ac (Table 3). As the tumour microenvironment is characterized by a high oxidative stress state, the AE-Ac, with the best antioxidant activity, was selected for further study.
Bioactive Content of the Aqueous Extract of Acmella caulirhiza.
Results revealed the presence of total phenolics, alkaloids, and flavonoids in the aqueous extract of A. caulirhiza (Table 4).
Effect of AE-Ac on Cell Viability.
A concentration-dependent effect of the AE-Ac was noted on the proliferation of the cell lines, as reported in Figures 2(a)-2(g). The CT2A, SB-28, MC-38, and MCF-7 cell lines were the most resistant to the AE-Ac, as their IC50s were higher than 500 μg/mL after 24 h of exposure. However, after 48 h of treatment, their IC50s were >500 and 351.5 μg/mL. As the antiproliferative properties of the extract were most perceptible on the YUMM 1.7 cell line, it was selected for evaluating the ability of the extract to regulate the expression of certain genes.
Effect of AE-Ac on Lipid Peroxidation.
RSL3 induced lipid peroxidation in YUMM 1.7 cells, but the extract significantly limited the peroxidative properties of RSL3 at all three concentrations. Nonetheless, this protective property was lower than that of ferrostatin-1 (p < 0.05) (Figure 3).
Effect of the AE-Ac on the Expression of Some Genes Involved in Carcinogenesis in YUMM 1.7 Cells
Effect of the AE-Ac on Proinflammatory Gene Expression.
Except for IL-10, treatment of YUMM 1.7 cells with the aqueous extract of A. caulirhiza led to a decrease in the expression of the TNF-α, IL-1b (p = 0.017), and IL-6 (p = 0.028) genes compared to the control (Figure 4).
Effect of AE-Ac on Two Antiapoptotic Protein Genes.
After exposure of YUMM 1.7 cells to AE-Ac, a significant downregulation of Bcl-2 (p = 0.028) and Bcl-6 (p = 0.039) gene expression was observed (Figure 6).
Effect of AE-Ac on Cleaved Caspase-3 Expression.
Results revealed that the AE-Ac induced apoptosis just like etoposide (an inducer of apoptosis) in cells through the activation of cleaved caspase-3, a cysteine protease involved in cell death by nucleic acid degradation (Figure 7).
Discussion
Cancer cells are unable to control the expression of cell death genes, making them resistant to chemotherapies. Anticancer therapies target several mechanisms of cell death through the use of antimetabolites, alkylating agents, mitotic spindle poisons, and even cytotoxic agents. However, these therapies are toxic not only to cancer cells but also to normal cells of the body. The control of natural mechanisms of cell death therefore remains a better alternative for the management of cancers. The results of this study, which aimed at evaluating the antiproliferative properties of the aqueous extract of Acmella caulirhiza, revealed that the extract exerted antiproliferative properties through the downregulation of the expression of some genes involved in the carcinogenesis pathway, and that this was due to the presence of bioactive molecules. The evaluation of the in vitro antioxidant potential (scavenging of DPPH● and NO● radicals) of the three extracts of Acmella caulirhiza (EE-Ac, HEE-Ac, and AE-Ac) showed that the AE-Ac exhibited the best free radical scavenging activity (Table 3). This antioxidant capacity of the extract is linked to its high content of phenolic compounds as well as other bioactive compounds like alkaloids (Table 4). Phenolic compounds have free hydroxyl groups and conjugated double bonds in their structures, capable of donating hydrogen or electrons to a free radical or a metal [35]. Also, it is well known that the bioactive compounds of plants have already demonstrated anticancer properties in both in vitro and in vivo studies [11,36,37], hence the selection of the aqueous extract for antiproliferative tests. One of the biological characteristics of cancer cells is the production of reactive oxygen species. One of the major drawbacks of chemotherapy is lipid peroxidation, which is frequently caused by interactions of these species with polyunsaturated fatty acids in lipid membranes. Lipid peroxidation is considered a key biochemical process in the toxicity process that causes cell death as well as oxidative damage to cellular components. During this process, free radicals steal electrons from cell membrane lipids, which jeopardizes cell life by causing decreased membrane fluidity, increased membrane permeability, and decreased physiological performance [38]. In this study, the potential of AE-Ac to inhibit lipid peroxidation was measured by evaluating the viability of YUMM 1.7 cancer cells exposed to RSL3 (a lipid peroxidation activator), which acts by inhibiting glutathione peroxidase 4. AE-Ac at all concentrations (50, 100, and 150 μg/mL) protected cell membranes against the peroxidative action of RSL3. This activity could be explained by the antioxidant properties of the aqueous extract of A. caulirhiza through its ability to scavenge the free radicals generated by tumour cells, thus preventing the peroxidation of membrane lipids [38].
The evaluation of the cytotoxic properties of this aqueous extract on breast, brain, skin, and colon cancer cell lines showed that the extract was most active on YUMM 1.7 cells, with IC50s of 149.42 and 31.99 μg/mL after 24 and 48 h, respectively (Table 5). Xie et al. demonstrated that alkaloids, flavonoids, coumarins, and terpenoids, also present in our extract, upregulated the expression of p27Kip1, leading to a decrease in cyclin-D, cyclin-E, and CDK2/4/6 proteins in melanoma cancer cell lines (WM1361B and WM983A) [16], colon cancer cell lines (HCT-116, LoVo, and DLD-1) [12], and the human breast cancer cell line MCF-7 [37]. This could lead to disruption of the retinoblastoma and E2F proteins, arresting the cell cycle in the G1/G0 phase.
This downregulated the transcription factor NF-κB and upregulated tumour suppressor factors (p53 and p21), which are cellular gatekeepers of growth.
Cleaved caspase-3 is well known as an executor protease of cell death by apoptosis through the reduction of mitochondrial membrane potential. The AE-Ac induced the expression of this protein (Figure 7). The flavonoids in the extract could reduce mitochondrial membrane potential, leading to the release of apoptogenic factors such as Arts, Diablo, Second Mitochondria-Derived Activator of Caspase (SMAC), and High-Temperature Requirement Protein A2 (Omi/HTRA2). These proteins can block the action of the apoptosis inhibitor proteins Bcl-2 and Bcl-xL (which inhibit apoptosis by preventing the formation of macropores in the mitochondrial membrane, the release of cyt c, and the formation of the apoptosome), allowing the activation of caspase-3 [39,40]. Once activated, caspase-3 translocates to the nucleus in its cleaved form, where it causes cell death by DNA fragmentation and degradation of nucleic acids [6,41]. Indeed, Bernard et al. have shown that cleaved caspase-3 modulated gene transcription by interacting with gene promoters and inhibiting their expression, as seen for the VEGFA gene in ChIP experiments [42,43]. This resulted in the downregulation of several pathways of angiogenesis (FAS, TRAIL, IFN-γ, TNF receptor, and RAC1) in MCF-7 and human Jurkat leukaemia cells [40,44]. In addition, studies have shown that apigenin (a flavonoid) exerted antiproliferative activities on the human melanoma cell lines A375P and A375SM through the activation of the apoptotic pathway and the decrease of antiapoptotic protein Bcl-2 expression [15,45].
To escape cell death, cancer cells also alter the expression of genes and proteins associated with inflammation. Some cytokines and proinflammatory proteins, like TNF-α and the interleukins, are promoters of carcinogenesis through the activation of several signaling pathways [3,7,46]. PCR analyses showed that treatment of YUMM 1.7 cells with the AE-Ac downregulated the expression of the proinflammatory genes (TNF-α, IL-1β, and IL-6) (Figure 4). Mathieu et al. demonstrated that alkaloids such as those of the Amaryllidaceae (lycorine, narciclasine, and haemanthamine) exert anticancer activities through the inhibition of NF-κB (an oncogene involved in tumorigenesis and resistance to apoptosis [47]) [12]. This consequently activated the p53 tumour suppressor gene, leading to a decrease in the expression of TNF-α, IL-6, IL-1β, and VEGF in colorectal cancer cells [12]. On the other hand, growth factors (IGF-1, TGFβ, VEGF, etc.) are activated by binding to their respective receptors, which can activate several signaling pathways involved in cell proliferation [8]. PCR results showed that growth factor gene expression was downregulated after 4 h of treatment (Figure 5). This could be due to the presence of flavonoids, which can prevent the binding of ligands to their membrane receptors, such as the epidermal growth factor receptor/mitogen-activated protein kinase (EGFR/MAPK) pathway, thus inhibiting their activity with the consequence of stopping cell proliferation [48,49].
The deregulation of B-cell lymphoma family proteins is a main feature of malignant diseases. It is responsible for resistance to cell death and thus to treatment [7,50]. In this study, exposure of cells to AE-Ac resulted in a downregulation of the expression of the antiapoptotic genes Bcl-2 and Bcl-6 (Figure 6), likely due to the terpenoids present in the AE-Ac. Indeed, Yang et al. had shown that borneol, a bicyclic monoterpenoid, caused a significant release of cyt c, which promotes apoptosome formation by aggregating caspase-9 with Apaf-1 in the cytosol, triggering apoptosis [16,18]. Results of the western blot analysis (Figure 7) show that the extract also induced death by apoptosis through its ability to promote the activation of caspase-3.
Conclusion
The AE-Ac exhibited the highest radical scavenging activities, owing to the presence of bioactive compounds (alkaloids, flavonoids, tannins, and terpenoids). This extract was more cytotoxic to YUMM 1.7 cells after 24 and 48 h incubation periods than to the other cancer cell lines. The AE-Ac can induce cell death through the underexpression of inflammation, growth factor, and antiapoptotic protein genes. The presence of cleaved caspase-3 after treatment of YUMM 1.7 cells with the extract confirmed its capacity to induce apoptosis. Studies of other antiproliferative pathways of this extract could reinforce the potential of the aqueous extract of Acmella caulirhiza as a candidate for cancer treatment [51].
Figure 6: Effect of AE-Ac on some antiapoptotic protein genes in YUMM 1.7 cells. Bcl: B cell lymphoma. *Significant difference at p < 0.05 compared to the control.
Table 1: Cell lines and description.
The absorbance was read at 570 nm. Significance was set at p < 0.05. The IC50 and SC50 values were obtained by linear regression, and Microsoft Excel 2016 spreadsheet software was used to plot the graphs.
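The IC50 and SC50 values were reportedly obtained by linear regression of the dose-response data. A minimal sketch of such a calculation follows, assuming a percent-viability readout that decreases roughly linearly with concentration over the tested range; the function name and the example values are illustrative, not the authors' spreadsheet.

```python
import numpy as np

def ic50_linear(concentrations, viability_percent):
    """Estimate IC50 by linear regression of % viability against concentration,
    solving for the concentration giving 50% viability. Assumes viability
    decreases with concentration over the fitted range."""
    slope, intercept = np.polyfit(concentrations, viability_percent, 1)
    return (50.0 - intercept) / slope

# Illustrative dose-response values only:
conc = np.array([5, 10, 50, 100, 250, 500])   # ug/mL
viab = np.array([95, 90, 78, 60, 35, 12])     # % of untreated control
print(f"IC50 ~ {ic50_linear(conc, viab):.1f} ug/mL")
```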
2.5. Effect of the Extract on the Expression of Some Genes Involved in Carcinogenesis. The YUMM 1.7 cells (350000/well in a 6-well plate) were treated with 150 μg/mL of AE-Ac in triplicate and incubated for 24 h. RNA extraction and purification were carried out for each treatment using the RNeasy Mini Kit (QIAGEN, USA). RNA quantification was done using the NANODROP 2000 spectrophotometer from Thermo Fisher Scientific. The purified RNA was then used to synthesize cDNA using Bio-Rad's iScript cDNA Synthesis Kit. For qPCR analyses, the KAPA SYBR FAST Universal kit with the corresponding primers was prepared for each gene in triplicate. The C1000 Touch Thermal Cycler from Bio-Rad with its associated software was used to run the qPCRs. All kits were used according to the manufacturers' protocols. The genes evaluated included growth factor (PDGF, IGF-1R, TGFβ, VEGF, E2F1, and E2F2), antiapoptotic (Bcl-2 and Bcl-6), and cytokine (TNF-α, IL-1b, IL-6, and IL-10) genes.
2.6. Effect of the Extract on the Expression of Cleaved Caspase-3. Following 24 h of treatment of YUMM 1.7 cells (350000 cells per well in a 6-well plate) with 150 μg/mL of extract or 2 μM of etoposide, cells were lysed using RIPA buffer (50 mM Tris-HCl pH 7.4, 150 mM NaCl, 1% NP-40, 0.1% SDS, and 2 mM EDTA). Protein content was determined using the BCA assay kit (Thermo Fisher Scientific) according to the manufacturer's instructions. Samples (nontreated cells and cells treated with either extract or etoposide) were loaded and run in a polyacrylamide gel for 90 min at 100 V. Separated proteins were then transferred to a PVDF membrane and blocked using Intercept Blocking Buffer (LI-COR). The membrane was then incubated with a primary antibody from rabbit (caspase-3, Cell Signaling Technology) and a mouse antibody for β-actin (Cell Signaling Technology) for 24 h in a cold room. The membrane was then washed 3 times with PBST (PBS + 0.1% Tween 20) and incubated with the secondary antibodies: a donkey anti-rabbit antibody (Cell Signaling Technology) conjugated to IR680 (red) to detect cleaved caspase-3, and a donkey anti-mouse antibody conjugated to IR800 (green) to reveal β-actin. The membrane, protected from light, was incubated for 1 h and later visualised using the Odyssey system with ImageStudio (LI-COR).
2.7. Statistical Analysis. Data were expressed as mean ± standard deviation (SD). Statistical analysis was performed using Statistical Package for the Social Sciences (SPSS) software version 20.0 for Windows. One-way analysis of variance (ANOVA) and the least significant difference (LSD) post hoc test were used to compare means between groups.
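The qPCR workflow above reports relative gene expression, but the quantification formula is not stated; a common choice for SYBR-based assays is the 2^-ΔΔCt method of Livak and Schmittgen, sketched below with hypothetical Ct values and a generic housekeeping reference gene.

```python
import numpy as np

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method (Livak & Schmittgen).
    Each argument is a list of Ct values across technical replicates;
    the reference (housekeeping) gene normalizes for input amount."""
    d_ct_treated = np.mean(ct_target_treated) - np.mean(ct_ref_treated)
    d_ct_control = np.mean(ct_target_control) - np.mean(ct_ref_control)
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct triplicates only (not measured data):
fc = fold_change_ddct([26.1, 26.3, 26.0], [18.2, 18.1, 18.3],
                      [24.5, 24.4, 24.6], [18.0, 18.2, 18.1])
print(f"fold change vs. control ~ {fc:.2f}")  # values < 1 indicate downregulation
```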
Table 2: Primers of the genes used (Sigma-Aldrich DNA oligos template).
Table 4: Content of bioactive compounds in the aqueous extract of Acmella caulirhiza.
Table 5: Inhibitory concentrations (IC50) of the AE-Ac on cancer cell lines. | 5,807.2 | 2024-02-10T00:00:00.000 | ["Medicine", "Environmental Science", "Biology"] |
Optimal features for auditory categorization
Humans and vocal animals use vocalizations to communicate with members of their species. A necessary function of auditory perception is to generalize across the high variability inherent in vocalization production and classify them into behaviorally distinct categories (‘words’ or ‘call types’). Here, we demonstrate that detecting mid-level features in calls achieves production-invariant classification. Starting from randomly chosen marmoset call features, we use a greedy search algorithm to determine the most informative and least redundant features necessary for call classification. High classification performance is achieved using only 10–20 features per call type. Predictions of tuning properties of putative feature-selective neurons accurately match some observed auditory cortical responses. This feature-based approach also succeeds for call categorization in other species, and for other complex classification tasks such as caller identification. Our results suggest that high-level neural representations of sounds are based on task-dependent features optimized for specific computational goals.
Here, we demonstrate using an information-theoretic approach that production-invariant classification of calls can be achieved by detecting mid-level acoustic features. Starting from randomly chosen marmoset call features, we used a greedy search algorithm to determine the most informative and least redundant set of features necessary for call classification. Call classification at >95% accuracy could be accomplished using only 10-20 features per call type. Most importantly, predictions of the tuning properties of putative neurons selective for such features accurately matched some previously observed responses of superficial layer neurons in primary auditory cortex. Such a feature-based approach succeeded in categorizing calls of other species such as guinea pigs and macaque monkeys, and could also solve other complex classification tasks such as caller identification. Our results suggest that high-level neural representations of sounds are based on task-dependent features optimized for specific computational goals.
Human speech recognition is a highly robust behavior, showing tolerance to variations in prosody, stress, accents, and pitch. For example, speech features such as formant frequencies exhibit large variations within- and between-speakers 1, 2, arising from production mechanisms (production variability). To achieve accurate speech recognition, the auditory system must generalize across these variations. This challenge is not uniquely human. Animals produce species-specific vocalizations ('calls') with large within- and between-caller variability 3, and must classify these calls into distinct categories to produce appropriate behaviors. For example, in common marmosets (Callithrix jacchus), a highly vocal New World primate species, critical behaviors such as finding other marmosets when isolated depend on accurate extraction of call-type and caller information 4-8. Similar to human speech, marmoset call categories overlap in their long-term spectra (Fig. 1A), precluding the possibility that calls can be classified based on spectral content alone, and requiring selectivity for fine spectrotemporal features to classify calls. At the same time, marmoset calls also show considerable production variability along a variety of acoustic parameters 8. For example, 'twitter' calls produced by different marmosets vary in such parameters as dominant frequencies, lengths, inter-phrase intervals, and harmonic ratios (Fig. 1). Tolerance to large variations in spectrotemporal features within each call type is thus necessary to generalize across this variability. Therefore, there is a simultaneous requirement for fine and broad selectivity for production-invariant call classification. The present study explores how the auditory system resolves these conflicting requirements.
Figure 1 (caption excerpt): Histograms are overall parameter distributions, split into the training (blue) and testing (red) sets. These data show the large production variability captured by the training and test data sets, over which the model must generalize. No systematic bias is evident in calls used for model training and testing.
This problem of requiring fine and tolerant feature tuning, necessitated by high variability amongst members belonging to a category, is not unique to the auditory domain. For example, in visual perception, object categories such as faces also possess a high degree of intrinsic variability 9-12. To classify faces from other objects, using an exemplar face as a 'template' typically fails because this does not generalize across within-class variability 12. Face detection algorithms use combinations of mid-level features, such as regions with specific contrast relationships 13,14, or combinations of face parts 12, to accomplish classification. Of these algorithms, the one proposed by Ullman et al. 12 is especially interesting because of its potential to generalize to other classification tasks across sensory modalities. In this algorithm, starting from a set of random fragments of faces, the authors used 'greedy' search to extract the most informative fragments that were highly conserved across all faces despite within-class variability. Post-hoc analyses revealed that these fragments were 'mid-level', i.e., they typically contained combinations of face parts, such as eyes and a nose. The features identified using this algorithm were consistent with some physiological observations, for example at the level of BOLD responses 15. While the differences between visual and auditory processing are vast, these results inspired us to ask whether a similar concept, sound categorization using combinations of acoustic features, could be implemented by the auditory system.
The behavioral salience of calls for marmosets 4-8, and the increasing resources allocated to the processing of calls along the cortical processing hierarchy 17, make call categorization a well-defined computational goal with which to probe auditory cortical processing.
Figure 2 (caption excerpt): (B) Schematic for initial random feature generation for a twitter (within-class) versus other calls (outside-class) categorization task. Waveforms (top) were converted to cochleagrams (middle). Random initial features were picked from twitter cochleagrams (for example, magenta box). The maximum value of the normalized cross-correlation function between each call (within-class, blue; outside-class, green) and each random feature was taken to be the 'response' of a feature to a call. (C) Distributions (top) of a feature's responses to 500 within-class (blue) and 500 outside-class (green) calls. The mutual information (bottom) of a feature, computed as a function of a parametrically varied threshold. The dotted line, corresponding to maximal mutual information, is taken to be each feature's optimal threshold. A feature's 'response' has to exceed this optimal threshold for the feature to be considered present within a call.
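As described in the Figure 2 caption above, a feature's 'response' to a call is the maximum of the normalized cross-correlation between the feature and the call's cochleagram. A minimal sketch of that computation is shown below, treating cochleagrams as (channels x time) arrays and sliding the feature only along time at its own frequency channels; the function and argument names are illustrative, not the authors' implementation.

```python
import numpy as np

def feature_response(cochleagram, feature, freq_start):
    """Maximum normalized cross-correlation between a rectangular feature
    (n_freq x n_time) and a call cochleagram (n_channels x n_frames).
    The feature is matched against its own frequency band
    (freq_start .. freq_start + n_freq) at every time offset."""
    nf, nt = feature.shape
    band = cochleagram[freq_start:freq_start + nf, :]
    f = feature - feature.mean()
    f_norm = np.linalg.norm(f)
    best = -1.0
    for t in range(band.shape[1] - nt + 1):
        patch = band[:, t:t + nt]
        p = patch - patch.mean()
        denom = f_norm * np.linalg.norm(p)
        if denom > 0:
            best = max(best, float((f * p).sum() / denom))  # NCC in [-1, 1]
    return best
```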
Results
Features of intermediate lengths and complexities are more effective for call classification
We start with the premise that the first step in call processing is the categorization of calls into discrete call types, generalizing across the production variability that is inherent to calls (Fig. 1). We first generated 6000 random initial features from the cochleagrams of 500 twitter calls emitted by 8 marmosets ('training' set, blue histograms in Fig. 1). For the purposes of this study, a 'feature' is a randomly selected rectangular segment of the cochleagram, corresponding to the spatiotemporal activity pattern of a subset of auditory nerve fibers within a specified time window. For each random feature, we determined an optimal threshold at which its utility for classifying twitters from other calls was maximized. The merit of each feature was taken to be the mutual information value at this optimal threshold, in bits (Fig. 2).
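The merit of a candidate feature, as defined here, is the mutual information between 'feature detected' and 'call is within-class', maximized over a detection threshold on the feature's responses. A compact sketch of that threshold sweep follows; the variable names and the grid of 100 candidate thresholds are illustrative choices.

```python
import numpy as np

def binary_mutual_information(detected, labels):
    """Mutual information (bits) between a binary detection variable and
    binary class labels (1 = within-class)."""
    detected, labels = np.asarray(detected), np.asarray(labels)
    mi = 0.0
    for d in (0, 1):
        for c in (0, 1):
            p_joint = np.mean((detected == d) & (labels == c))
            p_d, p_c = np.mean(detected == d), np.mean(labels == c)
            if p_joint > 0:
                mi += p_joint * np.log2(p_joint / (p_d * p_c))
    return mi

def optimal_threshold(responses, labels, n_steps=100):
    """Sweep a threshold over feature responses; return (threshold, merit)."""
    responses = np.asarray(responses)
    best = (None, -np.inf)
    for thr in np.linspace(responses.min(), responses.max(), n_steps):
        mi = binary_mutual_information((responses > thr).astype(int), labels)
        if mi > best[1]:
            best = (thr, mi)
    return best
```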
In Supplementary Fig. 2
Call categorization can be accomplished using a handful of optimal features
Because we generated the initial features at random, many of them have low merit, and many are similar. Therefore, the set of optimal features for classification is expected to be much smaller than this initial set. To determine the set of optimal features that together maximize classification performance, we used a greedy-search algorithm (see Methods). In Figure 3, magenta boxes outline the top 5 MIFs that are optimal for each of these classification tasks (the first five MIFs in Fig. 4A). The optimal features that we arrive at are mostly intuitive; for example, the top MIFs for classifying twitters detect the frequency contour of individual twitter phrases and the repetitive nature of the twitter call. In some cases, features seemed counter-intuitive; for example, the second MIF for trill classification seems to detect 'empty' regions of the cochleagram. In this theoretical framework, the lack of energy at those frequencies is also informative about the presence of a trill.
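The greedy search can be summarized as forward selection: at each step, add the candidate feature that most increases the information carried by the already-selected set, and stop when no candidate adds information. The sketch below captures this control flow; the objective function is left abstract because the paper's exact pairwise-information criterion is described in its Methods, and the names used here are illustrative.

```python
import numpy as np

def greedy_mif_selection(detections, labels, objective, max_features=20):
    """Greedy forward selection of maximally informative features (MIFs).

    detections : (n_features, n_calls) binary matrix; entry (i, j) = 1 if
                 feature i was detected in call j (response > its threshold)
    labels     : (n_calls,) binary array, 1 = within-class call
    objective  : callable(selected_detections, labels) -> score to maximize,
                 e.g., information carried by the combined detections
    """
    remaining = list(range(detections.shape[0]))
    selected, best_score = [], -np.inf
    while remaining and len(selected) < max_features:
        gains = [(objective(detections[selected + [i]], labels), i)
                 for i in remaining]
        score, best_i = max(gains)
        if score <= best_score:   # no candidate adds information: stop
            break
        selected.append(best_i)
        remaining.remove(best_i)
        best_score = score
    return selected
```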
In Figure 4A, we show the pairwise information added by each MIF, the merits, and the weights of the top 10 MIFs for these classification tasks. To validate our model and to test the effectiveness of using only the MIFs for classifying call types, we used a novel set of calls consisting of 500 new within-category and 500 new outside-category calls drawn from the same 8 marmosets. This 'test' call set did not significantly differ from the training set along any of the characterized parameters (red histograms in Fig. 1). We conceptualized each MIF as a simulated template-matching neuron whose 'response' to a stimulus was defined as the maximum value of the normalized cross-correlation (NCC) function. This simulated MIF-selective neuron 'spiked' whenever its response crossed its optimal threshold, i.e., when an MIF was detected in the stimulus. In Fig. 5, we plot the spike rasters of simulated MIF-selective neurons for twitter, phee, and trill (top 10 MIFs shown), responding to a train of randomly selected calls from the novel test set. Each spike was weighted by the log-likelihood ratio of the MIF, and the weighted sum of responses in 50 ms time bins was taken as the evidence in support of the presence of a particular call type. Although occasional false positives and misses occurred, over the set of MIFs, the evidence in support of the correct call type was almost always the highest. Therefore, production-invariant call categorization is a two-step process: first, MIFs are detected in the stimuli, and then each feature is weighted by its log-likelihood ratio to provide evidence for a call type.
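The two-step decision described above, detecting MIFs and summing their log-likelihood-ratio weights, can be written compactly as follows; time-binned evidence is omitted for brevity, and the function names, thresholds, and weights are placeholders assumed to come from training.

```python
import numpy as np

def call_evidence(responses, thresholds, llr_weights):
    """Evidence for one call type: each MIF whose response exceeds its optimal
    threshold contributes its log-likelihood-ratio weight."""
    detected = np.asarray(responses) > np.asarray(thresholds)
    return float(np.sum(np.asarray(llr_weights)[detected]))

def classify_call(responses_by_type, thresholds_by_type, weights_by_type):
    """Return (best call type, evidence per call type) for one stimulus."""
    evidence = {ct: call_evidence(responses_by_type[ct],
                                  thresholds_by_type[ct],
                                  weights_by_type[ct])
                for ct in responses_by_type}
    return max(evidence, key=evidence.get), evidence
```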
The MIFs achieved >95% classification performance for all call types with very low false alarm rates. for twitter (top, blue), phee (middle, red), and trill (bottom, yellow). Each dot represents spiking of a putative MIF-selective neuron (i.e. when the response of the MIF exceeds its optimal threshold). (C) The evidence for presence of a particular call type, defined as the normalized sum of the firing rate of all MIF-selective neurons, weighted by their log-likelihood ratio. Over the duration of each call, the call type with the most evidence is considered to be present. 265 Occasional false alarms are usually outweighed by true positive MIF detections.
Control simulations
First, we ensured that our selection of 6000 initial random features adequately sampled stimulus space. To do so, we iteratively selected sets of MIFs using our greedy 270 search algorithm from initial random sets from which previously picked MIFs were excluded. We found that distinct sets of MIFs that had similar classification performance could be selected in successive iterations ( Supplementary Fig. 3). This suggests that our initial random feature set indeed contained several redundant MIF-like features, confirming the adequacy of our initial sampling.
275
Second, in order to determine the contributions of various model assumptions and parameters, we repeated this process of random initial feature generation, threshold optimization, and MIF selection in different scenarios. To better visualize these differences, we used detection-error tradeoff curves (Fig. 6B), where perfect performance is the lower left corner. In this figure, the performance of the default model, shows performance when using small features only (<100 ms and <1 oct.) or excluding small features, and using large features only (>250 ms and >2 oct.) or excluding large features. For trills, some of these conditions fall outside the range of the axes. Bottom row shows performance when the bandwidth and duration of features used for classification were independently varied. Note that because of the short duration of trill calls, we did not test the 370 effect of using only long duration features.
In this study, we used greedy search and pairwise maximization of information to find optimal features. However, it is possible that the greedy search algorithm does not find an optimal solution because of its inability to overcome local maxima. We do not 375 think this is the case because: 1) the model performs at high accuracy levels, leaving little room for significant improvements, 2) we could arrive at similar sets of MIFs and achieve similar performance levels from different initial feature sets, specifically when highly informative features were excluded (Supp. Fig. 3), and 3) we could match or outperform other machine learning based algorithms for marmoset call classification 19 .
380
Therefore, the implemented greedy search algorithm likely converges at a true optimal solution.
Factors contributing to the success of the MIF-based approach
Three factors were critical in the design and implementation of our approach. First, 385 focusing on a behaviorally critical task (call categorization), and choosing model species with rich vocal repertoires and behaviors (marmosets and guinea pigs) allowed us to clearly identify a computational goal of cortical processingcall categorization.
Previous experiments, both using electrophysiological 20 -24 and imaging techniques 17,25,26 , showing an increase in cortical resources allocated to call processing, validate our 390 choice of call categorization as a critical computational goal in vocal animals. Second, our analyses were based on a large sample of calls recorded from a large number of animals 8 . From this data set, we deliberately oversampled a large number of initial potential features. This ensured that the full extent of production variability was represented in this data set. Third, the greedy search algorithm efficiently identified 395 informative features from a training data set of a few hundred calls. Since clean and labelled training data sets are laborious to generate, the efficiency of greedy search provided a significant methodological advantage.
MIF-based reconstruction of call stimuli
The observation that an MIF-based approach successfully generalizes across production variability implies that most calls belonging to a category will contain one or more of the MIFs. Therefore, we asked how well calls could be reconstructed based on MIFs alone, using twitters as a specific example. To do so, we detected model twitter MIF neuron 'spiking' as earlier for the 500 training and 500 test twitters, and convolved these spike times with an alpha function (with a time constant of 20 ms) to detect the peak locations of twitter MIFs within a twitter (Supplementary Fig. 5A). We then placed copies of MIF cochleagrams at these peak locations, or added copies of MIF cochleagrams to previously placed feature cochleagrams. The final summed cochleagram was taken to be the reconstructed call (Supplementary Fig. 5B).
We then asked if the auditory system uses such an optimal feature-based approach to call classification. To explore this possibility, as a first step, we generated 'tuning curves' of putative MIF-selective model neurons responding to commonly used acoustic stimuli and asked if these tuning curves matched previous experimental observations. In this effort, we were restricted by the appropriateness and availability of previous data. We then compared the MIF responses to available neural data from marmoset primary auditory cortex (A1). Although the MIF model was purely theoretical and did not have prior access to neurophysiological data, we found that model MIF neuron tuning recapitulated actual data to a remarkable degree, both at the population and single-unit levels. For example, the population of model MIFs showed a high preference for natural calls compared to reversed calls (Fig. 8A, bottom), similar to observations by Wang and Kadia 27 (reproduced in Fig. 8A, top). The high sparseness of auditory cortical neurons is well-documented 28-30. The responses of model MIF-selective neurons were also sparse: only a few MIF neurons were activated by any given stimulus set, and only after extensively optimizing the parameters of the stimulus set to drive specific model MIF neurons. For example, in Fig. 8B (top), we show a single-unit recording from a marmoset A1 L2/3 neuron that did not respond to most stimulus types (reproduced from Sadagopan and Wang 30), and only strongly responded to two-tone stimuli. Twitter MIFs (Fig. 8B, bottom) were similarly not responsive to most stimulus types, and only responded to carefully optimized linear frequency-modulated (lFM) sweeps. None of the model twitter and trill MIF-selective neurons responded to pure tones (Fig. 8B, bottom), similar to many A1 L2/3 neurons.
Most strikingly, we could recapitulate some specific and highly nonlinear single-neuron tuning properties as well. Figure 8C (top; reproduced from Sadagopan and Wang 30) is a single-unit recording from marmoset A1 L2/3 that did not respond to pure tones, but selectively responded to upward lFM sweeps of specific lengths (~80 ms).
Responses of at least three of the top 5 twitter MIF-selective model neurons showed similar tuning for 80 ms-long upward lFM sweeps (Fig. 8C, bottom). A second peak at ~40 ms was also present in the responses of two model twitter MIF-selective neurons, also matching the experimental data. The neuron in Figure 8D (top) responded selectively to trains of lFM sweeps, and model MIF-selective neurons showed remarkably similar tuning (Fig. 8D, bottom). These model neurons did not respond to single sweeps either, but responded to trains of at least 2 or more sweeps occurring with a 50 ms inter-sweep interval. Taken together, these data suggest neurons tuned to MIF-like features are present in A1 L2/3. Therefore, we would predict that a spectral-content-based representation of calls in the ascending auditory pathway becomes largely a feature-based representation in A1 L2/3. Consistent with the prediction of feature selectivity, we have found neurons in A1 of both marmosets and guinea pigs that respond selectively to conspecific call features.
In Fig. 9, we present the spike rasters of example single neurons in both marmoset and guinea pig A1 responding to marmoset (Fig. 9A) and guinea pig (Fig. 9B) calls, respectively. We presented multiple exemplars of each call type as stimuli.
Figure 9 (caption excerpt): Shading corresponds to stimulus duration (different calls have different lengths). Note that spikes occur at specific times, and in response to 2 or 3 call types, suggesting that the neurons are responding to smaller features within these calls. (B) Spike rasters of three single units from guinea pig A1 responding to guinea pig call stimuli.
Task-dependent MIF-based classification as a general auditory computation
Our approach has two limitations. First, the number of auditory tasks that an animal is potentially required to solve is ill-defined. While we mitigate this limitation by choosing ethologically critical tasks such as call categorization, it is likely that we are only probing a small subset of all behaviorally relevant auditory tasks. Consequently, while a subset of neurons in auditory cortex match predictions from our model for call and caller classification, developing a larger bank of natural auditory behaviors (for example, predator sounds versus neutral sounds) will allow us to model and predict a larger fraction of cortical responses. Second, our model derives features from the auditory nerve representation of stimuli. It is well-known that this representation is transformed more than once before impinging on cortical neurons. Therefore, the actual representations from which cortical neurons detect features are not accurately modeled here. This limitation arises from the current lack of predictive models for central auditory processing stages. It is possible that the performance of our algorithm would increase if we could accurately model other sub-cortical processing stages.
Recognizing these limitations, we asked if MIF-based representations of sounds could also be used for optimally solving other tasks, such as caller identification, and if MIF-based call classification also generalized to other vocal species. To test these hypotheses, we performed three proof-of-principle simulations using limited available data sets. For caller identification, we generated training and test sets of 60 twitters each from eight marmosets, and generated 500 initial random features from the training set. We applied the greedy-search algorithm to determine the MIFs for caller identification in a caller A vs. all other callers task (Fig. 10A). We found that, similar to call categorization, caller identification could also be achieved using a small number of MIFs (n = 4). If caller identification was performed in a binary fashion (four classifications between two animals each), in half of these tasks classification could be accomplished using fewer than 3 MIFs, indicating that the calls of these marmosets probably differed along the frequency axis. This is because if there are clear differences in dominant frequency (for example, Animal 1 vs. 4 in Fig. 1E), all features that lie in one animal's frequency range will detect all of that animal's calls and none of the other animal's calls. During the greedy search procedure, these features will be considered redundant and reduced to a single feature. In the other half, more MIFs were required for caller identification, and in general, MIFs were larger than those for call-type classification. This is likely because the differences between twitters produced by these animals are smaller compared to the differences between call types and can only be resolved in a higher-dimensional space. Thus, integration over more frequencies and a larger time window may be necessary to resolve caller differences. In Supplementary Fig. 7, we plot the ROC for caller identification between a pair of marmosets with overlapping dominant frequencies. The MIF-based approach (n = 20 MIFs) achieved >80% hit rates with <10% false alarm rate for caller identification.
For determining the efficacy of MIF-based call classification in other species, we used guinea pig and macaque call classification as examples. Guinea pigs are highly vocal rodents that produce seven main call types 23, 31, 32, which are highly overlapping in the low-frequency end of the spectrum, and show high production variability. We used the MIF-based approach to classify guinea pig call types ('whine', 'wheek', and 'rumble') from all other guinea pig call types. Similar to marmosets, guinea pig call classification could be accomplished using a handful of features (12, 9, and 3 MIFs for whine, wheek, and rumble), and MIF-based classification achieved high performance levels (Fig. 10B).
Similarly, we implemented the MIF-based algorithm to classify macaque calls (using 5, 4, and 9 MIFs for coos, grunts, and harmonic arches) from a limited macaque call data set 33 and achieved high classification performance (Fig. 10C). These proof-of-principle experiments demonstrate that an MIF-based approach indeed succeeds for different auditory classification tasks and in different species, suggesting that building representations of sounds using task-relevant features in auditory cortex may be a general auditory computation.
Discussion
In these experiments, we set out to understand the computations performed by the auditory system that enable the categorization of behaviorally critical sounds, such as calls, despite wide variations in the spectrotemporal structure of calls belonging to a category (production variability). We found that the optimal theoretical solution is to detect the presence of informative mid-level features (termed MIFs) in calls. These MIFs generalize over production variability, and conjunctions of MIFs accomplish production-invariant call classification with high accuracy. Critically, the tuning properties of putative MIF-selective neurons match previous recordings from marmoset A1 to a surprising degree. MIF-based classification was also successful for other tasks (marmoset caller identification), and in other species (guinea pig and macaque call recognition). Our results suggest that the representation of sounds in higher auditory cortical areas might enable performance of auditory tasks based on the detection of optimal task-relevant features.
Comparison to previous theoretical and experimental methods
An implication of our results is that in higher auditory processing stages, neural representations of sounds serve specific behavioral purposes. For example, the MIF-based classification approach that we propose here is targeted to solve well-defined classification problems. At earlier stages of the auditory pathway, however, it may be more important to faithfully represent sounds using basis sets that enable the accurate encoding of novel stimuli. Previous theoretical studies have proposed, for example, that natural sounds can be efficiently encoded using spike patterns, where each spike represents the magnitude and timing of input acoustic features 34. However, when optimized to encode the complete waveforms of natural sound ensembles, the kernel functions that elicit each spike show a striking similarity to cochlear filters. The advantage of this approach is that novel stimuli can be completely encoded using these kernel functions. In our approach, the input to our model implements a similar encoding scheme: in the cochleagram, inputs are encoded as spatiotemporal spike patterns, where each spike is the result of cochlear filtering. In this early representation, while information about category identity is present, it is distributed in the activity of many neurons in a high-dimensional space. We propose that in later processing stages, this early representation is transformed into a representation where category identity is more easily separable. By encoding MIF-like features, sound representation in later processing stages is less useful for high-fidelity encoding, but is instead goal-oriented. However, this means that each task will require a distinct set of MIFs for optimal performance, and animals likely perform a large number of such behaviorally relevant tasks. The observed 1000-fold increase between the number of cochlear inputs and auditory cortical neurons may partially result from this necessity to encode a multitude of task-specific feature sets.
Previous experimental studies have described call selectivity primarily using two methods: 1) categorization of neural tuning along an exhaustive list of call parameters 41, and 2) categorizing call tuning as tuning for regions of the modulation spectrum 42-44. In the former study, marmoset calls were parametrized along multiple acoustic dimensions. Some of these parameters were common to all call types, such as the length or dominant frequency of a call. The more distinguishing parameters, however, were unique to individual call types, such as the inter-phrase interval for twitters, or the sinusoidal frequency modulation rate for trills. Neural tuning to calls was described using tuning to these parameters, but did not use the same set of parameters across call types. In our study, different MIFs are used for the classification of different call types, but MIFs are parametrized along the same axes, bandwidth and integration window, allowing a uniform basis for comparisons. In the latter set of studies, neural tuning for birdsong was described using selectivity for specific frequency and temporal modulations. In this case, tuning could be expressed in a unified stimulus space (of spectral and temporal modulation rates). Both these methods, however, serve to describe neural tuning, and not to explain why tuning to certain parameters or regions of modulation space is necessary in the first place.
Our results suggest that generating selectivity for task-relevant features explains why selectivity for stimulus parameters arises in the first place.
Possible mechanisms of generation of MIF-based representations
MIF-based representations are constructed from MIF-selective neurons. Neural selectivity for MIFs may be generated 1) gradually along the ascending auditory pathway, or 2) de novo in cortex. Single-neuron feature selectivity often (but not always, see below) leads to selectivity for one or a few call types, and analyzing call selectivity of neurons at different auditory processing stages could provide insight into where MIF-based representations might be generated in the auditory pathway. In early auditory processing stages, evidence for call selectivity at the single-neuron level is minimal. For example, at the level of the cochlear nucleus, few single neurons in species other than mice show call selectivity 45 . At the level of the inferior colliculus, a population-level bias in call selectivity has been reported 45-47 , but evidence for single-neuron level call selectivity is equivocal 48 . It is only at the level of auditory cortex where clear single-neuron selectivity for calls or call features has been observed. Therefore, it is quite likely that selectivity for MIF-like features in species with spectrotemporally complex calls is generated at the level of auditory cortex. This is supported by the expansion in the number of cortical neurons mentioned above. Importantly, the cortical emergence of MIF-based representations is also supported by the fact that MIF-like responses have been observed in the superficial layers of marmoset A1 30 .
We propose the following hierarchical model for auditory processing based on the representation of task-relevant features. In thalamorecipient layers of A1, representation of sound identity is still based on spectral content. This is reflected in the strongly tone-tuned responses of A1 L4 neurons. From these neurons, tuning for MIF-like features may be generated using nonlinear mechanisms such as combination-sensitivity. For example, the tuning properties of the marmoset A1 responses shown in
Computations underlying the perception of auditory categories
In conclusion, we propose a hierarchical model for solving a central problem in auditory perception: the goal-oriented categorization of sounds that show high within-category variability such as speech 1, 2 or animal calls 3 . Our work has broad implications as to where in the auditory pathway categorization begins to emerge, and what features are optimal to learn in categorization tasks. For example, the lack of distinction of perceptual categories of English /r/ and /l/ by native Japanese speakers, and the success of bilingual Japanese speakers in accomplishing this classification, suggests that categorical differences can be learned 50 . Our model suggests that native speakers do not distinguish /r/-/l/ differences because the optimal features necessary for /r/-/l/ categorization are not encoded, as this categorization is not task-relevant for Japanese speech. FMRI evidence supports this conjecture 51 . Our model would predict that what is learned in bilingual speakers are optimal features that maximize /r/-/l/ differences. Our model would further predict that this learning would be primarily reflected in changes to the A1 L2/3 circuit. Consistent with this hypothesis, a recent study showed that training humans to categorize monkey calls resulted in finer tuning for call features in the auditory cortex 52 . We therefore suggest that the neural representation of sounds at higher cortical processing stages uses task-dependent features as building blocks, and that new blocks can be added to this representation to enable novel perceptual requirements.
Our feature-selection approach followed a previous study that demonstrated the advantages of features learnt using multiple binary classifications compared to those learnt using a single multi-way classification. Specifically, in that study, multiple binary classifications resulted in features that were distinctive and highly tolerant to distortions 56 . For each classification task, we first generated training data sets, which consisted of 500 random within-class calls (e.g., twitters) produced by 8 animals (about 60 calls per animal), and 500 random outside-class calls (e.g., trills, phees, other calls) produced by the same 8 animals. In order to convert sound waveforms of the calls into a physiologically meaningful quantity, we transformed these calls into cochleagrams using a previously published auditory nerve model 54 using human auditory nerve parameters with high spontaneous rate. We used human auditory nerve parameters because of the close similarity between marmoset and human audiograms 55 . The output of this model was the time-varying activity pattern of the entire population of auditory nerve fibers, and resembles the spectrogram of the call ( Fig. 2A, B). We then extracted 6000 random features from these 500 within-class cochleagrams.
To do so, we randomly chose a center frequency, bandwidth, onset time and length and extracted a snippet of activity from the cochleagram. Each feature thus corresponded to the spatiotemporal pattern of activity of a subset of auditory nerve fibers within a specified time window (magenta box in Fig. 2B). We used rectangular feature shapes rather than other shapes to minimize assumptions; for example, an ellipse-shaped feature would imply that the weighting of individual auditory nerve fibers changes over time. To ensure that smaller features were well-sampled, 2000 of these features were restricted to have a bandwidth less than 1 octave and a duration less than 100 ms. The bandwidth and duration of the remaining 4000 fragments were not constrained.
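A minimal sketch of this random-feature extraction step is shown below. It assumes the cochleagram is a 2-D array (auditory nerve fibers × time bins); the function name, the placeholder cochleagram, and the bin-count caps standing in for "1 octave" and "100 ms" are illustrative assumptions, not the original code.

```python
import numpy as np

def extract_random_feature(cochleagram, rng, max_bw_bins=None, max_dur_bins=None):
    """Cut a random rectangular snippet (fibers x time) out of a cochleagram.

    cochleagram: 2-D array, rows = auditory nerve fibers (frequency), cols = time bins.
    max_bw_bins / max_dur_bins: optional caps used to enforce the 'small feature' subset.
    """
    n_fibers, n_bins = cochleagram.shape
    bw = rng.integers(1, (max_bw_bins or n_fibers) + 1)   # bandwidth in fiber rows
    dur = rng.integers(1, (max_dur_bins or n_bins) + 1)   # duration in time bins
    f0 = rng.integers(0, n_fibers - bw + 1)               # lowest fiber row (sets center frequency)
    t0 = rng.integers(0, n_bins - dur + 1)                # onset time bin
    snippet = cochleagram[f0:f0 + bw, t0:t0 + dur].copy()
    return {"fibers": (f0, f0 + bw), "onset": t0, "snippet": snippet}

# Example: 2000 "small" features and 4000 unconstrained ones (caps are placeholder values).
rng = np.random.default_rng(0)
cgram = np.random.rand(100, 500)                           # placeholder cochleagram
small = [extract_random_feature(cgram, rng, max_bw_bins=10, max_dur_bins=50) for _ in range(2000)]
free = [extract_random_feature(cgram, rng) for _ in range(4000)]
```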
Threshold optimization: We defined the 'response' of a feature to a call as the maximum value of the normalized cross correlation (NCC) function between the feature's cochleagram and the call's cochleagram, restricted to the auditory nerve fibers that are represented in the feature. We effectively implemented a one-dimensional version of NCC by only considering the auditory nerve fibers that overlapped between the call and the feature. Note that this means features can only be detected in the frequency range that they span, but can be detected anywhere in time within a call. NCC is a commonly used metric to quantify template match. To compute the NCC, the feature and the where P(C) was assumed to be 0.10. We empirically verified that the features identified were insensitive to variations of this value. The optimal threshold for each feature was taken to be the threshold value at which the mutual information was maximal, and the merit of each feature was taken to be the maximum mutual information value in bits (Fig. 2C). The 'weight' of each feature was taken to be its log-likelihood ratio. At the end of this procedure, each of the initial 6000 features was allocated a merit, a weight, and an optimal threshold at which each individual feature's utility for classifying calls as belonging to within- or outside-class was maximized. Note that merit and weight are distinct quantities that need not be monotonically related. For example, if the lack of energy in a frequency band is indicative of a target category, features that contain energy in this frequency band will be detected often in the other categories, but not in the target category. The feature will thus have high merit for classification, as it is informative by its absence, but have a negative weight.
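The following is a minimal sketch of the threshold-optimization step, assuming the per-call feature responses (maximum NCC values) have already been computed; the threshold grid, the probability floor, and the function name are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def optimize_threshold(responses, labels, p_class=0.10, n_thresh=100):
    """Pick the detection threshold maximizing mutual information (bits) between
    'feature detected' and 'call is within-class'.

    responses: max NCC value of this feature for each training call
    labels:    1 for within-class calls, 0 for outside-class calls
    Returns (optimal_threshold, merit_bits, log_likelihood_weight).
    """
    labels = np.asarray(labels, bool)
    best = (None, -np.inf, 0.0)
    for thr in np.linspace(responses.min(), responses.max(), n_thresh):
        detected = responses >= thr
        # Detection probability conditioned on class, with a small floor to avoid log(0)
        p_det_in = np.clip(detected[labels].mean(), 1e-6, 1 - 1e-6)
        p_det_out = np.clip(detected[~labels].mean(), 1e-6, 1 - 1e-6)
        mi = 0.0
        for p_in, p_out in [(p_det_in, p_det_out), (1 - p_det_in, 1 - p_det_out)]:
            p_d = p_class * p_in + (1 - p_class) * p_out          # P(detection outcome)
            for p_dc, p_c in [(p_in, p_class), (p_out, 1 - p_class)]:
                mi += p_dc * p_c * np.log2(p_dc / p_d)            # sum of P(d,c) log2 P(d|c)/P(d)
        weight = np.log2(p_det_in / p_det_out)                    # log-likelihood ratio
        if mi > best[1]:
            best = (thr, mi, weight)
    return best
```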
Greedy search: Because the initial features were chosen at random, many of these features individually provided low information about call category, and many of the best features for classification were self-similar, or redundant. Therefore, to extract maximal information from a minimal set of features for classification, we used a greedy search algorithm 12 to iteratively 1) eliminate redundant features, and 2) pick features that add the most information to the set of selected features. The minimal set of features that together maximize information about call type were termed maximally informative features (MIFs). The first MIF was chosen to be the feature with maximal merit from the set of all 6000 initial random features. Every consecutive MIF was chosen to maximize pairwise added information with respect to the previously chosen MIFs. Note that these consecutive features need not have high merit individually. We iteratively added MIFs until we could no longer increase the hit rate without increasing the false alarm rate.
Practically, this meant adding features until total information reached 0.999 bits, or individual features added less than 0.001 bits, whichever was reached earlier. At the end of this procedure, a small set of MIFs containing the optimal set of features for call classification was obtained.
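A minimal sketch of this greedy selection is given below. The helper `pairwise_added_info` stands in for the added-information computation described above and is an assumption of this sketch; the stopping constants mirror the values quoted in the text.

```python
def select_mifs(features, merits, pairwise_added_info, max_total_bits=0.999, min_added_bits=0.001):
    """Greedy selection of maximally informative features (MIFs).

    features:            list of candidate feature ids
    merits:              dict id -> individual mutual information (bits)
    pairwise_added_info: function (candidate_id, chosen_ids) -> bits added by the candidate,
                         given the already-chosen set (near zero for redundant features)
    """
    remaining = set(features)
    first = max(remaining, key=lambda f: merits[f])      # start with the highest-merit feature
    chosen, total = [first], merits[first]
    remaining.discard(first)

    while remaining and total < max_total_bits:
        # Pick the candidate that adds the most information to the current set
        best = max(remaining, key=lambda f: pairwise_added_info(f, chosen))
        gain = pairwise_added_info(best, chosen)
        if gain < min_added_bits:                        # stop when additions become negligible
            break
        chosen.append(best)
        total += gain
        remaining.discard(best)
    return chosen
```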
Analysis and statistics: To test how well novel calls could be classified using these MIFs alone, we generated from the same 8 animals a test set of 500 within- and outside-class calls that the model had not been exposed to before. We computed the NCC between each test call and MIF, and considered the MIF to be detected in the call if the maximum value of the NCC function exceeded its optimal threshold. If detected, the MIF provided evidence in favor of a test call belonging to a call type, proportional to its log-likelihood ratio. We then summed the evidence provided by all MIFs and generated ROC curves of classification performance by systematically varying an overall evidence threshold. We used the area under the curve (AUC) to compare ROC curves for classification performance by MIFs generated with different constraints (see Results).
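A minimal sketch of the evidence-summing and ROC/AUC computation is shown below, assuming the per-MIF responses, thresholds, and weights are already available; the function names are illustrative.

```python
import numpy as np

def classify_evidence(call_responses, thresholds, weights):
    """Summed log-likelihood evidence that a call belongs to the target category.

    call_responses: array of max NCC values, one per MIF, for this call
    thresholds:     per-MIF optimal detection thresholds
    weights:        per-MIF log-likelihood ratios
    """
    detected = call_responses >= thresholds
    return np.sum(weights[detected])

def roc_auc(evidence, labels):
    """ROC curve and area under it, obtained by sweeping an overall evidence threshold."""
    order = np.argsort(-evidence)
    labels = np.asarray(labels, bool)[order]
    tpr = np.cumsum(labels) / labels.sum()       # hit rate as the threshold is lowered
    fpr = np.cumsum(~labels) / (~labels).sum()   # false-alarm rate at the same thresholds
    return np.trapz(tpr, fpr)
```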
Statistical significance was evaluated using non-parametric methods for comparing between these conditions, and for comparing performance to a large number of simulations generated using random MIFs.
For each MIF, the NCC function yielded values that could be conceptualized as equivalent to membrane potential (Vm) responses. These were converted to firing rates by applying a power law nonlinearity, where FR is the firing rate response in spk/s, θ is the MIF's optimal threshold, p is the exponential nonlinearity set to a value of 4, and k is an arbitrary scaling factor.
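The equation itself did not survive extraction; a plausible reconstruction from the stated variables, assuming a rectified power-law form (the exact expression is an assumption), is:

FR = k \, \bigl[\max(\mathrm{NCC} - \theta,\, 0)\bigr]^{p}, \qquad p = 4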
Call reconstruction from MIFs: To reconstruct calls, we conceptualized MIFs as MIF-selective neurons, and considered the times at which NCC values exceeded the optimal threshold to be the spike times of these neurons. MIF spike times were computed with a time resolution of 2 ms to simulate refractoriness, and alpha-functions were convolved with the spike times to determine the peak time at which each MIF was detected. A copy of the MIF cochleagram was then placed at the peak time, or summed (with log-likelihood weights) if overlapping with a previously placed cochleagram. The accuracy of reconstruction was defined as the NCC between the original stimulus and its reconstructed version at zero lag.
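A minimal sketch of this reconstruction step follows. The event dictionary layout, the alpha-function time constant, and the always-weighted summation are assumptions of the sketch rather than details of the original implementation.

```python
import numpy as np

def reconstruct_call(call_shape, mif_events, dt_ms=2.0, tau_ms=10.0):
    """Rebuild a cochleagram by stamping detected MIF templates at their peak times.

    call_shape: (n_fibers, n_bins) of the cochleagram to reconstruct (dt_ms per bin)
    mif_events: list of dicts with keys
        'template': 2-D MIF cochleagram (fiber rows x time bins)
        'fibers'  : (low, high) fiber rows the MIF spans
        'spikes'  : threshold-crossing times, in bins
        'weight'  : log-likelihood weight of the MIF
    tau_ms: time constant of the alpha function used to locate the detection peak
    """
    recon = np.zeros(call_shape)
    t = np.arange(0, 10 * tau_ms, dt_ms)
    alpha = (t / tau_ms) * np.exp(1 - t / tau_ms)      # alpha-function kernel
    for ev in mif_events:
        spike_train = np.zeros(call_shape[1])
        spike_train[ev["spikes"]] = 1.0
        drive = np.convolve(spike_train, alpha)[: call_shape[1]]
        peak = int(np.argmax(drive))                   # peak time of the convolved response
        f0, f1 = ev["fibers"]
        tmpl = ev["template"]
        t_end = min(peak + tmpl.shape[1], call_shape[1])
        recon[f0:f1, peak:t_end] += ev["weight"] * tmpl[:, : t_end - peak]
    return recon
```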
Electrophysiology methods: Predictions generated from the MIFs were compared to earlier recordings from marmoset A1. Details of recording procedures are available from the original experimental data sources. All recordings were from adult marmosets. Population data comparing natural to reversed twitters were obtained from Wang and Kadia 27 . These experiments were performed in anesthetized marmosets. Single-neuron data regarding feature selectivity were obtained from Sadagopan and Wang 30 . These recordings were from awake, passively-listening marmosets. Single-neuron data regarding feature selectivity in guinea pigs were obtained from adult, head-fixed, passively-listening guinea pigs at the University of Pittsburgh. Briefly, a headpost and recording chambers were secured to the skull using dental cement following aseptic procedures. Animals were placed in a double-walled, anechoic, sound attenuated booth. A small craniotomy was performed over auditory cortex. High-impedance tungsten electrodes (3-5 MΩ, A-M Systems Inc. or FHC, Inc.) were advanced through the dura into cortex to record neural activity. Stimuli were generated in MATLAB, and presented (TDT Inc.) from the best location in an azimuthal speaker array (B&W-600S3 or Fostex FT-28D for marmosets, TangBand 4" full-range driver for guinea pigs). Single units were sorted online using a template matching algorithm (Alpha Omega Inc. or Ripple, Inc), and for guinea pigs, refined offline (MKSort). All analyses were performed using custom MATLAB code.
Code availability: Custom code will be provided upon request to the corresponding author (SS).
"Biology",
"Computer Science"
] |
Hantavirus in Bat, Sierra Leone
To the Editor: Hantaviruses (family Bunyaviridae) are transmitted from rodent reservoirs to humans. These viruses cause life-threatening human diseases: hantavirus cardiopulmonary syndrome in the Americas and hemorrhagic fever with renal syndrome in Asia and Europe (1). Since 2006, indigenous hantaviruses were reported also from Africa. Sangassou virus was found in an African wood mouse (Hylomyscus simus) in Guinea (2). Discovery of newer African hantaviruses, Tanganya virus and recently Azagny virus, was even more surprising because they were found in shrews (3,4).
The detection of hantaviruses in small mammals other than rodents, such as shrews and also moles (4), increasingly raises questions regarding the real hantavirus host range. Bats (order Chiroptera) are already known to harbor a broad variety of emerging pathogens, including other bunyaviruses (5). Their ability to fly and social life history enable efficient pathogen maintenance, evolution, and spread. Therefore, we conducted a study on hantaviruses in bats from Africa.
A total of 525 tissue samples from 417 bats representing 28 genera were tested for the presence of hantavirus RNA. Samples originated from different regions in western and central Africa and were collected during 2009 and early 2011. Total RNA was extracted from tissue samples and reverse transcribed. cDNA was screened by PCR specific for sequences of the large genomic segment across the genus Hantavirus (2).
One sample yielded a product of the expected size and was subjected to cloning and sequencing. The positive sample (MGB/1209) was obtained from 1 of 18 investigated slit-faced bats (family Nycteridae). The animal was trapped at the Magboi River within Gola National Park, Sierra Leone (7°50.194′N, 10°38.626′W), and the identification as Nycteris hispida has been verified with the voucher specimen (RCJF529). Histologic examination of organs of the animal showed no obvious pathologic findings.
The obtained 414-nt sequence covers a genomic region, which was found to correspond to nt position 2,918–3,332 in the large segment open reading frame of prototypic Hantaan virus. Bioinformatic analysis on the amino acid level showed highest degrees of identity to shrew- and mole-associated hantaviruses (Thottapalayam virus 73.0%, Altai virus 69.7%, Nova and Imjin virus 69.3%). On the basis of tree topology of a maximum-likelihood phylogenetic tree, the sequence does not cluster with rodent-associated hantaviruses but groups with those found in shrews and moles (Figure).
Figure
Maximum-likelihood phylogenetic tree of MGB/1209 virus based on partial large segment sequence (414 nt) and showing the phylogenetic placement of the novel sequence from Nycteris spp. bat compared with hantaviruses associated (i) with shrews and moles: ...
Considering that bats are more closely related to shrews and moles than to rodents (6), a certain genetic similarity of a putative bat-borne hantavirus with shrew- and mole-associated hantaviruses seems reasonable. Notably, shrew-associated Thottapalayam virus (India) and Imjin virus (South Korea) seem to be closer relatives, and African Tanganya virus (Guinea) and Azagny virus (Cote d’Ivoire) are more distantly related. Additional sequence data is needed for more conclusive phylogenetic analyses.
Because the new amino acid sequence is at least 22% divergent from those of other hantaviruses, we conclude that the bat was infected with a newly found hantavirus. We propose the putative name Magboi virus (MGBV) for the new virus because it was detected in an animal captured at the Magboi River in Sierra Leone. The MGBV nucleotide sequence is novel and has not been known or handled before in our laboratory. Before this study, hantavirus nucleic acid was found in lung and kidney tissues of bats from the genera Eptesicus and Rhinolophus in South Korea. However, nucleotide sequencing showed the presence of prototypical Hantaan virus indicating a spillover infection or laboratory contamination (7).
Further screening is necessary to confirm N. hispida as a natural reservoir host of the virus. Although the presented bat-associated sequence is obviously distinct from other hantaviruses, which suggests association with a novel natural host, a spillover infection from another, yet unrecognized host cannot be ruled out. However, detection of the virus exclusively in 1 organ (lung but not in liver, kidney, and spleen; data not shown) suggests a persistent infection that is typically observed in natural hosts of hantaviruses (8).
To date, only a few reports exist on cases of hemorrhagic fever with renal syndrome in Africa (9,10). However, underreporting must be assumed because the symptoms resemble those of many other febrile infections. Moreover, in cases of infections by non–rodent-associated hantaviruses, cross-reactivity with routinely used rodent-borne virus antigens should be limited and may hamper human serodiagnostics (1). The results suggest that bats, which are hosts of many emerging pathogens (5), may act as natural reservoirs for hantavirus. The effect of this virus on public health remains to be determined.
The molecular masses of the PrP res moieties from the 2 cows were also clearly distinct from those from controls with L- and H-BSE (Figure). For samples from animals with H-BSE, enzymatic deglycosylation demonstrated PrP res subtypes, 1 and 2, the latter being a C-terminal PrP res fragment of ≈12-14 kDa (6). To investigate whether the novel PrP res type corresponds to PrP res subtype 2, we compared samples from cow 2 with those from the H-BSE control by Western blot. The PrP res type from the 2 cows reported here and PrP res subtype 2 from the H-BSE control were indeed distinct (Figure).
We report a novel PrP res signature in 2 cows with BSE diagnoses determined according to established criteria. Combining Western blot analysis with an epitope mapping strategy, we ascertained that these animals displayed an N terminally truncated PrP res different from currently classified BSE prions (Figure). The interpretation of these findings remains difficult because neuropathologic and systematic clinical data for the 2 cases are not available. Moreover, the tissue samples were autolyzed, and the question of whether this affected the PrP res molecular signature is of concern. Nonetheless, our findings raise the possibility that these cattle were affected by a prion disease not previously encountered and distinct from the known types of BSE. To confirm this possibility and to assess a potential effect on disease control and public health, in vivo transmission studies using transgenic mouse models and cattle are ongoing. Until results of these studies are available, molecular diagnostic techniques should be used so that such cases are not missed.
Outbreak of Porcine Epidemic Diarrhea in Suckling Piglets, China
To the Editor: Beginning in October 2010, porcine epidemic diarrhea (PED), caused by a coronaviral infection affecting pigs, emerged in China in an outbreak characterized by high mortality rates among suckling piglets. The outbreak overwhelmed >10 provinces in southern China, and >1,000,000 piglets died. This outbreak was distinguished by ≈100% illness among piglets after birth (predominantly within 7 days and
"Biology"
] |
High-power TR-24 cyclotron-based p-n convertor cooled by submerged orifice jet
The TR-24 cyclotron (Advanced Cyclotron Systems Inc., Canada) of the Nuclear Physics Institute in Řež provides protons with variable energies from 18 MeV up to 24 MeV and a beam current of 0.3 mA. For such parameters, the p+Be source reaction on a thick Be target can produce a white-spectrum neutron field (En ≤ 22 MeV) with an intensity of 5×10^12 n/s/sr in the forward direction. The present paper outlines the development of the Be-target cooling system, devoted to removing the heat load of 7 kW (density up to 4 kW/cm^2) from the target. Due to the novel "orifice-form" jet cooling (resulting in the shortest source-to-sample distance of 20 mm) with extremely high cooling efficiency, the TR-24 p-n convertor can achieve a neutron flux up to 2×10^12 n/cm^2/s near the target output.
Introduction
For accelerator-based neutron irradiation facilities, deuteron break-up on light-nuclei targets (3H, D2O, Li, Be and C) presents the most intensive source-reaction tool. However, high-power deuteron accelerators suffer from principal limitations (cyclotrons) and/or high technical demands (linear accelerators), which result in expensive facilities (SARAF and the future projects IFMIF, DONES [1]) that are consequently multipurpose and far from dedicated compact solutions. Commercially available and relatively low-cost high-power proton cyclotrons (mostly for medical purposes), incorporated with a suitable proton-neutron convertor (usually with fixed and/or rotating Be discs), are widely proven in BNCT. The present work points at relatively low-cost high-power medical proton cyclotrons, which could provide fast "fusion-like" neutrons at an intensity that could supply not only future irradiation facilities in parallel but actually the material-research fission reactors in particular. The NPI TR-24 cyclotron (Advanced Cyclotron Systems Inc.) (Fig. 1) provides protons with variable energies up to 24 MeV and 0.3 mA of external beam current. For such parameters, the p+Be source reaction on a thick Be target can produce a white-spectrum neutron field (En ≤ 22 MeV) with an intensity of 5×10^12 n/s/sr in the forward direction (based on MCNPX calculations, experimental activation tests, and a standalone investigation of the p+Be source reaction for a 24 MeV proton beam using the multi-foil activation technique at the NPI [2]). The most critical point in the effort to reach the desirable high neutron flux density consists in removing a high local heat load from the target by a cooling assembly with minimized dimensions in the direction of neutron emission.
Cooling assembly of the target
ANSYS simulations
Due to the Gaussian-like profile of the cyclotron beam spot on the target (σ ≤ 20 mm), a heat density of up to 4 kW/cm^2, resulting in an overall heat load of 7.2 kW, needs to be removed from the target of any proton-neutron convertor. The point-like form of accelerator-based neutron sources leads to an inverse square dependence of the neutron flux density on the source-to-sample distance. Therefore, the dimensions of the cooling assembly are to be minimized to use the high flux density option in the vicinity of the target. Taking advantage of well-suited dimensions (small radial length) and commonly known high heat transfer coefficients, various types of orifice nozzles were considered and tested to form submerged impingement jets in the cooling arrangement of the static Be target. Nozzles were investigated to form a cooling assembly with a water flow of maximum available rate 2 l/s and a system pressure of 1.1 MPa on the static Be target. Special effort was devoted to minimizing the distance between the target and the output backing side of the cooling chamber. The simulations were carried out in ANSYS in the forced-convection flow mode (Fig. 2) to determine the basic characteristics (heat transfer coefficient, fluid velocity at the shear layer and pressure in the stagnation one) for different nozzle types (see Fig. 3).
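As a rough cross-check of the quoted numbers, the inverse-square relation can be applied directly. The point-source estimate below ignores the finite beam-spot size and the angular distribution of the emitted neutrons, so it is only an order-of-magnitude sketch.

```python
# Point-source estimate of neutron flux density from the quoted source intensity.
yield_per_sr = 5e12     # n/s/sr in the forward direction (value from the text)
distance_cm = 2.0       # 20 mm source-to-sample distance

# For a point source, flux ~ (n/s/sr) / r^2 with r in cm.
flux = yield_per_sr / distance_cm**2
print(f"~{flux:.1e} n/cm^2/s")   # ~1.3e12, same order of magnitude as the quoted 2e12 n/cm^2/s
```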
The prototype of Be target chamber
Considering similar empirical data on the water wettability (contact angle) of aluminum and beryllium surfaces, an Al disc (Fig. 4), instead of a Be disc, was utilized in the pilot experiments to facilitate a set of three thermocouples inside the target (rather complicated in the case of Be material). The thickness of the target discs was determined by full stopping of 24 MeV protons. To take into account the expected hydrogen embrittlement (blistering effects) due to the intensive proton beam, a backing material with a high hydrogen diffusion coefficient is considered, instead of the usual approach of not stopping the protons within the target itself. The reason comes from possible disruption of the thermal dynamics in the stagnation zone when part of the thermal load is dissipated in the shear layer of the cooling water.
Cooling tests
A mock-up of the target setup (Fig. 5) was manufactured to verify the ANSYS simulation and to determine empirical parameters of the boiling mode of cooling. The temperature distribution in the target was measured by the set of three thermocouples at a water flow rate of 2 l/s, a system pressure of 1.1 MPa, and a constant water-flow temperature of 20°C (ensured by a cooling unit) at different beam currents and spot dimensions during irradiation with 24 MeV protons. In Fig. 6, typical behavior of the measured temperature at various locations across the beam-spot area is given. Here, the linear dependence of temperature on beam current corresponds to heat transfer in the convective single-phase mode. Clear evidence is seen for the onset of boiling, the area of nucleate and fully developed nucleate boiling, and the presence of a local critical heat flux as well. Due to the novel "orifice-form" of the cooling assembly (resulting in the shortest source-to-sample distance of 20 mm), the TR-24 p-n convertor can achieve a neutron flux up to 2×10^12 n/cm^2/s near the target output, the highest value of flux density for fast-neutron irradiation purposes until now. A target station with an open area in the forward direction is being developed to provide irradiation under a non-perturbed arrangement of different samples and associated hardware. Remotely controlled manipulators of irradiated components are being developed to ensure basic operation in the large induced activity, ranging up to Sv/h levels. Methods to minimize the blistering effect during operation are under investigation.
A submerged water jet cooling assembly was tested at the BNCT neutron facility of the Massachusetts Institute of Technology [5]. Removal of 5 kW of heat loaded by resistive heating into a fixed steel dummy target has been reported. However, the long pipe-nozzle limits the possibility of reaching a high density of the neutron flux in irradiated samples.
"Physics",
"Engineering"
] |
On the Simulation-Based Reliability of Complex Emergency Logistics Networks in Post-Accident Rescues
This paper investigates the reliability of complex emergency logistics networks, as reliability is crucial to reducing environmental and public health losses in post-accident emergency rescues. Such networks’ statistical characteristics are analyzed first. After the connected reliability and evaluation indices for complex emergency logistics networks are effectively defined, simulation analyses of network reliability are conducted under two different attack modes using a particular emergency logistics network as an example. The simulation analyses obtain the varying trends in emergency supply times and the ratio of effective nodes and validates the effects of network characteristics and different types of attacks on network reliability. The results demonstrate that this emergency logistics network is both a small-world and a scale-free network. When facing random attacks, the emergency logistics network steadily changes, whereas it is very fragile when facing selective attacks. Therefore, special attention should be paid to the protection of supply nodes and nodes with high connectivity. The simulation method provides a new tool for studying emergency logistics networks and a reference for similar studies.
Introduction
With the rapid development of global commerce, accidents such as diseases or public health emergencies frequently occur, resulting in numerous casualties and heavy economic losses. Emergency logistics provide material support for accidents, which makes a reliable emergency logistics network vital [1]. A complex emergency logistics network is a type of temporally and spatially evolving dynamic network that involves a wide variety of unstructured data. An effective emergency logistics network is helpful for improving the performance of emergency rescue and post-accident operations in sustainability. As a typical complex network, an emergency logistics network may be the victim of attacks and damages arising from uncertainties, such as randomness, diffusivity, and the aftermath of an outbreak of emergencies. These issues may cause local failure or paralysis of the network, which would introduce serious aftereffects into the entire socio-economic system. Thus, this topic has garnered the attention of emergency personnel. When an emergency logistics network is under sudden attack, it is vital to swiftly return it to a sound working mode, and reliability is the key to accomplishing this task. Reliability analysis is an indispensable process in emergency network planning, design, and management. Therefore, increasing emergency material reserves and optimizing the reliability of emergency logistics networks are key factors in emergency management systems. Thus, we must further the research on the reliability of networks to relieve the effects of network emergencies.
An emergency logistics network consists of emergency materials supply nodes and demand nodes. Emergency materials supply nodes include emergency logistics bases, emergency materials supply bases and emergency allocation centers. The regions are divided into several units, and the center of each unit represents each emergency material's demand node.
Complex network theory originated from the random graph model initiated by Erdös and Rényi in the 1960s. An increasing number of applications of complex network theory continue to be developed in a variety of fields, including communication, and published in journals such as Nature and Science. Small-world networks [26] and scale-free networks [27] were introduced by Watts and Barabási, respectively. Existing research shows that complex networks have distinctive statistical characteristics (including the small-world effect and the scale-free property [28]) and thus belong to neither random nor regular networks.
Based on the physical structure of an actual emergency logistics network, this paper describes emergency logistics supply nodes, emergency logistics demand nodes, and links between supply and demand nodes in the network. For instance, Figure 1 shows that there are 5 nodes and 6 links in the network. Supply Node 1 directly connects to the rest of the nodes, Supply Node 3 connects to Node 4, and Supply Node 2 connects to Node 5.
Each link between two nodes can be assigned a weight value to indicate specific information about the relationship between nodes, such as in a weighted network, with each side possessing different values to describe the network. The weight of each link is specified according to different research objectives, such as transportation time, actual distance, transportation capacities, and freight traffic volume. For example, Figure 1 shows that it takes 4 h to transport materials from Supply Node 1 to Demand Node 2 and 5 h to transport materials from 3 to 4.
Direct connection refers to a direct connection between two nodes; theoretically, transportation will take an infinite amount of time if there is no connection between two nodes. However, we attempt to take temporary measures to connect a supply node with a demand node, even if there is no connection between them. If the emergency time limit period is L, then the link between two unconnected nodes can be assigned the value L. Nodes of this type cannot be added to the number of effective demand nodes because they cannot be supplied in time.
An adjacency matrix is used to describe the emergency logistics network.
If there is a direct connection between nodes i and j, then d_ij is the time needed for direct transportation between them; if there is no direct connection between nodes i and j, then d_ij = L. n is the total number of nodes in the network.
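A minimal sketch of how such an adjacency matrix can be built is shown below. The edge list loosely mirrors the 5-node example of Figure 1, but most of the transport times are assumed for illustration; only the 4 h and 5 h values come from the text.

```python
import numpy as np

def build_adjacency(n_nodes, edges, time_limit_L):
    """Adjacency matrix D(n) = (d_ij) of an emergency logistics network.

    edges: list of (i, j, transport_time_hours) for directly connected node pairs.
    Unconnected pairs are assigned the emergency time limit L, as described in the text.
    """
    D = np.full((n_nodes, n_nodes), float(time_limit_L))
    np.fill_diagonal(D, 0.0)
    for i, j, t in edges:
        D[i, j] = D[j, i] = t          # undirected network: transport time is symmetric
    return D

# Toy version of the Figure 1 example (node k-1 corresponds to Node k; times partly assumed)
edges = [(0, 1, 4), (0, 2, 3), (0, 3, 2), (0, 4, 6), (2, 3, 5), (1, 4, 3)]
D = build_adjacency(5, edges, time_limit_L=120)
```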
Thus, a topological model of a complete emergency logistics network consisting of supply nodes, demand nodes, and links was built. The model retains the topological features of the emergency logistics network. Therefore, researchers can study the features of the complex network and judge the network reliability by analyzing its basic geometric features, such as degree distribution, average network path length, and the clustering coefficient.
The Statistical Characteristics of a Complex Emergency Logistics Network
The following are the major parameters used in this paper to describe the abovementioned features [29].
Average path length: The average path length of a complex emergency network is the average value of the path lengths between all node pairs in the network, which describes the degree of separation between nodes in the network. The path length between two nodes is defined as the number of links on the shortest path linking them.
Clustering coefficient C describes the clustering state of the nodes of the network. A large C value indicates a tight network and a small C value indicates a loose network. The clustering coefficient of a node is computed from m_i, the number of vertices directly connected to vertex v_i, and l_i, the number of links directly connected to vertex v_i. The network clustering coefficient is the average value of the clustering coefficients of all nodes.
Complex emergency network node degree k is the total number of links connected to one node. The complex emergency network degree cumulative probability is the ratio of nodes with node degrees no less than k among all nodes, such that P(k) = n(k)/N, where n(k) is the number of nodes with degrees no less than k, and N is the total number of nodes in the network. Emergency logistics will exhibit the small-world effect if the average path length is small and the clustering coefficient is large; furthermore, the network will exhibit the scale-free property if the relationship between the cumulative probability of one node and that of other nodes fits a power law distribution.
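A minimal sketch of how these statistics can be computed for a given edge list is shown below. It relies on networkx's standard definitions of the clustering coefficient and the average shortest path length, which are assumed (not stated in the text) to match the paper's definitions.

```python
import networkx as nx
import numpy as np

def network_statistics(edges):
    """Average path length, average clustering coefficient and cumulative degree distribution."""
    G = nx.Graph(edges)
    avg_path_length = nx.average_shortest_path_length(G)   # path length counted in hops
    avg_clustering = nx.average_clustering(G)
    degrees = np.array([d for _, d in G.degree()])
    ks = np.arange(1, degrees.max() + 1)
    cum_prob = np.array([(degrees >= k).mean() for k in ks])  # P(k) = n(k) / N
    return avg_path_length, avg_clustering, ks, cum_prob
```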
Connecting Reliability of Emergency Logistics and Its Evaluation Index
Connection reliability matters in determining whether demand nodes can receive emergency materials from supply nodes when there is an emergency need. Clearly, the reliability is related to the effects introduced by attacks and the topological structure of the network.
The main evaluation indicators used for an emergency logistics network include the following:
(1) Emergency supply time T: The emergency supply time is the amount of time it takes the network to fully supply the necessary emergency materials. This measure is the arithmetic mean value of the supply time of all demand nodes. The supply time of demand nodes refers to the time used to transport emergency materials from supply nodes to each demand node.
(2) Ratio of effective demand nodes P: Effective demand nodes refer to the demand nodes that directly or indirectly connect to the emergency supply nodes in time. The ratio of the effective demand nodes refers to the ratio of the number of effective demand nodes among all demand nodes.
Attack Types of Emergency Logistics Network
This paper only considers the nodes under attack. Invalidation of one node means the simultaneous invalidation of all links connected to this node. All connections passing through the node will be cut off. Attacks targeting emergency logistics networks can be divided into random attacks and selective attacks.
(1) Random attacks occur randomly at each node and are typically observed in situations such as natural disasters, accidents, and partial failures. (2) Selective attacks occur based on the number of direct connections, usually in descending order.
These attacks are typically observed in situations such as terrorist attacks and blocking at major nodes.
Simulation Pattern and Method of Random Attack
In the simulated pattern of random attacks, all nodes in the network are attacked randomly, and links connected to the nodes under attack will be invalidated. The minimum transportation time from each supply node to each demand node will be calculated using Dijkstra's algorithm. The minimum time will be chosen if one demand node can be supplied by two or more supply nodes. All demand nodes with transportation times less than L are defined as effective demand nodes. We can then calculate the ratio of effective demand nodes P in the rest of the network. The emergency supply time T is the arithmetic mean value of the transportation time of all demand nodes. Then, a node will be randomly chosen for attack, and the ratio of effective demand nodes P and the emergency supply time T for the rest of the network will be calculated. We repeat this process until all nodes are attacked. The following are the steps of the simulation algorithm.
(1) Initialize the adjacency matrix D(n) = (d_ij)_{n×n}. If there is a side connecting i, j directly, then d_ij is the transportation time between the two nodes. If there is no direct connection between i, j, d_ij = L (L is the emergency time limit).
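Only step (1) of the algorithm survives in the text; the sketch below fills in the remaining steps as described in the prose above (Dijkstra shortest times from the surviving supply nodes, then repeated node removal). Function and variable names are illustrative, and the same routine covers both attack modes: a random permutation reproduces random attacks, while a degree-sorted order reproduces the selective attacks of the next subsection.

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra

def attack_simulation(D, supply_nodes, L, order=None, rng=None):
    """Simulate node attacks and track supply time T and effective-node ratio P.

    D:     adjacency matrix with d_ij = transport time, and L for unconnected pairs
    order: node attack order (None = random attack; a degree-sorted list = selective attack)
    """
    n = D.shape[0]
    rng = rng or np.random.default_rng()
    order = list(order) if order is not None else list(rng.permutation(n))
    alive = np.ones(n, bool)
    history = []
    for node in order:
        alive[node] = False
        demand = [i for i in range(n) if alive[i] and i not in supply_nodes]
        supply = [i for i in supply_nodes if alive[i]]
        if not demand or not supply:
            history.append((float(L), 0.0))
            continue
        W = np.where(D >= L, np.inf, D)                       # treat "no connection" as impassable
        W = W[np.ix_(np.flatnonzero(alive), np.flatnonzero(alive))]
        idx = {v: k for k, v in enumerate(np.flatnonzero(alive))}
        times = dijkstra(W, indices=[idx[s] for s in supply]) # shortest times from each supply node
        best = np.minimum(times.min(axis=0), L)               # nearest supply node, capped at L
        t_demand = best[[idx[d] for d in demand]]
        T = t_demand.mean()                                   # emergency supply time
        P = (t_demand < L).mean()                             # ratio of effective demand nodes
        history.append((T, P))
    return history
```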
Simulation Method of Selective Attacks
Selective attack simulation is generally similar to random attack simulation. However, unlike the simulation of random attacks, this simulation selects the node with the highest node degree k as its target for attack. Consequently, the links connected to the node simultaneously lose their function. The ratio of the effective demand nodes P and the emergency supply time T over the rest of the network are then calculated, followed by an attack on the node with the next highest node degree k. Again, the ratio of the effective demand nodes P and the emergency supply time T are calculated for the rest of the network. We repeat the process until all nodes in the network are attacked. This simulation only requires changes to Steps (2) and (7) in Section 4.2. Figure 2 shows the topological model of an emergency logistics network structure. The model is taken from the eastern coast of China and describes the topological relationships of emergency logistics in the region, assuming that L = 120 h.
Marking of Network Type
To study the characteristics of complex networks, a numerical statement on the degree number k of all nodes and the clustering coefficient C is established according to Figure 2, as shown in Table 1. Based on the corresponding statistics, there are 12 nodes with node degrees of 1-2, 4 nodes with node degrees of 3-4, and 4 nodes with node degrees exceeding 4, which indicate that the node degrees of most of the nodes are very small and that few nodes have large node degrees.
Additionally, the table shows that the average clustering coefficient C is 0.35, which is far greater than the reciprocal of the nodes 1/20 and that the average path length of the emergency logistics network is 2.758, far smaller than that of the 20 nodes and 29 links of the network. Therefore, this emergency logistics network is a small-world network because of its relatively large clustering coefficient and small average path length.
The characteristics of the scale-free property of the network are discussed next. In Figure 3, the broken line shows the relationship between the degree of this network and the cumulative probability, while the curve is generated by fitting a power function. The curvilinear equation is p(k) = 1.2124 k^(-1.0396), and the coefficient of determination is R^2 = 0.9183. Therefore, the cumulative degree distribution of the network is consistent with a power law distribution with dispersion index λ = 1.0396. This finding indicates that this emergency logistics network possesses the scale-free property according to complex network theory.
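A minimal sketch of such a power-function fit is shown below; it performs a linear regression in log-log space, and the R^2 it reports is computed on the log-transformed data, which may differ from the paper's fitting procedure. The degree data here are placeholders, not the values of Table 1.

```python
import numpy as np

def fit_power_law(ks, cum_prob):
    """Fit p(k) = a * k^(-lam) to a cumulative degree distribution by linear
    regression in log-log space; returns (a, lam, R^2 in log space)."""
    x, y = np.log(ks), np.log(cum_prob)
    slope, intercept = np.polyfit(x, y, 1)
    a, lam = np.exp(intercept), -slope
    y_hat = intercept + slope * x
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return a, lam, r2

# Placeholder data with roughly the reported shape (not the actual Table 1 values)
ks = np.array([1, 2, 3, 4, 5, 6])
cum_prob = np.array([1.0, 0.55, 0.4, 0.3, 0.22, 0.18])
print(fit_power_law(ks, cum_prob))
```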
The abovementioned findings demonstrate that this emergency logistics network is both a small-world and a scale-free network.
Simulation Results and Its Analysis
To save the distribution time and reduce environmental and public health losses in post-accident operations and emergency rescue, based on the simulation method outlined above, we developed a simulation model using the Visual Basic platform to analyze random and selective attacks on the emergency logistics network. The changing state of the emergency supply time T and the ratio of effective demand nodes P under random attacks are shown in Figure 4a.
The simulation results indicate that the supply time is T = 2.25 h and the effective demand ratio is P = 1 in the initial state of the emergency logistics network when there are no attacks. When facing random attacks, the supply time T and the ratio of effective demand nodes P change steadily. There is an approximately linear relationship between the number of attacks and these two respective measures, and they reach their final values after the final attack (supply time T = 120 h, effective demand ratio P = 0). This finding indicates that the topological structure of the emergency logistics network does not change suddenly and that this network enjoys a high degree of reliability. This observation is determined by the scale-free property of the network. To be exact, nodes under random attacks usually correspond to demand nodes, and the number of demand nodes with small connecting degrees is far greater than the number of supply nodes with large connecting degrees. The non-functionality of these demand nodes will not introduce severe consequences because they are not required to send supplies to other nodes and have few connections. Figure 4 shows that the ratio of effective demand nodes is on average less than 0.5 after 7 random attacks or 3 selective attacks, which indicates that the emergency logistics network is very fragile when facing selective attacks. Selective attacks can damage vulnerable supply nodes with large network connectivity, which will severely affect the reliability of the emergency logistics network. The abovementioned analysis suggests that selective attacks will greatly increase the transportation time T of the emergency logistics network and significantly decrease the ratio of effective demand nodes P. After four attacks, the two will reach their final values, which indicates that the network will have lost its functionality and will no longer be able to fulfill any emergency logistics tasks. Therefore, to achieve efficient operation of the network, several suggestions are put forward: (1) Special attention should be paid to the protection of supply nodes and nodes with high connectivity, such as emergency logistics conversion nodes. A dynamic, flat emergency supplies reserve mechanism and network should be established. Market-oriented storage and government reserves should be combined with the integration of the central and local emergency supply nodes to achieve linkage between the reserve nodes. (2) We should accelerate the construction of emergency logistics channels so that we can find an alternate link when one link is blocked in post-accident rescue. With a focus on the timeliness and safety of transportation routes, several alternative transportation plans should be prepared in advance, and the corresponding alternative plan should be immediately activated in post-accident rescue.
Conclusions
This paper applies complex network theory to emergency logistics research. We analyzed statistical characteristics of complex emergency logistics networks and defined their connected reliability and evaluation indicators. Then, a simulation model was established to analyze the reliability of an emergency logistics network under two modes of attack. The simulation method provides references for emergency logistics network reliability research and evaluation. Although complex network theory is a hot research topic, its application to logistics, particularly emergency logistics, has not been reported. Thus, introducing this theory into emergency logistics studies may provide a new research tool for this field and reduce environmental and public health losses in post-accident operations and emergency rescues.
This paper assumes that the distribution of emergency materials can be completed as long as there are connections between supply nodes and demand nodes. This paper does not consider the demand and supply of emergency materials, which is a simplification of real-world conditions. Therefore, we will take the actual transportation volume into consideration in future emergency logistics reliability research. In addition, various data on emergency logistics networks can now be collected. It would be worthwhile to further investigate the sustainability of complex emergency logistics networks under recent diseases or public health emergencies in China based on data-driven quantitative analysis. | 6,127.8 | 2018-01-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Drug-Drug Interactions Potential of Icariin and Its Intestinal Metabolites via Inhibition of Intestinal UDP-Glucuronosyltransferases
Icariin is known as an indicative constituent of the Epimedium genus, which has been commonly used in Chinese herbal medicine to treat impotence and improve sexual function, as well as for several other indications, for over 2000 years. In this study, we aimed to investigate the effects of icariin and its intestinal metabolites on the activities of human UDP-glucuronosyltransferases (UGTs). Using a panel of recombinant human UGT isoforms, we found that icariin exhibited potent inhibition against UGT1A3. Interestingly, the intestinal metabolites of icariin exhibited a different inhibition profile compared with icariin. Different from icariin, icariside II was a potent inhibitor of UGT1A4, UGT1A7, UGT1A9, and UGT2B7, and icaritin was a potent inhibitor of UGT1A7 and UGT1A9. The potential for drug interactions in vivo was also quantitatively predicted and compared. The quantitative prediction of risks indicated that in vivo inhibition against intestinal UGT1A3, UGT1A4, and UGT1A7 would likely occur after oral administration of icariin products.
Introduction
Icariin (Figure 1), a typical flavonol glycoside, is known as an indicative constituent of the Epimedium genus, which is commonly known as horny goat weed or yin yang huo. Extracts from these plants are reputed to produce aphrodisiac effects and have been commonly used in Chinese herbal medicine to treat impotence and improve sexual function, as well as for several other indications, for over 2000 years [1]. It is thought that icariin is the primary active component of Epimedium extracts, as it has been shown to exert various pharmacological effects, including immunoregulation [2], enhancement of cGMP levels in cavernous smooth muscle cells [3], enhancement of the production of bioactive nitric oxide [4], and mimicking of the effects of testosterone [5].
Herb-drug interactions have received increasing attention over the past few decades. To date, in many countries, numerous people have taken icariin or Epimedium extracts; however, little is known about the interactions between icariin and prescription drugs. Metabolizing enzyme-based drug-drug interactions (DDIs) constitute the major proportion of clinically important DDIs [6]. Cytochrome P450 (CYP) and UDP-glucuronosyltransferase (UGT) isoforms are responsible for the metabolic clearance of more than 90% of clinically used drugs [6]. Previous studies showed that icariin had no inhibitory effects on CYP activities [7]. However, the effects of icariin on UGT activities have not been characterized. UGTs catalyze the conjugation of various endogenous substances and exogenous compounds. At least 22 human UGT isoforms have been identified to date based on sequence homologies [8]. In humans, approximately 40-70% of all clinical drugs are subjected to glucuronidation reactions catalyzed by UGTs [8,9]. UGT-mediated DDIs can potentially occur for many drugs, even resulting in enhanced adverse drug effects [10][11][12]. In fact, several significant DDIs have been clinically observed [13]. Thus, understanding the effects of icariin on UGT activities is important to ensure the safe administration of icariin.
As herbs are orally administered in most cases, the gastrointestinal tract serves as the principal site of absorption and first biotransformation. Degradation of herbs in the gastrointestinal tract is often observed [14]. Increasing attention has been paid to the role of herb metabolites in herb-drug interactions [15][16][17][18][19]. A previous study showed that icariin can be metabolized by bacteria in the rat intestine to three main metabolites, icariside I, icariside II, and icaritin (Figure 1). Therefore, it is important to evaluate whether icariin and its intestinal metabolites possess the potential to influence metabolic enzymes.
The aim of this study was to investigate the effects of icariin and its intestinal metabolites on the activities of human UGTs. Using a panel of recombinant human UGT isoforms, we found potent inhibition of icariin and its intestinal metabolites against several UGT isoforms. The potential for DDI in vivo was also quantitatively predicted and compared.
4-MU Glucuronidation Assay.
4-MU, a nonselective substrate of UGTs, was used as the probe substrate for all UGTs except UGT1A4. Incubations with each individual enzyme were conducted using conditions previously described [10]. There was a 5 min preincubation step at 37 °C before the reaction was started by addition of UDPGA. The incubation mixtures were then centrifuged at 20,500 ×g for 15 min to obtain the supernatant. Aliquots (20 μL) were then analyzed by HPLC. The HPLC system (SHIMADZU, Kyoto, Japan) consisted of an SCL-10A system controller, two LC-10AT pumps, a SIL-10A auto injector, and a SPD-10A VP UV detector. Chromatographic separation was achieved using a Kromasil ODS column (4.6 × 150 mm I.D., 5 μm particle size) at a flow rate of 1 mL/min and UV detection at 316 nm. The mobile phase consisted of 10 mM KH2PO4, pH 2.7 (A) and acetonitrile (B). The following gradient was applied at a flow rate of 1 mL/min: 0-4 min, 80% A and 20% B; 4.1-8 min, 50% A and 50% B; 8.1-12 min, 30% A and 70% B. All experiments were performed in duplicate in two independent experiments.
TFP Glucuronidation Assay.
TFP was used as the substrate for UGT1A4. Trifluoperazine glucuronide (TFPG) formation was measured using a modification of the method reported [20]. The incubation mixture (200 μL total volume) contained Tris-HCl buffer (50 mM, pH 7.4), UDPGA (5 mM), MgCl2 (5 mM), alamethicin (50 μg/mg protein), recombinant UGT1A4 (0.1 mg/mL), and TFP. Reactions were initiated by the addition of UDPGA, and incubations were performed at 37 °C in a shaking water bath for 20 min. Incubations were terminated by the addition of 4% acetic acid/96% methanol (0.2 mL) and then centrifuged at 20,500 ×g for 15 min. A 40 μL aliquot of the supernatant fraction was injected into the HPLC column.
TFPG formation was measured by HPLC using a SHIMADZU SCL-10A HPLC system (SHIMADZU, Kyoto, Japan) fitted with a Kromasil ODS column (4.6 × 150 mm I.D., 5 μm particle size). A gradient mobile phase consisting initially of 30 : 70 mobile phase A (acetonitrile) versus mobile phase B (0.5% formic acid/water) was brought to a composition of 90 : 10 in 10 min, which was held for 7 min, all at a flow rate of 1 mL/min. Column eluant was monitored by UV absorbance at 256 nm.
Inhibition of UGT Activity Assay.
A typical incubation mixture contained recombinant human UGTs, 5 mM MgCl2, 5 mM UDPGA, 50 μg/mg protein alamethicin, 50 mM Tris-HCl buffer (pH 7.4), and various probe substrates of UGTs in the absence or presence of different concentrations of inhibitors. Because of their poor water solubility, icariin, icariside I, icariside II, icaritin, and the inhibitors were dissolved in DMSO; the final concentration of DMSO in the incubation system was 1% (v/v), and DMSO did not noticeably change the catalytic activity of UGTs at this level (data not shown). Since 75-100 μM are almost the highest plasma concentrations of clinical drugs observed in patients [21], the inhibition experiments with icariin were conducted at 1, 10, or 100 μM. Incubations with 4-MU or TFP were performed at the concentration corresponding to the apparent Km or S50 value reported for each isoform (110, 1200, 110, 15, 750, 30, 80, 1200, 350, 250, and 2000 μM 4-MU for UGT1A1, UGT1A3, UGT1A6, UGT1A7, UGT1A8, UGT1A9, UGT1A10, UGT2B4, UGT2B7, UGT2B15, and UGT2B17, respectively, or 50 μM TFP for UGT1A4). Known UGT inhibitors were used as positive controls: diclofenac for UGT1A1, UGT1A6, UGT1A7, and UGT1A9; androsterone for UGT1A3, UGT2B7, and UGT2B15; phenylbutazone for UGT1A8 and UGT1A10; and hecogenin for UGT1A4 [10]. No positive control has been reported for UGT2B4 and UGT2B17. Negative controls were incubations without UDPGA. There was a 5 min preincubation step at 37 °C before the reaction was started by the addition of UDPGA. Incubation times were 120 min for UGT1A1, UGT1A10, UGT2B4, UGT2B15, and UGT2B17, 75 min for UGT1A3, and 30 min for UGT1A4, UGT1A6, UGT1A7, UGT1A8, and UGT1A9. The reactions were quenched by adding 100 μL acetonitrile and internal standard. The incubation mixtures were then centrifuged at 20,500 ×g for 15 min to obtain the supernatant. An aliquot of the supernatant was used for HPLC analysis as described above. All experiments were performed in duplicate in two independent experiments.
Inhibition Kinetics Analysis.
Inhibition constant (Ki) values were determined using various concentrations of 4-MU or TFP in the presence or absence of icariin. Inhibition data from these experiments were graphically represented by Dixon plots. Ki values were calculated by nonlinear regression using the equations for competitive inhibition (1), noncompetitive inhibition (2), or mixed inhibition (3), where v is the velocity of the reaction; S and I are the substrate and inhibitor concentrations, respectively; Ki is the inhibition constant describing the affinity of the inhibitor for the enzyme; Km is the substrate concentration at half of the maximum velocity (Vmax) of the reaction; and α reflects the effect of the inhibitor on the affinity of the enzyme for its substrate. The type of inhibition was determined from the enzyme inhibition models according to goodness of fit to the kinetic data.

Based on the assumption that the possible maximum concentration of icariin in the gut lumen is the ratio of the orally administered dose, excluding the fraction absorbed into the blood, to the volume of the gut lumen, the possible maximum concentration of icariin in the human gut lumen after a single oral administration of Epimedium pubescens decoction was estimated according to Eq. (4), where CL is the concentration of icariin in the gut lumen after a single oral administration of a traditional Chinese decoction of Epimedium pubescens in human volunteers, Fa is the extent of absolute oral bioavailability of icariin, MW is the molecular weight of icariin, and VL is the average human gut volume. The reported VL was 1.65 L/70 kg [23]. The reported oral bioavailability of icariin was 0.12 [24]. The concentrations of icariin intestinal metabolites in the human gut lumen are not available, but it has been reported that about 70% of icariin is transformed into its intestinal metabolites in the intestinal lumen [25]. The concentrations of icariside I, icariside II, and icaritin in the human gut lumen were then calculated based on the above calculated concentration of icariin and the ratio of their concentrations in blood after oral administration of icariin [26]. Here we arbitrarily assumed that this ratio in rat blood equaled that in human blood and that the values of their oral bioavailability were consistent.
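Equations (1)-(4) are referenced above but not reproduced in this extract. The block below sketches the standard forms these expressions usually take, given the symbol definitions stated (v, S, I, Ki, Km, Vmax, α; CL, Fa, MW, VL, plus an oral dose D introduced here for illustration); the exact forms used by the authors may differ.

```latex
% Assumed standard forms for the referenced Eqs. (1)-(4); D denotes the oral dose.
\begin{align*}
  v &= \frac{V_{\max}\,S}{K_m\,(1 + I/K_i) + S} \tag{1: competitive}\\[2pt]
  v &= \frac{V_{\max}\,S}{(K_m + S)\,(1 + I/K_i)} \tag{2: noncompetitive}\\[2pt]
  v &= \frac{V_{\max}\,S}{K_m\,(1 + I/K_i) + S\,\bigl(1 + I/(\alpha K_i)\bigr)} \tag{3: mixed}\\[2pt]
  C_L &= \frac{D\,(1 - F_a)}{MW \cdot V_L} \tag{4: gut-lumen concentration}
\end{align*}
```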
The possible maximum concentrations of icariin, icariside I, and icariside II in human blood were calculated from the reported icaritin concentration (1.5 nM) after a single oral administration of Epimedium pubescens decoction [27] and the ratio of their concentrations in blood after oral administration of icariin [26]. As shown in Table 1, icariin exhibited a moderate inhibitory effect against UGT1A3 activity, with an IC50 value of 12.4 ± 0.1 μM, and also weak inhibition of UGT1A4 activity. Interestingly, the intestinal metabolites of icariin exhibited a different inhibition profile compared with icariin. Icariside II inhibited UGT1A4, UGT1A7, UGT1A9, and UGT2B7 activities, with IC50 values of 2.9 ± 0.1 μM, 2.8 ± 0.1 μM, 2.4 ± 0.1 μM, and 12.5 ± 0.1 μM, respectively. Icaritin exerted potent inhibition against UGT1A7 and UGT1A9, with IC50 values of 0.3 ± 0.0 μM and 1.5 ± 0.1 μM, respectively.
Inhibition Kinetic Analysis in Recombinant UGTs.
Kinetic experiments were performed to further characterize the inhibition of UGT activities by icariin, icariside II, and icaritin. Icariin strongly inhibited the formation of 4-MUG by UGT1A3. The representative Lineweaver-Burk plot for the inhibition of 4-MUG formation by icariin (Figure 2(a)) and analysis of the parameters of the enzyme inhibition models suggested that the inhibition type was competitive. Based on nonlinear regression analysis and the Dixon plot presented in Figure 2(b), icariin showed competitive inhibition against the formation of 4-MUG in recombinant UGT1A3, with a Ki of 8.0 ± 1.4 μM. Icariside II exhibited potent competitive inhibition against UGT1A4 with a Ki of 1.9 ± 0.3 μM (Figures 3(a) and 3(b)). It also exerted noncompetitive inhibition against UGT1A7 with a Ki of 6.2 ± 0.5 μM (Figures 3(c) and 3(d)) and mixed inhibition against UGT2B7 with a Ki of 8.2 ± 1.5 μM and an α of 3.3 (Figures 3(e) and 3(f)). Icaritin exerted mixed inhibition against UGT1A7 with a Ki of 0.7 ± 0.2 μM and an α of 2.7 (Figures 4(a) and 4(b)).
The Calculated Concentrations of Icariin and Its Intestinal Metabolites in Blood and Gut Lumen.
The possible maximum concentrations of icariin, icariside I, icariside II, and icaritin in the human gut lumen after a single oral administration of Epimedium pubescens decoction were calculated to be about 9.9 μM, 0.2 μM, 3.7 μM, and 3.8 μM, respectively. The possible maximum concentrations of icariin, icariside I, icariside II, and icaritin in human blood after a single oral administration of Epimedium pubescens decoction were calculated to be about 1.3 nM, 0.1 nM, 1.5 nM, and 1.5 nM, respectively.
Quantitative Prediction of Risks of In Vivo Inhibition on UGTs by Icariin.
Risks of in vivo inhibition of UGTs by icariin, icariside II, and icaritin were estimated by calculating the ratios [I]/Ki. After a single oral administration of Epimedium pubescens decoction, the ratio [I]/Ki was 1.2 for the inhibition of intestinal UGT1A3 by icariin. For icariside II, the value was 1.9 for intestinal UGT1A4. For icaritin, the ratio was 5.4 for intestinal UGT1A7. For reversible inhibition, if the ratio [I]/Ki is greater than 1, in vivo inhibition of the UGT would likely occur [28]. Thus, in vivo inhibition against intestinal UGT1A3, UGT1A4, and UGT1A7 would likely occur after a single oral administration of Epimedium pubescens decoction.

Table 1: The IC50 values (μM) for the inhibition of icariin and its intestinal metabolites on UGT activities (columns: Icariin, Icariside I, Icariside II, Icaritin; only the UGT1A1 row, >100 for all four compounds, and the beginning of the UGT1A3 row are recoverable here). Data were shown as mean ± SD; all experiments were separately performed in duplicate three times.
As for hepatic UGTs, the values were negligible.
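A quick check of these ratios against the concentrations and Ki values quoted earlier (gut-lumen concentrations of about 9.9, 3.7, and 3.8 μM for icariin, icariside II, and icaritin, and Ki values of 8.0, 1.9, and 0.7 μM for UGT1A3, UGT1A4, and UGT1A7, respectively) can be scripted as below; this is only a restatement of the paper's arithmetic, not an independent prediction.

```python
# [I]/Ki screening rule for reversible inhibition: ratios > 1 flag a likely
# in vivo interaction (values taken from the text above).
cases = [
    # inhibitor,      enzyme,    [I] in gut lumen (uM), Ki (uM)
    ("icariin",       "UGT1A3",  9.9, 8.0),
    ("icariside II",  "UGT1A4",  3.7, 1.9),
    ("icaritin",      "UGT1A7",  3.8, 0.7),
]

for inhibitor, enzyme, conc, ki in cases:
    ratio = conc / ki
    flag = "likely in vivo inhibition" if ratio > 1 else "unlikely"
    print(f"{inhibitor:>12} vs {enzyme}: [I]/Ki = {ratio:.1f} -> {flag}")
# Reproduces the quoted ratios of 1.2, 1.9, and 5.4.
```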
Discussion
DDIs caused by inhibition of drug-metabolizing enzymes receive considerable attention due to their clinical relevance. As a result of increased understanding, the use of in vitro approaches to predict aspects of human drug metabolism and pharmacokinetics in vivo has found increasing acceptance in recent years. Our data offer in vitro evidence that icariin and its intestinal metabolites are potent inhibitors of several UGT isoforms. We found that icariin exhibited potent inhibition against UGT1A3. It is interesting that the intestinal metabolites of icariin exhibited a different inhibition profile compared with icariin. Different from icariin, icariside II was a potent inhibitor of UGT1A4, UGT1A7, UGT1A9, and UGT2B7, and icaritin was a potent inhibitor of UGT1A7 and UGT1A9. UGT1A3 is responsible for the metabolism of several endogenous and exogenous substrates, including bile acid, naringenin, quercetin, estrone, anthraquinones, naproxen, opioids, ketoprofen, ezetimibe, 7-hydroxycoumarins, losartan, candesartan, and zolarsartan [29,30]. UGT1A4 can catalyze the tertiary amines including imipramine, amitriptyline, doxepin, promethazine, chlorpromazine, loxapine, and cyproheptadine [29]. UGT1A7 is involved in the glucuronidation of dulcin, SN-38, acetaminophen, mycophenolic acid, and so on [13]. UGT1A9 is involved in the glucuronidation of a number of drugs, including flavopiridol, mycophenolic acid, propofol, acetaminophen, and others [13]. UGT2B7 is the most commonly listed enzyme (35%) involved in glucuronidation of the top 200 prescribed drugs in the United States in 2002 [9]. Therefore, the potent inhibition of UGTs activities by icariin and its intestinal metabolites can modulate the metabolism of numerous drugs cleared by UGTs. The expression levels of UGT1A3, UGT1A4, UGT1A9, and UGT2B7 are high in human liver, whereas UGT1A3, UGT1A4, UGT1A7, and UGT2B7 are highly expressed in the gastrointestinal tract [13,31,32]. In view of the low levels of icariin and its intestinal metabolites in blood, icariin is unlikely to cause a clinically significant DDI through inhibition of hepatic glucuronidation after oral administration. However, the quantitative prediction of risks of in vivo inhibition on intestinal UGTs by icariin and its intestinal metabolites indicated that in vivo inhibition against intestinal UGT1A3, UGT1A4, and UGT1A7 would likely occur after a single oral administration of Epimedium pubescens decoction. UGTs in the gastrointestinal tract may contribute significantly to the first-pass metabolism of orally administered drugs that undergo glucuronidation [13]. These results showed that icariin might exert an influence on the glucuronidation and first-pass metabolism of some drugs orally administered. In addition, in vitro data tend to underestimate inhibition of drug glucuronidation in vivo [20], and the pharmacokinetic parameters used here to calculate concentrations are mean values of the parameters reported, but interindividual variability is large. So the actual effects of icariin might be more potent than those calculated here.
This finding also offers new experimental evidence for the opinion that the biotransformation of herbs in the gastrointestinal tract could play a key role in herb-associated DDIs [17,18]. Our data show that the degradation products of herbs formed by gastrointestinal factors may exhibit distinct effects on metabolic enzymes compared with the naturally occurring components. Additional attention should be paid to the effects of intestinal metabolites of herbs on metabolic enzymes during the safety evaluation of herbal products. Our results might be helpful for the clinically safe administration of icariin, but further DDI studies with associated drugs will need to be performed to evaluate whether this in vitro phenomenon also occurs in vivo.
In conclusion, icariin and its intestinal metabolites were found to be potent inhibitors of several UGT isoforms. In vivo inhibition against intestinal UGT1A3, UGT1A4, and UGT1A7 would likely occur after a single oral administration of Epimedium pubescens decoction. The present findings shed light on the mechanisms underlying clinically significant DDIs associated with icariin and also provide the basis for further in vivo studies investigating the DDI potential between icariin and UGT substrates. | 4,155.2 | 2012-10-16T00:00:00.000 | [
"Biology",
"Chemistry",
"Medicine"
] |
A Primary Dead-Weight Tester for Pressures (0.05–1.0) MPa
Recent advances in technology on two fronts, 1) the fabrication of large-diameter pistons and cylinders with good geometry, and 2) the ability to measure the dimensions of these components with high accuracy, have allowed dead-weight testers at the National Institute of Standards and Technology (NIST) to generate pressures with total relative uncertainties approaching those previously obtained only with manometers. This paper describes a 35 mm diameter piston/cylinder assembly (known within NIST as PG-39) that serves as a pressure standard in which both the piston and the cylinder have been accurately dimensioned by the Physikalisch-Technische Bundesanstalt (PTB). Both artifacts (piston and cylinder) appeared to be round within ±30 nm and straight within ±100 nm over a substantial fraction of their heights. Based on the diameters at 20 °C provided by PTB (±15 nm) and on the good geometry of the artifacts, the relative uncertainty for the effective area was estimated to be about 2.2 × 10⁻⁶ (1σ).
invented by Johnson and Newhall [9], which is described by Heydemann and Welch [10] and is referred to as a controlled-clearance technique. Other equally important aspects for the translation of these very accurate linear dimensions into an accurate effective area are that both pieces constituting the present gage possessed excellent geometry and that there was a relatively small clearance between piston and cylinder. These three conditions, 1) accurate dimensional measurement capability from the comparator at PTB, 2) good geometry of the artifact, and 3) small clearance, allow the effective area, when the assembly is used as a pressure generator, to be determined with a relative standard uncertainty u(A)/A ≈ 1.4 × 10⁻⁶ (1σ).
A value for the effective area distilled from all the information in this report agrees with a recent value obtained via NIST's Ultrasonic Interferometer Manometer (UIM) [11] within 2.5 × 10 -6 and it agrees within 1 × 10 -6 of dimensional measurements performed at NIST some years ago [8].
Because NIST's Pressure and Vacuum Group uses a reference temperature of 23 °C, whereas the dimensional measurements were done at 20 °C, it was necessary to obtain an accurate value for the thermal expansion in order not to degrade the accuracy when operating the gage at 23 °C. A special oven/cooler was constructed to measure the thermal expansion.
Apparatus
For the present measurements we used a piston and a close fitting cylinder with large (35 mm) diameters made by the Ruska Instrument Corporation 1 . (See Fig. 1.) Known within NIST as PG-39, both piston and cylinder were made of tungsten carbide. When used as a pressure generator the assembly employs a conventional design with the usual floating piston. An important feature of the gage is that both piston and cylinder are fashioned from single blocks of tungsten carbide rather than relying on a bimetallic construction. With careful handling we expect this feature to provide good stability over extended periods.
For the dimensional measurements we relied on the relatively new state-of-the-art comparator at PTB, Braunschweig, Germany, which has the capability of measuring both diameter and straightness of cylinders using a probe-contact technique with high accuracy. Diameters via this comparator were obtained on both piston and cylinder [6]. Roundness measurements were obtained using other equipment at PTB.
Other specialized apparatus was used for auxiliary measurements: i) an oven/cooler for measurements of the thermal expansion coefficient, ii) capacitance measurements between the piston and cylinder for estimates of the crevice width, and iii) ultrasound for measurements of Young's modulus of the piston and cylinder.
Rather than attempt to determine the linear expansion coefficient of the tungsten carbide material for the individual components, with laser interferometry for example, it was easier to use our expertise in pressure metrology and determine the areal expansion coefficient through a direct comparison of pressure with a reference piston gauge. (Footnote 1: Certain commercial equipment, instruments, or materials are identified in this paper to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose.) A temperature-controlled environmental chamber (oven/cooler) was constructed for the 35 mm piston/cylinder assembly and base and was used to accurately measure the thermal expansion coefficient of the piston/cylinder assembly by placing PG-39 inside the chamber and using another piston gage outside the chamber as a reference. The chamber was capable of better than ±0.005 K stability. The temperature of the chamber could be controlled between 10 °C and 40 °C using a Peltier element and could be measured with a calibrated thermometer to better than ±0.02 K. With the piston/cylinder assembly inside, however, the chamber was operated only between 15 °C and 40 °C in order to avoid possible damage to the piston and cylinder. In general, a longer temperature span yields a more accurate expansion coefficient. Thermal gradients within the oven were estimated to be less than ±0.1 °C.
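One way to reduce such a cross-float comparison to an areal expansion coefficient is to fit the relative change in effective area against temperature. The sketch below assumes that the comparison against the reference gauge yields A(T)/A(20 °C) at each set temperature; the temperatures and area ratios shown are made-up placeholders, not the authors' data.

```python
# Least-squares estimate of the areal thermal expansion coefficient alpha from
# cross-float data: A(T)/A(T0) ~= 1 + alpha*(T - T0).
import numpy as np

T0 = 20.0                                   # reference temperature, deg C
T = np.array([15.0, 20.0, 25.0, 30.0, 35.0, 40.0])
area_ratio = 1.0 + 8.75e-6 * (T - T0)       # stand-in for measured A(T)/A(T0)

# slope of (A(T)/A(T0) - 1) versus (T - T0) gives alpha directly
alpha, intercept = np.polyfit(T - T0, area_ratio - 1.0, 1)
print(f"alpha = {alpha:.3e} /K")            # ~8.75e-6 /K, cf. the reported value
```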
For crevice width measurements, a capacitance gauge with ±0.1 nF resolution was used to measure the capacitance between the piston and cylinder in its pressure column. One electrode was attached to the base of the assembly and electrically at the same ground potential as the cylinder. The other electrode was connected to the top of the piston through a small cup that contained a tiny amount of mercury in order to minimize extraneous non-axial forces on the cylinder assembly. The capacitance method is currently under investigation within the Pressure and Vacuum Group as a means of measuring the clearance in other gages.
For estimating Young's modulus, E, the speed of sound in the tungsten carbide piston was measured using an ultrasonic pulse launched at one end of the piston. From its reflection at the other end and subsequent return, the pulse was detected and the total time of flight was measured, from which the speed of sound was determined. Young's modulus was obtained from the speed of sound, c, and the density ρ [12]: E = ρc². Similar measurements were made on the cylinder.
Characterization From Dimensional Measurements
The PTB measured the piston and cylinder using their relatively new state-of-the-art comparator [5]. Diameters were measured along two directrices (two longitudes, 0° to 180° and 90° to 270°) for both pieces. Diameters were obtained at two places in both vertical planes, or four diameters on the piston and four diameters on the cylinder. All diameters were measured near 20 °C and adjusted to the reference temperature of 20 °C. A full set of straightness data was obtained from both piston and cylinder using the comparator. (See Fig. 2.) Roundness data were obtained using a separate device. (See Fig. 3.)
Direct Averages
We averaged the diameters supplied by PTB for both piston and cylinder, and this yielded values for the areas of each component at the reference temperature of 20 °C: A_0p,20 = πD_p²/4 for the piston and A_0c,20 = πD_c²/4 ≈ π(35.824 318 ± 0.000 017)² mm²/4 for the cylinder. Here D_p and D_c are the average diameters of the piston and cylinder, respectively. The ambient-pressure (1 atmosphere) effective area of the assembly derived from these measurements at 20 °C carries a relative uncertainty of 1.2 × 10⁻⁶ (1σ), obtained from the type B uncertainty of the dimensional measurements root-sum-squared with the variance of the mean of the diameters. (See Tables 1-3.) The type B uncertainties were added together algebraically because these could be correlated. This area compares very favorably with the area obtained from dimensions measured by the NIST Precision Engineering Division in 1989, (1007.926 ± 0.011) mm² at 20 °C [7,8].
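As a numerical illustration of how the quoted diameter uncertainty propagates into an area, the sketch below evaluates A = πD²/4 and its relative standard uncertainty (about 2·u(D)/D to first order) for the cylinder diameter stated above; the piston diameter is not reproduced in this excerpt, so only the cylinder is shown.

```python
# Area of the cylinder bore and its relative uncertainty from the PTB diameter.
import math

D_c = 35.824318e-3        # m, average cylinder diameter at 20 degC
u_D = 0.000017e-3         # m, standard uncertainty of the diameter

A_c = math.pi * D_c**2 / 4.0
rel_u_A = 2.0 * u_D / D_c                 # first-order propagation for A ~ D^2

print(f"A_c    = {A_c*1e6:.4f} mm^2")     # ~1007.97 mm^2
print(f"u(A)/A = {rel_u_A:.2e}")          # ~9.5e-7; the quoted 1.2e-6 also
                                          # includes the variance of the mean
```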
Numerically Integrated Results
All of the information, absolute diameters at four places, roundness traces at five heights, and straightness traces at eight angles, was put together in the form of what is sometimes called a "birdcage" that represented the piston, and another set of information was assembled to represent the cylinder. Cylindrical harmonics were then fit to the data in order to obtain analytic functions r_p(z,θ) and r_c(z,θ) for the surfaces, where z is the vertical coordinate and θ is the azimuth angle.
The Piston Base, A base
The base area of the piston, A_base, was obtained by a numerical integration of the analytical function r_p(z,θ):

A_base = (1/2) ∫₀^{2π} r_p²(z = 0, θ) dθ,   (4)

where r_p(z = 0,θ) is the piston radius at the base of the piston and θ is the azimuth angle. Using r_p(z,θ) and r_c(z,θ), a numerical integration of forces acting over the surface of the piston was performed with Dadson et al.'s work serving as a guide [13]. These authors divide the forces into three categories: 1) a basal force acting upward on the base of the piston, 2) a vertical component of the normal forces acting on the sides of the piston if it is other than perfectly straight and vertical, and 3) a force from viscous gas flowing upward and exerting a vertical drag on the piston.
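A minimal numerical version of Eq. (4) is sketched below, assuming the fitted radius is available as a function of azimuth; the slightly out-of-round profile used here is only a placeholder for the cylindrical-harmonic fit, not the measured data.

```python
# Base area from the polar-area integral A_base = 1/2 * integral r^2 dtheta,
# evaluated with the trapezoidal rule on a dense theta grid.
import numpy as np

def r_p_base(theta):
    """Placeholder for the fitted piston radius at z = 0 (metres)."""
    r0 = 17.912e-3                            # nominal radius, ~35.824 mm / 2
    return r0 + 15e-9 * np.cos(2 * theta)     # assumed +/-15 nm out-of-roundness

theta = np.linspace(0.0, 2.0 * np.pi, 4001)
f = r_p_base(theta) ** 2
A_base = 0.5 * np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(theta))
print(f"A_base = {A_base * 1e6:.4f} mm^2")    # close to pi*r0^2 ~ 1007.96 mm^2
```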
Shape Contribution δA_s
The change in r_p(z,θ) with respect to height introduces an additional vertical force, given by Eq. (5). Here P_0 is the pressure at the top, P_1 is the pressure at the bottom of the piston, P(z) is the pressure as a function of height within the crevice, and L is the length of the crevice. The contribution to the effective area from the shape of the sides of the piston is then given by Eq. (7). Numerically integrating the derivative of the fitting function, dr_p/dz, as indicated above, using a pressure profile P(z) derived from the Poiseuille flow equation, gives an increase in the effective area of δA_s ≈ +0.0167 mm² with respect to the area at the base of the cylinder. The pressure profile was derived assuming an average crevice width at each height, Eq. (8), where the crevice width is h(z,θ) = r_c(z,θ) − r_p(z,θ). In Eq. (5) a gas density linear in pressure was also assumed. In this case the profile takes the form of Eq. (9), where P_1 and P_0 are the pressures at the bottom and the top of the crevice, respectively, and the definite integral involved is denoted I_z.
The Flow Contribution δA_f
The flow of gas up through the crevice between the piston and cylinder contributes a drag force that must be accounted for. Assuming Poiseuille flow in the crevice, the drag force is given by Eq. (11). Numerically integrating Eq. (11) using the fitting functions r_c(z,θ) and r_p(z,θ) with the same pressure profile as in the previous section and converting the result to a fractional area gives Eq. (12): the drag force (since it is acting upward in this case) serves to increase the effective area of the piston by a fractional amount of about 44.6 × 10⁻⁶. Adding the contributions from Eqs. (4), (7) and (12) gives the total effective area.
Uncertainty in the Numerical Integration of A_base, δA_s, and δA_f
The principal uncertainty in the numerical calculation of A_base, δA_s, and δA_f arises from the uncertainty in the dimensional measurements and the simplifying assumptions involved in calculating the pressure profile. A sensitivity check on the integration's dependence on the input parameters showed that the uncertainty in the average radius of the piston, u(r_p), produced about a 0.43 × 10⁻⁶ uncertainty in the area of the gauge. A similar check of the uncertainty of the derivative, dr_p/dz ≈ 0.4 nm, showed about a 0.19 × 10⁻⁶ contribution to the uncertainty in the effective area. Similar sensitivity checks on the radius of the cylinder, r_c, and on dr_c/dz produced 0.42 × 10⁻⁶ and 0.30 × 10⁻⁶ shifts in the effective area, respectively. With regard to the calculation of the pressure profile, the simplifying assumption of Eq. (8) was checked by using an alternative form of the average crevice width in Eq. (9), with the result that dA/A changed by about 0.1 × 10⁻⁶ mm²/mm². Several integrations were done in which the cylinder was rotated with respect to the piston. This resulted in small differences, <0.15 × 10⁻⁶.
Moving the piston and cylinder's vertical position relative to one another by 3.5 mm resulted in a 1.0 × 10⁻⁶ change in effective area. Root-sum-squaring the seven contributions to the uncertainty in the effective area, namely u(r_c), u(dr_c/dz), u(r_p), u(dr_p/dz), u(h), u(θ_p) and u(z_p − z_c), gives an uncertainty of 1.2 × 10⁻⁶. Lastly, with regard to the flow contribution, another model for the flow was assumed [14]. This model takes into account transition flow within the clearance and generally gives an effective area slightly smaller than the Poiseuille flow model. This alternative model resulted in an effective area 2.5 × 10⁻⁶ below the Poiseuille flow model. The average value of the effective area for the two models is A_NI = (1007.925 2 ± 0.002 2) mm², Eq. (15). We have taken as the uncertainty for model-dependent crevice effects the standard deviation obtained from the two models, which is 1.8 × 10⁻⁶. The uncertainty in Eq. (15) is obtained by combining the uncertainty of the numerical integration, 0.0012 mm², with the flow-model uncertainty, 0.0018 mm², in quadrature. Note that the uncertainty given in Eq. (15) would result in an uncertainty in generated pressure of 2.2 × 10⁻⁶ P. This, however, does not include uncertainties from mass loading and other "in use" effects when used in a secondary calibration.
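The root-sum-square combination quoted above can be checked directly from the seven listed contributions (all in parts in 10⁶); the assignment of the numerical values to the seven labels follows the order in which they are quoted in the text, and the calculation is just arithmetic on those numbers.

```python
# Root-sum-square of the seven effective-area uncertainty contributions (ppm).
contributions_ppm = {
    "u(r_p)": 0.43, "u(dr_p/dz)": 0.19,
    "u(r_c)": 0.42, "u(dr_c/dz)": 0.30,
    "pressure-profile assumption, u(h)": 0.10,
    "piston/cylinder rotation, u(theta_p)": 0.15,
    "relative vertical position, u(z_p - z_c)": 1.00,
}
rss = sum(v**2 for v in contributions_ppm.values()) ** 0.5
print(f"combined = {rss:.2f} x 1e-6")   # ~1.23e-6, i.e. the quoted 1.2e-6
```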
Thermal Expansion Coefficient
For operation of the gage at temperatures other than 20 °C, a thermal expansion coefficient for the piston/cylinder assembly's area is needed. With the special environmental chamber constructed to fit the gage, the coefficient was found to be α = (8.754 ± 0.03) × 10⁻⁶/K, where the uncertainty represents a coverage factor k = 1. Thus, when the gage is used near the Pressure and Vacuum Group's reference temperature of 23 °C, an additional uncertainty of only (23 °C − 20 °C) × (0.03 × 10⁻⁶/K) = 0.09 × 10⁻⁶ is incurred.
Pressure Coefficient
For operation of the gage over the intended pressure range, (0.05 to 1.0) MPa, a pressure coefficient is needed. It can be estimated from elasticity theory using Young's modulus and Poisson's ratio [15] or obtained from calibrations to other gages. We obtained values for Young's modulus from speed of sound measurements on the piston and cylinder [12,16]. The speed of sound was measured ultrasonically and found to be (6380 ± 140) m/s for the piston and (6580 ± 146) m/s for the cylinder (1σ). With a material density of 14 × 10 3 kg/m 3 , Eq. (1) yields Young's moduli of (5.70 ± 0.24) × 10 11 Pa and (6.06 ± 0.26) × 10 11 Pa for the piston and cylinder respectively, (1σ).
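As a quick consistency check of Eq. (1), E = ρc², against the sound speeds and density quoted above:

```python
# Young's modulus from the measured longitudinal sound speeds and the density.
rho = 14e3                      # kg/m^3, tungsten carbide density quoted above
for part, c in [("piston", 6380.0), ("cylinder", 6580.0)]:
    E = rho * c**2              # Eq. (1)
    print(f"{part:8s}: E = {E:.2e} Pa")
# piston  : E = 5.70e+11 Pa
# cylinder: E = 6.06e+11 Pa
```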
Jain et al. derived the pressure coefficients for both piston and cylinder for this gage using elasticity theory and the thick-wall formula [7]. (In that report the gage is referred to as NIST-9.) They used a value b = 8.0 × 10⁻¹² Pa⁻¹ for the pressure coefficient of the gage. No uncertainty was given, but values from calibrations against other gages yield a spread of values between 2.8 × 10⁻¹² Pa⁻¹ and 5.18 × 10⁻¹² Pa⁻¹. An axisymmetric finite element model produced a value of (10 ± 2.0) × 10⁻¹² Pa⁻¹, based on a Young's modulus of 6.0 × 10¹¹ Pa and a Poisson's ratio of 0.218. If one takes a rectangular (uniform) distribution of values for b between the lowest value, 2.8 × 10⁻¹² Pa⁻¹, and the highest value, 10 × 10⁻¹² Pa⁻¹, one obtains the value b = 6.4 × 10⁻¹² Pa⁻¹, where the standard uncertainty is 2.1 × 10⁻¹² Pa⁻¹.
Clearance
The clearance, h, between the piston and cylinder can be determined using a variety of techniques and although they do not provide direct help in reducing the uncertainty of the effective area, based on the dimensional measurements, these other measurement techniques can provide consistency checks on the dimensional measurements. Primarily, the radial clearance can be obtained from the dimensions of the piston and cylinder, secondly via fall-rate measurements and thirdly via capacitance measurements.
Via Dimensional Measurements
The dimensional measurements lead to an average clearance of h_Dim = (D_c − D_p)/2, where h_Dim is the clearance. The average diameters D_c and D_p were determined from direct dimensional measurements and were listed earlier.
Via Fall-Rate Measurements
Fall-rate measurements, interpreted with the Poiseuille flow equation for a uniform crevice [17,18], were also used to obtain the clearance, Eq. (19). Here η is the viscosity of the pressure fluid (nitrogen), R is the radius of the piston, L is the engagement length, P_0 and P_1 are the absolute pressures at the top and the bottom of the crevice, respectively, and dz/dt is the fall rate. This method has been used by Molinar and Vatasso [19], by Dolinskii et al. [20], and by Meyers and Jessup [21].
The fall-rates at several pressures are listed in Table 4. The clearance h_Poise from Eq. (19) is listed in the 4th column. These values for the clearance are seen to be about 30 % higher than the values obtained from the dimensional measurements, h_Dim, and from the capacitance measurements, h_Cap. (See below.) However, slip-flow phenomena have not been taken into account in Eq. (19). Slip flow has been used before in the interpretation of fall-rate data [22] and can be important in describing flow in narrow channels [23]. When slip flow is taken into account, the apparent clearance is reduced by about 10 %, Eq. (20), where K_Slip is an accommodation coefficient taken to be 1.0 and K_n is the Knudsen number, Eq. (21), with λ the mean free path, Eq. (22). Here R_g is the gas constant, T is the thermodynamic temperature, M is the molar mass of the gas (N₂), η is the viscosity of the gas, and <P> is the average pressure in the crevice. When Eqs. (20) and (21) are used with h_Poise from Eq. (19), values for h_Slip result that are about (0.800 ± 0.110) µm. This is about 10 % larger than h_Dim, but within the combined uncertainty of the different techniques. See Table 4.
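To illustrate why slip flow matters here, the sketch below evaluates the viscosity-based mean free path of nitrogen and the resulting Knudsen number for a crevice of the size found above. The expression for λ is the standard kinetic-theory form and may differ in detail from the paper's Eq. (22), and the average crevice pressure used is an assumed value from the low end of the operating range.

```python
# Mean free path of N2 and Knudsen number in the piston/cylinder crevice.
import math

R_g = 8.314          # J/(mol K), gas constant
T = 296.0            # K, assumed operating temperature
M = 0.028            # kg/mol, molar mass of N2
eta = 1.76e-5        # Pa s, viscosity of N2 near room temperature
P_avg = 50e3         # Pa, assumed average crevice pressure
h = 0.73e-6          # m, clearance of the order of h_Dim

lam = (eta / P_avg) * math.sqrt(math.pi * R_g * T / (2.0 * M))   # mean free path
Kn = lam / h
print(f"lambda = {lam*1e9:.0f} nm, Kn = {Kn:.2f}")
# Kn of order 0.1 is why slip corrections of order 10 % appear in h_Slip.
```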
Via Capacitance Measurements
Lastly, clearances were determined using capacitance measurements [24], Eq. (23). Here ε_0 is the permittivity of vacuum, K is the dielectric coefficient of the pressure fluid (nitrogen), and C is the measured capacitance. For the interpretation of the capacitance measurements an ideal geometry was assumed, as was the case for the interpretation of the fall-rate measurements using the Poiseuille flow model. Minimal efforts were made to shield extraneous signals from the capacitance gauge. After transients had subsided, very stable operation was found with the piston only in the column and pressurized to a value near 4 kPa. The piston was allowed to float without spinning. Values for the capacitance ranged between 91 nF and 96 nF. Most of the time the piston seemed to self-center for long periods, as indicated by the measured capacitance, which is at a relative minimum when the piston is centered. From time to time the values of capacitance would increase dramatically, indicating that the piston was drifting off center. When more weights were added, some configurations were found to be stable, while others were unstable. The clearances obtained from the capacitance measurements were found to be h_cap ≈ (0.725 ± 0.020) µm.
This is for a pressure of about 4 kPa generated by the piston only.
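Since Eq. (23) is not reproduced in this excerpt, the sketch below uses the ideal coaxial-capacitor relation C = 2πε₀KL/ln(r_c/r_p), which is the usual starting point for such an interpretation; the engagement length L is an assumed placeholder, so the resulting number is illustrative only.

```python
# Invert an ideal coaxial-capacitor model for the radial clearance h.
import math

eps0 = 8.854e-12      # F/m, permittivity of vacuum
K = 1.0               # dielectric coefficient of N2 at ~4 kPa, essentially 1
L = 0.065             # m, assumed piston/cylinder engagement length (placeholder)
r_bar = 17.912e-3     # m, mean radius of the assembly
C = 93e-9             # F, a capacitance in the measured 91-96 nF range

# C = 2*pi*eps0*K*L / ln(r_c/r_p); for h << r, ln(r_c/r_p) ~ h / r_bar
h = r_bar * 2.0 * math.pi * eps0 * K * L / C
print(f"h ~ {h*1e6:.2f} um")    # sub-micrometre, same order as h_Dim and h_cap
```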
Summary
We have characterized a 35 mm dead-weight tester, known within NIST as PG-39, using dimensions obtained from PTB. An effective area was obtained by averaging the eight absolute diameters, four for the piston and four for the cylinder.
In addition, a numerical integration of forces over the surface of the piston was performed and yielded a value about 1.6 × 10⁻⁶ higher than the simple average. For this integration, Poiseuille flow was assumed in the crevice. A second numerical integration was performed in which an alternative model for the flow was assumed [14]. In this case the effective area was 0.9 × 10⁻⁶ lower than the simple average. Averaging the results of the two numerical integrations yields an effective area A_NI = (1007.925 2 ± 0.002 2) mm², which is the recommended value at 20 °C. The standard uncertainty given here also covers the averaged value obtained from the eight absolute diameters. For transferring this characterization to other gages, uncertainties from other sources will come into play and are not covered by this uncertainty. For use at temperatures other than 20 °C, the thermal expansion coefficient for the effective area was measured in our laboratory in a controlled environmental chamber and was found to be α = (8.754 ± 0.03) × 10⁻⁶/K.
For use at higher pressures up to 1 MPa, a pressure coefficient was estimated using a variety of sources. The recommended value is b = (6.4 ± 2.1) × 10 -12 Pa -1 .
Auxiliary measurements (based on fall rates and capacitances) were made on the clearances between the piston and cylinder. These served as checks on the dimensional measurements. These measurements agreed with the dimensional measurement within their combined standard uncertainties. | 4,998 | 2003-03-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Toroidal and poloidal Alfvén waves with arbitrary azimuthal wavenumbers in a finite pressure plasma in the Earth’s magnetosphere
In this paper, in terms of an axisymmetric model of the magnetosphere, we formulate the criteria for which the Alfvén waves in the magnetosphere can be toroidally and poloidally polarized (the disturbed magnetic field vector oscillates azimuthally and radially, respectively). The obvious condition of equality of the wave frequency ω to the toroidal (poloidal) eigenfrequency Ω_TN (Ω_PN) is a necessary and sufficient one for the toroidal polarization of the mode and only a necessary one for the poloidal mode. In the latter case we must also add to it a significantly stronger condition |Ω_TN − Ω_PN|/Ω_TN ≫ m⁻¹, where m is the azimuthal wave number and N is the longitudinal wave number. In cold plasma (the plasma to magnetic pressure ratio β = 0) the left-hand side of this inequality is too small for the routinely recorded (in the magnetosphere) second harmonic of radially polarized waves; therefore these waves must have non-realistically large values of m. By studying several models of the magnetosphere differing by the level of disturbance, we found that the left-hand part of the poloidality criterion can be satisfied by taking into account finite plasma pressure for the observed values of m ∼ 50-100 (and in some cases, for even smaller values of the azimuthal wave numbers). When the poloidality condition is satisfied, the existence of two types of radially polarized Alfvén waves is possible. In magnetospheric regions where the function Ω_PN is a monotonic one, the mode is poloidally polarized in a part of its region of localization. It propagates slowly across magnetic shells and changes its polarization from poloidal to toroidal. The other type of radially polarized waves can exist in those regions where this function reaches its extreme values (ring current, plasmapause). These waves are standing waves across magnetic shells, having a poloidal polarization throughout the region of their existence. Waves of this type are likely to be exemplified by giant pulsations. If the poloidality condition is not satisfied, then the mode is toroidally polarized throughout the region of its existence. Furthermore, it has a resonance peak near the magnetic shell, the toroidal eigenfrequency of which equals the frequency of the wave.
Introduction
A great variety of Alfvén waves has been recorded in the magnetosphere to date. They are usually categorized into short-period (Pc 1-2 and Pi 1) and long-period (Pc 3-5 and Pi 2) oscillations. Of these waves, the former represent waves traveling along field lines, while the latter are standing waves similar to vibrations of guitar strings. Standing waves have small longitudinal wave numbers (i.e. the number of half-waves fitting along a field line between magnetically conjugate points of the ionosphere), N ∼ 1, while traveling waves represent packets composed of harmonics with N ≫ 1. Recently, it has been customary to categorize the long-period pulsations into azimuthally large-scale waves (the azimuthal wave number m ∼ 1) and azimuthally small-scale waves (m ≫ 1). A physical substantiation for such a categorization is the difference of the sources of these two wave modes: Alfvén oscillations with small m are generally thought of as being generated by a magnetoacoustic wave arriving from the outer boundary of the magnetosphere, and waves with large m by some source inside the magnetosphere (Glassmeier, 1995). Furthermore, long-period hydromagnetic waves in the magnetosphere are classed according to the predominant polarization (Anderson et al., 1990): azimuthally polarized, or toroidal, if the magnetic field vector oscillates in an azimuthal direction; radially polarized, or poloidal, if the magnetic field vector oscillates in a radial direction; and compressional if there is a significant disturbance of the magnetic field modulus (within the linear approximation, this signifies the presence of a longitudinal component of the wave's magnetic field). The question is how these categorizations are correlated, i.e. under what conditions the waves with particular values of m can have a particular polarization.
Usually, this question is given a very simple answer: when m ∼ 1 the Alfvén wave is predominantly toroidally polarized, and when m ≫ 1 its polarization is predominantly poloidal. This conclusion is in general agreement with experimental data. A theoretical substantiation for this conclusion is the solution of the MHD equation in dipole geometry in two limiting cases: when m = 0 the mode is purely toroidal, and when m = ∞, it is purely poloidal (Dungey, 1967; Radoski, 1967). Nevertheless, the large value of the azimuthal wave number cannot be recognized as a sufficient condition for the poloidal polarization of the Alfvén wave. Krylov et al. (1981) showed that both toroidal and poloidal modes can have both low and large m values. For example, at any m in a plasma that is inhomogeneous across magnetic shells, at a certain frequency of the wave there is a surface on which the wave field has a singularity accompanied by the toroidal polarization of the mode (Krylov and Lifshitz, 1984; Wright and Thompson, 1994). Leonovich and Mazur (1990) noticed one paradox which called into question the very existence of poloidal modes. The paradox is as follows. The eigenfrequency of poloidal oscillations varies across magnetic shells. In order for the mode to be poloidally polarized, it is necessary that the wave frequency ω equals the eigenfrequency of poloidal oscillations. This means that the poloidal mode is concentrated only on the magnetic shell where these frequencies are equal. In this case, however, the radial component of the wave vector must be equal to infinity, as well as the azimuthal component, the role of which is played by the number m. On the other hand, in order for the Alfvén wave to be poloidally polarized, it is necessary that the radial wavelength exceeds significantly the azimuthal one.
To resolve this paradox, Leonovich and Mazur (1990) investigated the wave field structure by assuming that the wave frequency differs little from the poloidal eigenfrequency. They showed that the wave's transverse structure is described by the Airy equation, the solution of which has the form of a wave outside of the poloidal surface. The mode is poloidally polarized if the radial wavelength far exceeds the azimuthal wavelength. This condition is satisfied at sufficiently large values of the azimuthal wave number m. Hence, the poloidal mode does exist, but it is not localized near the only one magnetic shell but is more-or-less widely distributed in space.
This example shows that studying the polarization of the mode necessarily leads to the study of its global structure. Of course, such an investigation is important per se, especially now that the system of four CLUSTER satellites holds much promise for the separation of the spatial and temporal structure of the mode (Glassmeier et al., 2001). When studying the structure of the toroidal and poloidal modes, it is appropriate to take into account the plasma inhomogeneity not only across magnetic shells, but also in the direction along the external magnetic field, and, in addition, the field line curvature and finite plasma pressure, because all of these factors affect the difference between the frequen-cies of poloidal and toroidal oscillations (Krylov et al., 1981;Walker, 1987). A study of the global structure of the wave was carried out by Leonovich and Mazur (1993), Klimushkin et al. (1995), Kouznetsov and Lotko (1995), Vetoulis and Chen (1996), and Klimushkin (1998a, b). However, the question still is: What are the conditions and magnetospheric regions where Alfvén waves can have particular polarization properties? This question is addressed in the present paper.
This study is based on using an axisymmetric model of the magnetosphere, taking into account all of the abovementioned factors. Plasma pressure is considered small but finite. The presence of the plasmapause and ring current is taken into account. Our treatment is based on the equations of ideal magnetohydrodynamics, which leads us to exclude storm-time compressional Pc 5 waves from our consideration, as there are grounds to believe that they are mirror modes (Woch et al., 1988), an understanding of which requires going beyond ideal MHD.
This paper is organized as follows. Section 2 provides a system of equations describing MHD waves in plasma of finite but low pressure. In Sect. 3, the frequencies of toroidal and poloidal oscillations are studied analytically and numerically. It is also established in this section that the longitudinal structure of these modes for N ∼ 1 differs little from each other. Based on this fact, in Section 4 we derive an ordinary differential equation describing the structure of the wave across magnetic shells. This equation is solved in Sect. 5. In Sect. 6, we summarize our knowledge of the conditions of the toroidal and poloidal polarization of Alfvén waves and carry out a comparison with experimental data. The main results of this study are summarized in Sect. 7.
Basic equations
First, we introduce the following designations: the capital letters B, P and J stand for the equilibrium values of the magnetic field, pressure and current; the small letters b, p and j denote the wave-associated perturbations of these quantities; ξ is the displacement of plasma from the equilibrium position; ρ is the equilibrium plasma density; E is the wave's electric field; and ω is the wave frequency. These quantities are related by the condition of hydromagnetic equilibrium, the Maxwell equation, and the freezing-in condition. We consider the hydromagnetic waves in those magnetospheric regions where the plasma to magnetic pressure ratio β ≡ 8πP/B² is much less than unity. In these regions the equilibrium plasma pressure across and along field lines differs by no more than 20% (Lui and Hamilton, 1992; Michelis et al., 1997); therefore, the anisotropy of the pressure tensor can be neglected. The pressure perturbation can then be found using the adiabaticity condition, the linearized form of which is written following Kadomtsev (1963). A linearized equation of small monochromatic oscillations in plasma, referred to below as Eq. (6), has the form −ρω²ξ + ∇p = …, where the right-hand side is the linearized magnetic (Ampère) force. We now introduce a curvilinear coordinate system {x¹, x², x³}, in which the field lines play the role of coordinate lines x³, i.e. such lines along which the other two coordinates are invariable (recall that superscripts and subscripts denote contravariant and covariant coordinates, respectively). In this coordinate system the stream lines are coordinate lines x², and surfaces of constant pressure (magnetic shells) are coordinate surfaces x¹ = const. This coordinate system is orthogonal if J · B = 0 (Salat and Tataronis, 2000). The coordinates x¹ and x² have the role of the radial and azimuthal coordinates, and we shall use the McIlwain parameter L and the azimuthal angle ϕ, respectively, to represent them. The physical length along a field line is expressed in terms of an increment of the corresponding coordinate as dl_3 = √g_3 dx³, where g_3 is a component of the metric tensor and √g_3 is the Lamé coefficient. Similarly, dl_1 = √g_1 dx¹ and dl_2 = √g_2 dx². The determinant of the metric tensor is g = g_1 g_2 g_3. This paper considers an axisymmetric model of the magnetosphere. In this case all perturbed quantities can be specified in the form exp(−iωt + ik_2 x²), where k_2 is the azimuthal component of the wave vector. If the azimuthal angle ϕ is used as the coordinate x², then k_2 = m, where m is the azimuthal wave number. The "physical" value of the azimuthal component of the wave vector is k̄_2 = k_2/√g_2.
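The relations referred to in this paragraph are not reproduced in this extract. The block below sketches the standard ideal-MHD forms they usually take in Gaussian units; it should be read as an assumption about the dropped equations, consistent with the symbol definitions above, rather than a verbatim restatement of the paper's Eqs. (1)-(6).

```latex
% Assumed standard forms for the equilibrium, Maxwell, freezing-in, adiabaticity,
% and linearized momentum relations referred to above (Gaussian units).
\begin{align*}
  &\nabla P = \tfrac{1}{c}\,\mathbf{J}\times\mathbf{B},
   \qquad \nabla\times\mathbf{B} = \tfrac{4\pi}{c}\,\mathbf{J},
   \qquad \mathbf{b} = \nabla\times(\boldsymbol{\xi}\times\mathbf{B}),\\
  &p = -\,\boldsymbol{\xi}\cdot\nabla P - \gamma P\,\nabla\cdot\boldsymbol{\xi},\\
  &-\rho\omega^{2}\boldsymbol{\xi} + \nabla p
    = \tfrac{1}{4\pi}\bigl[(\nabla\times\mathbf{b})\times\mathbf{B}
      + (\nabla\times\mathbf{B})\times\mathbf{b}\bigr].
\end{align*}
```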
Unlike k_2, the value of k̄_2 depends on the radial and longitudinal coordinates, because such a dependence is contained in the metric tensor component g_2; in the equatorial plane, in particular, k̄_2(L, x³_eq) = k_2/L = m/L. An important consequence of Eq. (6) is the smallness of the longitudinal component of the plasma displacement vector when compared with its transverse component when β ≪ 1. Within the approximation of ideal plasma conductivity, the longitudinal component of the wave's electric field is zero, i.e. the electric field is a two-dimensional one; it lies on surfaces orthogonal to field lines. According to the Helmholtz theorem (see, for example, Morse and Feshbach, 1953), an arbitrary vector field can be split into the sum of potential and vortical components. By applying this theorem to the two-dimensional field E, we represent it in terms of two scalar potentials, Φ and Ψ (Eq. (7)), where e_∥ = B/B. In a homogeneous plasma, the "potentials" Φ and Ψ describe the electric field of the Alfvén wave and of the fast magnetosound (FMS), respectively (Klimushkin, 1994; Glassmeier, 1995). Regarding the third MHD mode, slow magnetosound, it can be neglected for plasmas with β ≪ 1. Let all perturbed quantities in Eq. (6) be expressed in terms of the wave's electric field written as Eq. (7). In obtaining the equations relating Φ and Ψ at finite but small pressure, we shall neglect the second and higher degrees of β. By letting the operator ∇⊥ act on the left-hand and right-hand sides of Eq. (6) (i.e. by taking its divergence over the transverse coordinates), in view of Eqs. (1)-(5) we obtain Eq. (8). Here L̂_A is the Alfvén operator; its definition involves the operator of the toroidal mode L̂_T and the operator of the poloidal mode L̂_P, where A = B/√(4πρ) is the Alfvén velocity, 1/R is the local curvature of a field line, and s = √(γP/ρ) is the sound velocity. L̂_c in Eq. (8) is the operator describing the FMS influence on the Alfvén mode. L̂_F is the operator of the fast mode. The operator L̂_c⁺ (Hermitian conjugate to the operator L̂_c) describes the back influence of the Alfvén mode on FMS.
In the limiting case of a homogeneous plasma, L̂_c, L̂_c⁺ = 0, and Eq. (8) becomes (ω² − k_∥²A²)Φ = 0. This equation has a nontrivial solution when the dispersion relation of the Alfvén wave holds. It is for this reason that we refer to the potential Φ as the function describing the Alfvén wave field. Equation (10) for homogeneous plasma has an analogous form: its nontrivial solution exists provided that the dispersion relation for FMS in a plasma with 0 < β ≪ 1 is satisfied. Thus, the potential Ψ describes the field of the fast magnetosound.
For a further understanding of this system, we invoke the only conceivable method of analytical research, perturbation theory. To do this, assume that the operators L̂_c and L̂_c⁺ are small when compared with the operators L̂_A and L̂_F (see also Fedorov et al., 1998). This is true if the scale of variation of the equilibrium magnetospheric parameters, a, to which the operators L̂_c and L̂_c⁺ are inversely proportional, far exceeds the scales of variation of the functions Φ and Ψ. This assumption looks rather natural, since it is well known that near the Alfvén resonance surface there occurs a singularity of the wave field, and the function Φ changes quite drastically within a very short distance; moreover, the characteristic radial wavelength a/m when m ≫ 1 is much smaller than the characteristic scale of space plasma inhomogeneity (or, roughly speaking, the size of the magnetosphere).
First, we turn our attention to Eq. (10). Formally, it may be treated as an inhomogeneous differential equation, the general solution of which is the sum of the solution of the homogeneous equation and a particular solution of the inhomogeneous equation. The solution of the homogeneous equation describes the FMS structure without taking into account the interaction with the Alfvén mode. The solution of this equation in cold plasma was addressed in the papers of Lee (1996) and Leonovich and Mazur (2000a, b), who established that at low frequencies the FMS transparent region lies at the edge of the magnetosphere and, as m increases, is pressed ever more strongly against the magnetopause. This solution does not contain any singularities. Obviously, the influence of small pressure implies merely a slight change in the shape of the FMS transparent region. In this paper, however, our concern is primarily with the Alfvén mode, whereas FMS is of interest to us only as its source. Following perturbation theory, it is the solution of the homogeneous magnetosound equation which should be substituted for the FMS potential into the Alfvén equation (8). For a qualitative study of the particular solution of the inhomogeneous magnetosound equation, it is worthwhile to note that in the region of localization of the Alfvén mode, at large m, the right-hand side of the equation is dominated by the transverse Laplacian. It is this operator which defines the longitudinal (compressional) component of the wave's magnetic field, as can be readily ascertained using formulas (3) and (7). The potential of the Alfvén mode is not involved in the definition of the longitudinal magnetic field directly. However, the transverse Laplacian in the preceding formula is expressed in terms of the Alfvén potential, i.e.
b_∥ is proportional to (cm/ω) times the Alfvén potential (Eq. 11). Thus, the coupling of FMS with the Alfvén mode in an inhomogeneous magnetic field gives rise to a marked longitudinal component of the magnetic field in the region of localization of the Alfvén wave; see also Safargaleev and Maltsev (1986). As far as the electric field and the transverse components of the FMS magnetic field are concerned, they are lost against the background of the corresponding components of the Alfvén wave, as can be readily demonstrated. Note that magnetosound need not necessarily be the source of the Alfvén wave. In particular, when m ≫ 1 this mode can be neglected altogether, since its transparent region is very narrowly localized at the magnetopause (Leonovich and Mazur, 2001). That is why it is usually believed that high-m waves must be excited by a source inside the magnetosphere. Currents in the ionosphere (Leonovich and Mazur, 1993) and currents in the magnetosphere (Saka et al., 1992) can play the role of such a source. We now introduce the function q to describe all possible sources of the Alfvén mode. Equation (8) may then be written in the form of Eq. (12).
Toroidal and poloidal modes
As is evident from the expression (7), when the radial derivative term |∂₁/√g₁| far exceeds the azimuthal term |m/√g₂| applied to the wave potential, the electric field of the Alfvén wave is dominated by the radial component; otherwise the azimuthal component is dominant. On the contrary, in the former case the main contribution to the wave's magnetic field is made by the azimuthal component, and in the latter case by the radial component. The wave structure in the former case is determined by the toroidal operator, and in the latter case by the poloidal operator. Let T_N and P_N denote the eigenfunctions of these operators. These functions alone do not describe the global structure of the mode, because they are not solutions of Eq. (12), but they play the role of "scaffolding" for solving Eq. (12), as will be described in Sect. 4. Let Ω_TN and Ω_PN denote the corresponding eigenfrequencies of the toroidal and poloidal operators. The difference between these eigenfrequencies is often referred to as the polarization splitting of the Alfvén oscillation spectrum. These quantities are functions of the radial coordinate. Plasma pressure influences the value of Ω_TN as well as of Ω_PN, because the definition of both the toroidal and poloidal operators involves the coefficients of the metric tensor determined by the equilibrium magnetic field, which, in turn, depends on the current, i.e. on the derivative of pressure along the radial coordinate. However, pressure is involved explicitly only in the definition of the operator L̂_P, through the quantity η. For that reason, taking it into account has a greater influence on the value of Ω_PN than on Ω_TN.
Further, we introduce the notion of the toroidal and poloidal surfaces defined by the equations ω = Ω_TN(x¹) (13) and ω = Ω_PN(x¹) (14). The graphical solution of these equations is illustrated by Fig. 1. Let us designate the distance between the toroidal and poloidal surfaces in the equatorial plane as Δ_N(ω) = x¹_TN − x¹_PN. By the order of magnitude, Δ_N ∼ a |Ω²_TN − Ω²_PN| / Ω²_TN (see Appendix A). The noncoincidence of the toroidal and poloidal surfaces is caused by the polarization splitting of the spectrum, i.e., ultimately, by the field line curvature. Thus the magnetospheric model under study involves a parameter, Δ_N, which has no analog either in a homogeneous plasma or in the one-dimensionally inhomogeneous model with straight field lines.
To study the functions T_N and P_N, to calculate the frequencies Ω_TN and Ω_PN, and to find the coordinates of the toroidal and poloidal surfaces, we avail ourselves of the fact that when β ≪ 1 the deviation of the magnetospheric magnetic field from a dipole one can be neglected. We considered three models of magnetospheric plasma: model I corresponds to a low level of magnetospheric disturbance when a significant time has elapsed after the storm; model II corresponds to a high level of disturbance; and model III also corresponds to a low level of disturbance, but when a short time has elapsed after the storm. We approximated the plasma pressure by an expression in which L₀ is the coordinate of the magnetic shell on which the pressure reaches its maximum value and D is the parameter that determines the characteristic width of the pressure profile. The coordinate of maximum pressure is taken to be L₀ = 3.5 in all three models. We put D = 2 in model I and D = 2 in model II, in an attempt to reflect the fact that the higher the level of magnetic disturbance, the narrower the localization of the current across the magnetic shells (Sugiura, 1972; Lui et al., 1987; Lui and Hamilton, 1992; Michelis et al., 1997), the current being related to pressure by the relation (1). For model III we took D = 0.7, because in this case we take into account the strong current inside the plasmapause (Williams and Lyons, 1974). It is worth noting that, according to observational data (Sugiura, 1972), there is no jump of plasma pressure at the plasmapause. Figures 2a and b present the radial profiles of the pressure P and the current J for models I, II, and III. As far as the quantity P₀ is concerned, its specification is equivalent to specifying the parameter β on the shell with the coordinate L₀. Here we started from the fact that, in general, the higher the level of magnetic disturbance, the higher the pressure. Accordingly, in model II the maximum value of beta must be higher than that in models I and III. We chose the following numerical values: β(L₀) = 0.055 for models II and III, and β(L₀) = 0.4 for model I. This parameter is plotted in Fig. 2c. Plots of the quantity η, which determines the polarization splitting of the spectrum at finite pressure according to formula (A5), are presented in Fig. 3. Note that these figures plot the values of these quantities in the equatorial plane; β and η decrease rapidly with the distance from it because, by virtue of the MHD equilibrium condition, plasma pressure is constant along a field line, whereas the magnetic field grows with the geomagnetic latitude θ. Noteworthy is the fact that in some regions of the magnetosphere (especially in models II and III) the condition β_eq ≪ 1 is violated. Even in such regions, however, the field-line-averaged value of β is small compared to unity. Nevertheless, no calculations were performed for such regions. Note, by the way, that in the polarization splitting of the spectrum (Eq. A5) the value of η (which is proportional to the parameter β) is involved just in the form of an integral along a field line.
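The explicit pressure expression is not reproduced above. As a purely illustrative sketch we assume a Gaussian-type profile P(L) = P₀ exp[−(L − L₀)²/D²]; this functional form is our assumption, and only L₀ and the D values are taken from the text. The sketch shows how the associated azimuthal current, which by the equilibrium condition scales with the radial pressure gradient, narrows as D decreases:

```python
import numpy as np

def pressure_profile(L, P0=1.0, L0=3.5, D=2.0):
    """Assumed Gaussian-type pressure profile across L-shells; the exact expression
    used in the paper is not reproduced here, so this is only an illustration."""
    return P0 * np.exp(-((L - L0) / D) ** 2)

L = np.linspace(1.5, 10.0, 500)
for D in (2.0, 0.7):   # broad (quiet-time) and narrow (strong inner current) profiles
    P = pressure_profile(L, D=D)
    # The azimuthal current follows, up to factors, from the equilibrium condition
    # grad P = (1/c) J x B, i.e. it scales with dP/dL.
    dP_dL = np.gradient(P, L)
    print(f"D = {D}: pressure peaks at L = {L[np.argmax(P)]:.2f}, "
          f"|dP/dL| peaks at L = {L[np.argmax(np.abs(dP_dL))]:.2f}")
```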
The function that approximates the Alfvén velocity profile, with the plasmapause taken into account, was taken from a paper of Leonovich and Mazur (2000b) with minor modifications; in it, A₁ = 250 km/s, A₂ = 500 km/s, L₁ = 2.5, L₂ = 5, c₁ = 1.5, c₂ = 1. The parameter D_pp = 0.1 determines the width of the plasmapause, and the quantity L_pp determines its coordinate. Since, with increasing magnetospheric disturbance, the plasmapause is displaced toward the Earth (see, for example, Chappell et al., 1970), we put L_pp = 5.5 for models I and III, and L_pp = 3.0 for model II. Figure 2d presents the radial profile of the Alfvén velocity A for models I, II, and III. It must be added that the models which we have used do not nearly exhaust the whole variety of conditions in the magnetosphere. Even at the same value of the K_p-index, the profiles of equilibrium quantities can be quite different; situations are possible with several plasmapauses, with the maximum of pressure shifted onto the outer L-shells, etc. The numerical values in these formulas can also be open to argument. Nevertheless, these models are, in a sense, extreme ones and permit the values of the poloidal and toroidal eigenfrequencies to be judged qualitatively in some limiting and most interesting cases.
Results of our calculations of the frequencies are shown in Figs. 4-6 (to ease the comparison with observational data, the frequencies f_TN,PN = Ω_TN,PN/2π are presented); for the purposes of illustration we also give the values of these quantities in a cold plasma, β = 0. We note once more that for models II and III we studied only those regions where β ≪ 1, since otherwise we violate the limits of applicability of our theory. As is evident, pressure has quite a substantial influence on the value of the poloidal frequency. For the fundamental harmonic (N = 1), at positive values of η the difference of the poloidal and toroidal frequencies is considerably higher than that in a cold plasma, and when η < 0, pressure leads to a change in the sign of the polarization splitting of the spectrum in some regions of the magnetosphere (as is evident from Figs. 4 and 5; in a cold plasma the poloidal frequency is always less than the toroidal frequency). With an increase in the harmonic number, the polarization splitting of the spectrum decreases, but much more slowly than in the case of zero pressure. For the second and higher harmonics the curves Ω_TN(x¹) and Ω_PN(x¹) in a cold plasma practically coincide, whereas in the presence of pressure the difference Ω_TN − Ω_PN is quite pronounced even for N ≥ 2 (the plots for N > 2 for models I and II and for N > 1 for model III are not given here, to save room). Notice also the appearance of additional extrema of the function Ω_PN(x¹) in models II and III in regions of strong current, which are not accompanied by extrema of the function Ω_TN(x¹); in a cold plasma the extrema of these two functions occurred at the plasmapause only. It is also of interest to investigate the functions Δ_N(ω) for different N and for different models. Here we confined ourselves only to models I and II (Figs. 7 and 8). Figure 7 shows that in model I plasma pressure makes the poloidal surface shift to more distant magnetic shells compared to the toroidal surface, whereas in the cold plasma case the poloidal surface is always closer to the Earth than the toroidal surface. Furthermore, when N = 1, even in cold plasma the value of Δ_N is relatively large, so that plasma pressure can contribute to a decrease in the distance between the toroidal and poloidal surfaces (see Fig. 7a, b). On the other hand, when N = 2 this distance is very small in cold plasma, and pressure contributes greatly to its increase. In model II (see Fig. 8) pressure generally increases the width of the region between the toroidal and poloidal surfaces, and it shifts the poloidal surface even closer to the Earth than in cold plasma (although its behavior may be the opposite at higher frequencies).
Thus, we can draw the following general conclusion: usually, pressure contributes to an increase in the polarization splitting of the spectrum and, hence, to an increase in the distance between the toroidal and poloidal surfaces.
We now turn our attention to the question of the toroidal and poloidal eigenfunctions. Using the WKB approximation in the longitudinal coordinate, it is possible to show that when N ≫ 1 these functions differ rather strongly from one another, even in a cold plasma (Leonovich and Mazur, 1993). At small N the form of the functions T_N(x³) and P_N(x³) can only be determined numerically, but it is the waves with small N that manifest themselves in the form of the geomagnetic Pc 3-5 pulsations addressed in this paper. Results of our calculations of these functions for N = 1, 2 for model I are presented in Fig. 9. It is evident from the plots that for the first two harmonics the differences between the poloidal and toroidal eigenfunctions are reasonably small. In models II and III this conclusion remains valid. Based on this small difference between the functions T_N(x³) and P_N(x³), which determine the longitudinal structure of long-period Alfvén waves, in the next section we shall reduce the partial differential Eq. (12) to an ordinary differential equation describing the structure of the wave across magnetic shells.
The question arises as to whether it is possible to extend our results to a more general case where the inequality β ≪ 1 does not hold. Klimushkin (1998a) studied the structure of MHD waves for arbitrary β, but with m ≫ 1. There exist two modes of MHD oscillations in that limit: the Alfvén mode and the slow magnetosound mode (SMS); in the m ≫ 1 case fast magnetosound (FMS) can be neglected (whereas at this point we consider arbitrary m, but β ≪ 1, so the Alfvén mode and FMS exist, but SMS is unimportant). The coupled Alfvén and SMS modes are described by the system of Eqs. (36) and (37) of the cited reference. A study of this system showed that when β ∼ 1, in addition to the Alfvén resonance surface (toroidal surface), there arises the SMS resonance surface, with which one more poloidal surface is associated. It is easy to show, however, that when the toroidal frequency far exceeds the SMS resonance frequency, the equation corresponding to the Alfvén mode reduces approximately to Eq. (12) of this paper, but with no terms containing the FMS potential (the absence of these terms is, of course, accounted for by the fact that they are responsible for the FMS, which is absent in the limit m ≫ 1). Further, numerical calculations performed by Cheng et al. (1993) and Lui and Cheng (2001) showed that the SMS resonance frequency is indeed much lower than Ω_TN (and the cited authors did not introduce the limitation β ≪ 1). The reason seems to lie in the above-mentioned fact that even if β_eq ∼ 1 at the equator, at high latitudes beta decreases rapidly due to the crowding of field lines. Hence, we can conclude that Eq. (12) describes the Alfvén waves qualitatively even if β_eq ≤ 1.
The equation for the Alfvén wave structure across magnetic shells
The toroidal and poloidal modes are two limiting cases of Alfvén waves in the magnetosphere. If their longitudinal structure differs little from one another, then it can be suggested that in the general case, too, the longitudinal structure of field line oscillations differs little from the toroidal function. The wave potential may then be represented as the product of the toroidal eigenfunction T_N(x³) and a radial function R_N(x¹), plus a small correction δ_N (Eq. 16). Let us assume that the Alfvén wave is sufficiently narrowly localized across magnetic shells, and that the regions of localization of different N-harmonics do not cross each other. Since the characteristic scale of variation of the function T_N across magnetic shells coincides by the order of magnitude with the scale of variation of the equilibrium parameters a (roughly speaking, with the size of the magnetosphere), we can formulate a limitation on the function R_N (Eq. 17). To determine the radial structure of the wave, specified by the function R_N, we use the method of successive approximations, treating the deviation of the wave potential from the toroidal eigenfunction as a small perturbation. We substitute Eq. (16) into Eq. (12), multiply the resulting expression by T_N, and integrate along the field line from the point x³₋ to the point x³₊ of the intersection of the field line with the ionosphere; in doing this, we neglect small terms and obtain Eq. (18). The derivation of this equation was based on using the normalization condition for the function T_N (Eq. A2) and the Hermitian character of the operator L̂_T. We transform the second term on the left-hand side of Eq. (18) by making use of the Hermitian nature of the operator L̂_P and introducing the difference between the toroidal and poloidal eigenfunctions, φ_N = P_N − T_N. Since the value of φ_N is assumed small, the second term on the right-hand side can be neglected. As a result, we obtain an ordinary differential equation (19) describing the radial structure of the wave, written in terms of the abbreviations K_N and q_N introduced at this step. Using numerical calculations it was established that the value of K_N coincides by the order of magnitude with the azimuthal component of the wave vector in the equatorial plane, m/L (Fig. 10).
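The explicit form of Eq. (19) is not reproduced above. A schematic form consistent with its behaviour as described below (a logarithmic singularity where ω equals the toroidal eigenfrequency, a turning point where it equals the poloidal one) would be, under our assumed notation,

```latex
\frac{\partial}{\partial x^{1}}
\Bigl[\omega^{2}-\Omega_{TN}^{2}(x^{1})\Bigr]\frac{\partial R_{N}}{\partial x^{1}}
\;-\;K_{N}^{2}\Bigl[\omega^{2}-\Omega_{PN}^{2}(x^{1})\Bigr]R_{N}\;=\;q_{N},
```

where Ω_TN and Ω_PN are the toroidal and poloidal eigenfrequencies and K_N is the effective azimuthal wavenumber. This is only a sketch of the expected structure of the equation, not a verbatim restoration of Eq. (19).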
In publications on MHD waves in a two-dimensionally inhomogeneous magnetosphere, Eq. (19) was first reported by Leonovich and Mazur (1997), who also solved it numerically. An important difference between our article and that paper is that we obtained this equation for a plasma with finite pressure.
It remains to add a boundary condition for this equation. A natural boundary condition with respect to the radial coordinate is the absence of any growth of the potential when x¹ → ∞: the function R_N must remain bounded in the opaque regions (Eq. 20).
Alfvén waves with the toroidal and poloidal polarization in different regions of the magnetosphere
At this point we introduce the quantity ν_N ≡ K_N Δ_N, the number of azimuthal wavelengths fitting into the transparent region. Since the estimate (15) holds and, by the order of magnitude, K_N ∼ m/a, one has ν_N ∼ m |Ω²_TN − Ω²_PN| / Ω²_TN. There are two possible limiting cases, ν_N ≪ 1 and ν_N ≫ 1, which will be considered in Sects. 5.1 and 5.2. Section 5.3 addresses the waves in those regions where the function Ω_PN(x¹) reaches its extreme values.
5.1 Case ν_N ≪ 1: localized toroidal modes
Within the ν_N ≪ 1 approximation, the difference between the toroidal and poloidal surfaces can be neglected. This means that within this approximation the field line curvature is unimportant, and the wave structure qualitatively coincides with the wave field described in earlier publications on field-line resonance (Tamao, 1965; Southwood, 1974; Chen and Hasegawa, 1974). Since in most of the magnetosphere the functions Ω²_TN(x¹) and Ω²_PN(x¹) are monotonically decreasing, we can avail ourselves of the linear expansion (Eq. A6). Equation (19) then takes the form (21) (cf. Tataronis and Grossman, 1973). Note that x¹_TN is a function of the wave frequency ω. Next, a new variable is introduced, in terms of which the solution (22) is obtained.
Fig. 11. Three kinds of wave structure across magnetic shells: localized resonance (a), traveling wave (b), and standing wave in the resonator (c). The relation between the Alfvén wave "potential" shown in the figure and the function R_N used in the text is given by formula (16).
The relation between R_N and the "potential" of the Alfvén wave is given by formula (16), where the function T_N (it will be recalled) depends relatively slowly on the radial coordinate. In Fig. 11a we show the transverse structure of the wave field described by the solution (22). On the toroidal surface this solution has a logarithmic singularity, which, since the classical publications of Chen and Hasegawa (1974) and Southwood (1974), has been regarded as the distinctive property of Alfvén resonance. The singularity can be regularized by taking into account the finite conductivity of the ionosphere, in view of which a boundary condition on the function δ_N is formulated at the ionosphere; it involves χ, the angle between the field line and a normal to the ionosphere, and Σ_P, the Pedersen conductivity of the ionosphere (Leonovich and Mazur, 1993). Then in Eq. (19) there appears an additional term containing γ_N, the decrement of the mode damping at the ionosphere (its value is assumed small compared to the wave frequency, which reflects high ionospheric conductivity). This term vanishes in the case of infinite ionospheric conductivity. It gives rise to a small imaginary addition to x¹_TN in formula (21), Im x¹_TN = 2γ_N a/ω; since γ_N/ω ≪ 1, this addition is small compared with a. With this correction taken into account, the solution (23) remains finite at x¹ = x¹_TN. Hence, it follows that on the toroidal surface (that is, on the magnetic surface where the toroidal eigenfrequency is equal to the wave frequency) there occurs a sharp wave amplitude peak, the characteristic scale of localization of which is of order 2γ_N a/ω ≪ a. And, conversely, at a given magnetic shell the wave has a maximum amplitude in the case where the toroidal eigenfrequency there coincides with the wave's frequency. As is apparent from Fig. 11a, the wave, when ν_N ≪ 1, may be described as a localized resonance, having a toroidal polarization throughout the region of its existence.
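A purely illustrative numerical sketch of this regularized resonance follows (not from the paper; all numbers are arbitrary): a small imaginary shift of the resonance coordinate, standing in for the finite ionospheric conductivity, turns the singular fields into a sharp but finite peak whose width is set by the damping scale.

```python
import numpy as np

# Illustrative (hypothetical) parameters in units of the magnetospheric scale a = 1.
x_T, delta = 0.0, 0.02   # toroidal-surface coordinate and damping scale ~ 2*gamma*a/omega
x = np.linspace(-0.5, 0.5, 100001)

# Potential near the resonance: a logarithm whose argument is shifted off the real
# axis by the damping scale (the singularity is thereby regularized).
potential = np.log(x - x_T + 1j * delta)

# Radial electric field ~ derivative of the potential: a sharp peak of width ~ delta.
E1 = 1.0 / (x - x_T + 1j * delta)

amp = np.abs(E1)
half_max = amp >= 0.5 * amp.max()
print("peak located at x =", x[np.argmax(amp)])
print("full width at half maximum ≈", x[half_max].max() - x[half_max].min())
# With these illustrative values the FWHM comes out close to 2*sqrt(3)*delta ≈ 0.07.
```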
Notice that the mode can be toroidal even when m ≫ 1, provided only that the inequality ν_N ≪ 1 holds. An example of this is the magnetospheric model with straight parallel field lines, where the polarization splitting of the spectrum is absent altogether, i.e. ν_N = 0 for any azimuthal wavelength. Thus, a large value of the azimuthal wave number is not a sufficient condition for the poloidal polarization of the Alfvén wave.
Another feature of this solution is the change in the wave phase by 180°, i.e. the change in sign of the ratio E₂/E₁ at the crossing of the toroidal surface. This follows from the fact that for the Alfvén wave ∂₂E₁ − ∂₁E₂ = 0, whence E₁ = (im)⁻¹ ∂₁E₂ (Southwood, 1974) (25). In this case E₁ ∝ (x¹ − x¹_TN)⁻¹, i.e. E₁ has opposite signs at x¹ > x¹_TN(ω) and x¹ < x¹_TN(ω). We shall return to the relation (25) in Sect. 5.3.

5.2 Case ν_N ≫ 1: poloidal modes that transform into toroidal ones

To solve Eq. (21) for ν_N ≫ 1, we can avail ourselves of the method of matched asymptotic expansions. For the time being, we consider the situation where the toroidal frequency is larger than the poloidal frequency; in this case x¹_TN > x¹_PN. The magnetospheric regions in which this inequality is realized in models I-III are evident from Figs. 4-8. The details of the calculations are given in Appendix B, and here we restrict ourselves to the final answer only.
In the region |x¹ − x¹_TN| ≪ Δ_N the solution is given by Eq. (26), where λ_TN is the characteristic wavelength near the toroidal surface and C_T is a constant defined by Eq. (B3). In the region |x¹ − x¹_PN| ≪ Δ_N the solution can be written in the integral form of Eq. (27) (Leonovich and Mazur, 1993), where λ_PN is the characteristic wavelength near the poloidal surface. In the region x¹_PN < x¹ < x¹_TN, where the WKB approximation is applicable, the solution is given by Eq. (29), where C_W is a constant defined by Eq. (B4) and k₁² is the radial component of the wave vector squared. The potential, equal to T_N R_N and determined by Eqs. (26)-(29), is plotted in Fig. 11b. We emphasize once again that the functions T_N and P_N introduced in Sect. 3 and used in many other publications do not on their own describe the wave structure in the magnetosphere, as they are not solutions of the wave Eq. (12). Let us discuss the main features of this solution. As in the case ν_N ≪ 1, the wave field in the case ν_N ≫ 1 has a logarithmic singularity on the toroidal surface, which is also regularized by taking into account the finite ionospheric conductivity. However, the factor in front of the logarithm differs from the one in the case ν_N ≪ 1 (cf. Eq. 23). Besides, in that case the wave potential was a monotonic function of x¹ on both sides of the resonance surface (see Fig. 11a), whereas in the case ν_N ≫ 1 it oscillates in the region between the surfaces x¹_TN and x¹_PN, including in the region of toroidal polarization of the mode, as is clearly seen from the asymptotic representation (Eq. B1) given in Appendix B, as well as from Eq. (30).
As is evident, k₁ is a function of the wave frequency ω, i.e. the field line curvature also leads, along with the polarization splitting of the spectrum, to the appearance of Alfvén wave dispersion across magnetic shells. The wave's transparent region (i.e. the region where k₁² > 0) lies between the toroidal and poloidal points. This solution describes a wave whose phase velocity is directed from the poloidal to the toroidal surface. The wave's group velocity is determined from the relation (32). As is apparent, v¹_gN > 0, i.e. the wave energy is also transported from the poloidal to the toroidal surface. By the order of magnitude the group velocity is much less than the Alfvén velocity, and on the poloidal and toroidal surfaces it becomes zero.
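As an illustrative numerical sketch (arbitrary numbers, and assuming the schematic form of Eq. 19 given earlier), the following fragment shows that the assumed WKB radial wavenumber is real only between the poloidal and toroidal surfaces, vanishing at the former and diverging at the latter:

```python
import numpy as np

# Illustrative linear profiles of the squared eigenfrequencies near the wave region
# (all numbers are arbitrary; x is the radial coordinate in units of a).
x = np.linspace(0.0, 1.0, 2001)
omega2 = 1.0                       # squared wave frequency (arbitrary units)
Omega_T2 = 1.2 - 0.5 * x           # toroidal eigenfrequency squared, decreasing outward
Omega_P2 = 1.1 - 0.5 * x           # poloidal eigenfrequency squared, slightly lower
K_N = 30.0                         # effective azimuthal wavenumber, ~ m/L

# Assumed WKB dispersion relation following the schematic ODE:
#   k1^2 = K_N^2 * (omega^2 - Omega_P^2) / (Omega_T^2 - omega^2)
with np.errstate(divide="ignore", invalid="ignore"):
    k1_sq = K_N**2 * (omega2 - Omega_P2) / (Omega_T2 - omega2)

transparent = k1_sq > 0
x_P = x[np.argmin(np.abs(omega2 - Omega_P2))]   # poloidal surface: k1 -> 0
x_T = x[np.argmin(np.abs(omega2 - Omega_T2))]   # toroidal surface: k1 -> infinity
print("poloidal surface near x =", round(x_P, 3))
print("toroidal surface near x =", round(x_T, 3))
print("transparent region spans x in",
      (round(x[transparent].min(), 3), round(x[transparent].max(), 3)))
```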
If the poloidal surface is farther away from the Earth than the toroidal surface, then the solution coincides qualitatively with the solution for x¹_TN > x¹_PN, with one difference: the phase velocity of the wave is directed from the toroidal to the poloidal surface. Nevertheless, energy is transferred, as before, from the poloidal to the toroidal surface. This is evident from the fact that when Ω_TN < Ω_PN the group velocity is negative.
Thus, we arrive at the following picture. The wave is generated near the poloidal surface and propagates toward the toroidal surface, where it is totally attenuated, transferring its energy to the ionosphere due to the finite conductivity of the latter. Furthermore, the wave is a standing wave along field lines. As the wave propagates, the radial wavelength decreases and the polarization changes from poloidal to toroidal. We can call this phenomenon the transformation of the poloidal mode into a toroidal one. Leonovich and Mazur (1993) were the first to establish this picture for the case of a cold plasma (β = 0). The propagation of Alfvén waves across the L-shells in a finite-β plasma was studied by Safargaleev and Maltsev (1986), Kouznetsov and Lotko (1995), and Klimushkin (1997, 1998a). Besides, Klimushkin et al. (1995) explored the transverse propagation within the approximation β = 0, but with the three-dimensional inhomogeneity of the magnetosphere taken into account.
When ν_N ≫ 1 the mode is confined between the poloidal and toroidal surfaces, i.e. its scale of localization is determined by the field line curvature. This contrasts with the case ν_N ≪ 1, when the scale of localization of the wave is determined by the mode dissipation at the ionosphere. It is of interest to consider the situation where ν_N ≫ 1 but the damping scale is comparable with Δ_N, i.e. the scales of localization determined by the curvature and by the attenuation compete with each other. It is easy to see that in this case the attenuation at the ionosphere is so strong that, while propagating across field lines, the mode is dissipated within a small distance from the poloidal surface without reaching the toroidal surface (Klimushkin, 2000).
5.3 Waves with m ≫ 1 in the range of extreme values of the function Ω_PN(x¹): localized poloidal modes

In some magnetospheric regions the mode is bounded on either side by poloidal surfaces. These are magnetic shells near minima of the function Ω_PN(x¹), if Ω_PN < Ω_TN holds there, and regions near maxima of this function, if the inverse inequality holds there. The cavity between two poloidal surfaces will henceforth be referred to as the Alfvén resonator. At zero pressure the resonator can lie on the inner plasmapause edge only. Finite pressure in models I and II leads to the elimination of this resonator, because the poloidal frequency becomes larger than the toroidal frequency; instead, there arises a resonator on the outer edge of the plasmapause. In model II the resonator is produced in the westward current region. In model III the resonator arises inside the plasmasphere, in the eastward current region; the westward current in model III, which happens to coincide with the plasmapause, leads to a deepening of the resonator on the inner edge of the plasmapause (the situation is even possible where Ω²_PN < 0 in this model, and this will be discussed below). Note that the appearance of cavities in the region of currents requires a rather stringent selection of equilibrium conditions, unlike the cavities in the plasmapause region.
We now derive the equation describing the radial structure of the mode within the resonator near the extremum of the function Ω_PN(x¹); we designate this extreme value by Ω₀. For definiteness, we consider the resonator on the outer edge of the plasmapause, where a parabolic representation of Ω²_PN(x¹) can be used, in which the quantity l defines the characteristic width of the resonator and the coordinate x¹ is measured from the point of the extremum. The coordinates of the poloidal surfaces that bound the mode within the resonator are b = ±l [(ω² − Ω₀²)/Ω₀²]^{1/2}. Within the resonator the toroidal frequency can be considered approximately constant if |ω² − Ω₀²| ≪ |Ω₀² − Ω²_TN|. We introduce a new variable ξ = x¹/λ_RN, where λ_RN is the characteristic wavelength in the resonator defined by Eq. (34). Equation (19) then takes the form (35), in which the designation σ = b²/λ²_RN is introduced. It is an easy matter to show that this equation defines the structure of the mode within the resonator in the general case, and not only on the outer edge of the plasmapause.
In contrast to the situations considered in the two previous subsections, this equation has a solution that satisfies the boundary condition (20) even without a source, q_N = 0. In this case Eq. (35) has the same form as one of the best-known equations of physics, the Schrödinger equation for the harmonic oscillator. As is known, the existence of such a solution requires that the parameter σ be quantized, σ = 2n + 1, where n = 0, 1, 2, ... is an integer. From this follows the quantization condition for the wave frequency, ω_n² = Ω₀² [1 ± (2n + 1) λ²_RN/l²] (36), where the "−" sign refers to the case where the resonator is localized near a maximum of the function Ω_PN(x¹), and the "+" sign corresponds to the opposite case. The solution of Eq. (35) is expressed in terms of the Hermite polynomials H_n: R_N ∝ H_n(ξ) exp(−ξ²/2) (37). This solution describes the standing wave confined within the transparent region between the poloidal surfaces (Fig. 11c). When the right-hand side is nonzero, q_N ≠ 0, Eq. (35) has a solution bounded as x¹ → ±∞ at any frequency. However, the amplitude of the solution of the inhomogeneous equation is still maximal when ω ≈ ω_n, and under this condition the function (37) is an approximate solution of Eq. (35). We do not give here any mathematical details, as they may be found, for example, in the paper of Leonovich and Mazur (1995).
Since H₀ = 1, when n = 0 the wave is described by the Gaussian function with half-width b. This is a very important result, because in many observed cases of poloidal pulsations the amplitude profile is indeed close to a Gaussian (e.g. Chisham et al., 1997; Cramm et al., 2000). Note that this result is a solution of the wave equation and is not a consequence of any assumptions about the initial conditions. The derivative of the function E₂(x¹), which is proportional to −im times the wave potential, has a different sign on the two slopes of the Gaussian; therefore, in accordance with formula (25), the transition through the region of localization of the mode must be accompanied by a change in the wave phase by 180°. This phenomenon has already been pointed out when considering the example of the localized resonance (Sect. 5.1). In this case, however, the mode cannot be toroidally polarized. Indeed, it is an easy matter to check that the inequality |E₁/√g₁| ≫ |E₂/√g₂| would have as a consequence an inequality indicating that the resonator is so shallow that no harmonic is accommodated in it. On the contrary, the mode in the resonator is poloidal if the resonator is deep enough. The question of the conditions of the poloidal and toroidal polarization of the wave will be discussed in greater detail in the next section.
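As an illustrative sketch (assumed notation and arbitrary normalization, not taken from the paper), the oscillator-type eigenfunctions just described can be evaluated numerically; the n = 0 mode is the Gaussian referred to above, and higher n add nodes inside the resonator.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def resonator_mode(xi, n):
    """Hermite-Gaussian eigenfunction H_n(xi) * exp(-xi**2 / 2) of the
    oscillator-type equation (35); normalization is left arbitrary here."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return hermval(xi, coeffs) * np.exp(-xi**2 / 2.0)

xi = np.linspace(-4.0, 4.0, 800)          # xi = x^1 / lambda_RN (dimensionless)
fundamental = resonator_mode(xi, 0)       # a pure Gaussian, half-width set by the resonator
first_overtone = resonator_mode(xi, 1)    # one node inside the resonator

# The quantization sigma = b^2 / lambda_RN^2 = 2n + 1 ties the resonator half-width b
# to the eigenfrequency omega_n; here we only check the shapes of the modes.
print("n = 0 mode has", np.sum(np.abs(np.diff(np.sign(fundamental))) > 0), "sign changes")
print("n = 1 mode has", np.sum(np.abs(np.diff(np.sign(first_overtone))) > 0), "sign change")
```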
In the plasmapause region a situation is also possible where the transparent region is bounded on either side by two toroidal surfaces (see, for example, Fig. 4b). This may produce the impression that the solution of the wave equations in this case describes a double resonance, with two maxima of the amplitude lying at x¹ = x¹_TN interconnected by a continuous transparent region. However, such a solution does not satisfy the natural boundary conditions of decay in the opaque region. Indeed, as has been pointed out in the preceding subsection, the solution bounded in the opaque region which describes a resonance singularity has the form of a wave arriving at the singular turning point, and there cannot be a wave arriving at two turning points simultaneously. In fact, the solution in this region does not contain any resonance singularities and, in essence, describes the noise background of hydromagnetic oscillations of the magnetosphere (Klimushkin, 1998b).
Previous studies of the resonator in the plasmapause region were carried out by Leonovich and Mazur (1990, 1995), Vetoulis and Chen (1996), Klimushkin (1998b), and Denton and Vetoulis (1998). The possibility of the existence of a resonator on the current inside the plasmasphere was shown by Klimushkin (1998b).
The conditions of the poloidal and toroidal polarization of Alfvén waves
The poloidality condition of the Alfvén mode in a general form implies that the radial wavelength λ_r far exceeds the azimuthal wavelength λ_a. For the toroidal polarization, the inverse inequality, λ_r ≪ λ_a, must hold. This may produce the impression that the toroidal and poloidal polarizations are equivalent. This is in fact not the case.
If somewhere in the magnetosphere the equality (13) holds, i.e. at some wave frequency there is a toroidal surface, then on this magnetic shell there is a wave field singularity, in the vicinity of which the mode has a toroidal polarization. But the existence of a solution of Eq. (14) is only necessary, not sufficient, for the poloidal polarization of the mode. As an example, we consider the case of the wave traveling from the poloidal to the toroidal surface. In this case the radial wavelength near the poloidal surface is given by formula (28). Since, by the order of magnitude, λ_a ∼ a/m, we avail ourselves of the estimate (15) to obtain the poloidality condition of the mode, λ_r ≫ λ_a, in the form of condition (38), which coincides with the condition of applicability of the WKB approximation in the radial coordinate (ν_N ≫ 1). If this approximation is applicable and if there exists a solution of Eq. (14), the Alfvén wave must have a poloidal polarization in a part of its transparent region, near the poloidal surface. If, however, the inequality does not hold, then even near the surface x¹_PN the mode is not poloidal. Hence, more stringent conditions are required for the poloidal polarization of the wave than for the toroidal polarization.
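Combining the order-of-magnitude relations quoted earlier (K_N ∼ m/a and Δ_N ∼ a|Ω²_TN − Ω²_PN|/Ω²_TN), the poloidality condition ν_N ≫ 1 can be written, under our assumed notation, roughly as

```latex
\nu_N \;\sim\; K_N\,\Delta_N \;\sim\;
 m\,\frac{\bigl|\Omega_{TN}^{2}-\Omega_{PN}^{2}\bigr|}{\Omega_{TN}^{2}} \;\gg\; 1 .
```

This is presumably the content of condition (38): a relative polarization splitting of about 1% would then require m ≫ 100, while a splitting of 10-20% admits m of a few tens, in line with the numbers discussed in the observational part of the paper.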
This gives us a clue to an understanding of the situation when ν N ∼ 1, where it is impossible to develop approximate methods for solving Eq. (19). In this case there also occurs an Alfvén resonance accompanied by the toroidal polarization of the mode, and since the poloidality condition does not hold anywhere, the mode in the region of its existence has predominantly a toroidal polarization with some addition of the poloidal component in some places where the wave amplitude is substantially smaller. This is also confirmed by numerical calculations performed by Leonovich and Mazur (1997). Thus, we can conclude that when ν N ≤ 1 the mode has predominantly a toroidal polarization throughout the region of its existence.
Generally, plasma pressure contributes to the poloidal polarization of the mode, as it leads to an increase in the polarization splitting of the spectrum and to an increase in the width of the transparent region. Moreover, in the case of finite pressure the poloidality condition can be satisfied in some regions of the magnetosphere even for m values that are not very large (m ∼ 10, say). Alfvén waves with such m values can still be generated through the interaction with FMS, i.e. the resonant excitation of Alfvén oscillations by magnetosound can also give rise to poloidally polarized waves. Such a possibility was first pointed out by Kouznetsov and Lotko (1995), who considered the possibility that the poloidal surface can lie between the toroidal surface and the transparent region of FMS (they called the wave propagating across magnetic shells the "Alfvén buoyancy wave"). But the widest transparent region, accommodating even low-m waves, is produced in the case where plasma pressure causes the poloidal surface to be displaced significantly toward the Earth. Just such a situation arises in model III when N = 1 (see Fig. 5). The width of the transparent region in this case can reach several terrestrial radii (see Fig. 8). Note, by the way, that in the case of very wide transparent regions our results should be regarded with caution, because when deriving Eq. (19) it was assumed that the transparent region was significantly narrower than the magnetosphere. Leonovich and Mazur's (1993) two-dimensional WKB approximation is better suited for investigating wide transparent regions.
In the case where the mode is confined within the resonator, the general poloidality condition λ_r ≫ λ_a is transformed in a somewhat different manner: λ_RN ≫ L/m, where the characteristic wavelength in the resonator, λ_RN, is defined by the equality (34). After some arithmetic, from this we obtain the corresponding condition (39). The maximum width of the resonator, b_max, follows from the same representation. By combining the two last formulas, we obtain the poloidality condition in the resonator at the plasmapause in the form (40): the resonator must accommodate many azimuthal wavelengths. The same poloidality condition can also be obtained for the resonator in the ring current region. As is evident, the condition ν_N ≫ 1 is also satisfied for the poloidal mode in the resonator, if the width of the transparent region is understood to be the maximum width of the resonator. Again, finite pressure favors the fulfilment of the poloidality condition: rather wide cavities appear on the outer edge of the plasmapause in models I and III (b_max ∼ 1 R_E) for all N that have been studied, and in the ring current region in models II and III (b_max ∼ 2 R_E and ∼ 0.5 R_E, respectively) when N = 1.
It is also important to remark that the higher the harmonic number N, the smaller the relative polarization splitting of the spectrum |Ω²_TN − Ω²_PN|/Ω²_TN, and the more difficult it is to satisfy the poloidality condition. This is obvious from our Figs. 4 and 5, showing how much the difference between the poloidal and toroidal frequencies decreases when passing from N = 1 to N = 2; at even higher N this difference is still smaller. Hence, the lower the wave frequency, the smaller the values of the azimuthal wave number at which it can be poloidal.
Noteworthy is also the fact that for large and small ν_N, at a given value of the source q, the wave amplitude near the resonance (toroidal) surface is different, because the factors in front of the resonance logarithm are different in these two cases. As is seen from Eq. (23), at small ν_N the wave amplitude is independent of ν_N, whereas when ν_N ≫ 1 it decreases with increasing ν_N (Eq. 31). From this it is easy to find that in the case of large ν_N the wave amplitude near the resonance surface is smaller by a factor of ν_N^{2/3} ≫ 1 than in the case of small ν_N. Thus, if magnetospheric conditions are conducive to the existence of poloidally polarized waves, at the same time they make the toroidally polarized waves less clearly pronounced.
On the observation of toroidal and poloidal Alfvén waves in the magnetosphere
We now compare the picture outlined above with the experiment. However, we are not yet fully prepared for this endeavor, because the currently available theories still deal with overly simplified models. We still do not fully know what changes can be introduced into this picture by the azimuthal inhomogeneity of the magnetosphere and the associated field-aligned currents, the wide-band character and the possible narrow localization of oscillation sources, the interaction of waves with particles drifting in the magnetosphere, and the active role of the ionosphere. Work in this direction is underway, as testified by recently published papers addressing these issues (Salat and Tataronis, 1999; Klimushkin et al., 1995; Mann et al., 1997; Leonovich, 2000; Antonova et al., 2000; Vetoulis and Chen, 1996; Klimushkin, 2000; Glassmeier et al., 1999a; Leonovich and Mazur, 1996). However, the creation of a unified realistic model of MHD waves in the magnetosphere is still a long way off. Observations and experiments can play a leading role in such efforts, and at the present stage we need at least to understand whether the picture available to us has anything to do with the information provided by experiments.
Ground-based observations have repeatedly recorded nearly monochromatic toroidal Alfvén waves in the Pc 4-5 range that show the characteristic properties of a localized resonance described in Sect. 5.1: a strong localization of the wave across L-shells, toroidal polarization, and a phase change by 180° at the passage across the resonance peak (Samson et al., 1971; Walker et al., 1979; see also Fenrich and Samson, 1997, and references therein). On the other hand, it was pointed out earlier (Glassmeier et al., 1999b) that these features of localized resonances were never identified in satellite observations, in spite of the vast occurrence of toroidal pulsations. A likely explanation for this paradox would be to assume that most of the monochromatic ULF waves in the magnetosphere in the Pc 5 range have large azimuthal wave numbers and ν_N ≫ 1. In this case the oscillations are no longer a localized resonance, and the behavior of their phase is much more complicated than in the case ν_N ≤ 1, which corresponds to a localized resonance: when ν_N ≫ 1 the Alfvén wave travels across magnetic shells, and its radial wavelength becomes very small close to the toroidal surface. If this is indeed the case, then the chance of capturing a localized resonance in the magnetosphere is relatively poor. Further, the atmosphere comes into play, which has the role of a filter transmitting to the ground only waves with a sufficiently smooth dependence of the field on the transverse coordinates; thus, the waves with m ≫ 1 almost do not penetrate through the atmosphere (e.g. Hughes, 1974; Glassmeier and Stellmacher, 2000). For that reason, observations from the ground provide a distorted picture of the wave processes in the magnetosphere. Because of the influence of the ionosphere, only localized resonances are able to penetrate to the ground, and they are the ones that are observed by radars and magnetometers. In the absence of a detailed theory that would take into account the factors mentioned in the preceding paragraph, this hypothesis must be regarded only as a preliminary explanation of the paradox. But it clearly demonstrates how important it is to take into account the entire body of theoretical knowledge when interpreting experimental data.
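A rough, purely illustrative estimate of this filtering effect (assumed numbers, not from the paper): the ground signal of a wave with transverse wavenumber k⊥ is attenuated across the neutral atmosphere roughly as exp(−k⊥ d), so high-m waves are strongly suppressed.

```python
import numpy as np

R_E = 6371.0          # Earth radius, km
lat_deg = 65.0        # illustrative geomagnetic latitude of the ground station
d = 100.0             # assumed effective height of the screening atmospheric layer, km

# Horizontal azimuthal wavenumber at the ground for azimuthal wave number m:
#   k_perp ~ m / (R_E * cos(latitude))   (a crude mapping, used only for illustration)
lat = np.deg2rad(lat_deg)
for m in (5, 20, 50, 100):
    k_perp = m / (R_E * np.cos(lat))          # 1/km
    attenuation = np.exp(-k_perp * d)         # rough screening factor exp(-k_perp * d)
    print(f"m = {m:4d}: ground amplitude reduced to ~{attenuation:.2f} of its ionospheric value")
```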
We now turn our attention to poloidal pulsations. Space experiments show that they occur much more rarely than toroidal pulsations. For instance, according to the AMPTE/CCE data (Anderson et al., 1990), about five toroidal pulsations correspond to one poloidal pulsation. This is consistent with our conclusion that more stringent conditions are required for the poloidal polarization of the wave than for the toroidal one. Furthermore, the occurrence rate of radially polarized pulsations decreases with increasing harmonic number N. This is readily illustrated by the dynamic spectrograms obtained by Takahashi et al. (1984) from the ATS 6 and SMS 1 and 2 satellite data: when N ≥ 3 the azimuthal component of the magnetic field of the pulsations is distinguished much more clearly than the radial component. Within the framework of our theory this fact is readily explained, because with increasing harmonic number the poloidality condition is satisfied ever less easily. On the other hand, the second harmonic of radially polarized waves with azimuthal wave numbers from 20 to 150 is very often recorded in the magnetosphere. In a cold plasma the left-hand side of the inequality involved in the poloidality condition (38) for N = 2 makes up no more than 1%; therefore, this condition could only be satisfied for pulsations with unrealistically large azimuthal wave numbers, m ≫ 100. Taking finite pressure into account saves the situation, since in this case the left-hand side readily reaches values of 10-20%, and the waves with N = 2 and m ∼ 20-150 may well have a poloidal polarization. An indication of the important role of finite pressure in the formation of these waves is also provided by the substantial longitudinal component of the magnetic field observed in a number of poloidal pulsations (e.g. Hughes et al., 1979), as it can reach marked values for Alfvén waves only in the case of finite β (see Eq. 11). Singer et al. (1982) and Engebretson et al. (1992) considered radially polarized Pc 4 pulsation events which are strongly localized across magnetic shells. One would expect that these pulsations were excitations of the Alfvén resonator described in Sect. 5.3. Cramm et al. (2000) explored a poloidal Pc 4 pulsation observed by the Equator-S satellite. Their analysis showed that this pulsation was nearly monochromatic and very narrowly localized across magnetic shells (a Gaussian with a half-width of about 0.1 R_E), and that there was a phase change by 180° at the transition through the region of localization. Such behavior is characteristic of poloidal waves confined within the resonator. The authors made an estimate of the azimuthal wave number using reasoning similar to ours in Sect. 6.1 and obtained the value m ≈ 150. Of course, it is necessary to understand where this resonator was localized in each particular case, but the relevant information is not always available.
One of the resonators must be localized on the outer boundary of the plasmapause. In all likelihood, radially polarized waves confined within this resonator represent a reasonably widespread phenomenon. Singer et al. (1982) reported ISEE-1, 2 satellite observations of poloidal waves which were strongly localized across the L-shells in this region. Takahashi and Anderson (1992), using AMPTE/CCE data, showed the presence of a marked increase in the intensity of poloidal waves near the plasmapause. In all of these cases the scale of localization across the magnetic shells was ≤ 1 R_E. This is in good agreement with the assumption that in these cases the poloidal waves were eigenmodes of the resonator in the plasmapause region.
It seems likely that the same can also be said of one of the most interesting varieties of poloidally polarized waves, giant pulsations (Pg). These nearly monochromatic waves are usually observed during quiet geomagnetic conditions, when the plasmapause lies somewhere at L ∼ 5.5-6, and giant pulsations are recorded just there; Rostoker et al. (1979) were the first to notice this. The assumption that Pg are resonator modes on the outer edge of the plasmapause is consistent with the strong localization of Pg across magnetic shells accompanied by a phase change of 180° (Green, 1979; Rostoker et al., 1979; Glassmeier, 1980), and with the amplitude distribution in L described by a Gaussian function with a half-width of about 1 R_E (Chisham et al., 1997), which is indeed expected for the fundamental radial harmonic within the resonator on the outer edge of the plasmapause. The poloidality condition (40) for the values of m ∼ 20 observed in Pg is satisfied in the models which we have studied, though without a large margin. On the other hand, it seems possible to rule out the possibility that Pg are Alfvén waves traveling across magnetic shells: satellite observations do not show any indications of the transformation of poloidal Pg waves into toroidal waves (Glassmeier et al., 1999a). Here it is very important to make reference to satellite experiments, because such a transformation is also impossible to notice from the ground: at the same m, the transverse component of the wave vector k⊥ = √(k₁²/g₁ + m²/g₂) in poloidal waves (k₁ → 0) is much smaller than that in toroidal waves (k₁ → ∞); therefore, when ν_N ≫ 1, only the oscillations near the poloidal surface have a chance to be transmitted through the ionosphere (Leonovich and Mazur, 1996).
On the other hand, Green (1985) detected several Pg events deep inside the plasmasphere. The geomagnetic conditions under which these pulsations were observed were characterized by the presence of a significant ring current inside the plasmasphere, exactly as in the case of our model III, and this model features a resonator inside the plasmasphere. Thus, we can conclude that the events observed by Green (1985) were eigenmodes of this resonator.
At the same time, the L-dependence of the averaged spectra of poloidal oscillations observed by AMPTE/CCE (Takahashi and Anderson, 1992) shows that there also exists a rather clearly pronounced population of radially polarized waves not associated with regions of poloidal frequency extrema. These waves ought to be traveling across magnetic shells. It seems likely that the radially polarized waves that are standing waves across magnetic shells are generally more accessible to observation than the traveling ones. At present, the concept is widely held that high-energy particles drifting in the magnetosphere supply energy to the observed poloidal pulsations via bounce-drift resonance. Indeed, in some cases unstable distribution functions of particles associated with poloidal waves were observed (Hughes et al., 1978, 1979; Glassmeier et al., 1999a; Wright et al., 2001); there are also a number of indirect arguments in favor of this concept (Takahashi et al., 1990; Fenrich and Samson, 1997; Ozeke and Mann, 2001). It can be suggested that, were it not for the high-energy particles, the waves with large m would simply not have had a sufficiently large amplitude to be observed. But in the case of azimuthally small-scale waves propagating across magnetic shells, the most enhanced waves would be the ones near the toroidal surface, because the waves accumulate particle energy in the course of their propagation from the poloidal to the toroidal surface (Klimushkin, 2000), although the build-up rate of the wave energy decreases as the wave detaches itself from the poloidal surface. Only when high-m waves are confined within the resonator is the transfer of energy from particles able to enhance the poloidal pulsations.
Conclusion
In conclusion, we briefly restate the logic of our paper and describe the main results. Our principal intent was to study the conditions where the Alfvén waves in the magnetosphere can be toroidally or radially polarized. Since the toroidal (poloidal) polarization of Alfvén waves implies that the radial wavelength of the wave λ r is significantly smaller (larger) than the azimuthal wavelength λ a , it is impossible to study the polarization without studying the structure of the wave field across magnetic shells. To do this, we made use of the system of MHD equations by writing them for plasma of finite but small pressure residing in a curved magnetic field.
As a consequence of this system, we obtained Eq. (12), the basic equation of our paper. It describes the Alfvén wave excited by the magnetosound and, perhaps, by some other sources. This equation defines both the transverse and the longitudinal structure of the wave. The latter is described in the limit λ_r ≪ λ_a by the toroidal longitudinal function, and otherwise by the poloidal function. Using numerical calculations we found that for N = 1-3 (the longitudinal harmonic numbers of prime interest to us) these functions differ relatively little from one another. That permitted us to separate the longitudinal and transverse structures by the method of successive approximations. Thus, we obtained Eq. (19), describing the structure of the wave across magnetic shells. The solution of this equation allowed us to determine both the spatial structure of the wave and the conditions of toroidal and poloidal polarization.
In order for the wave to be toroidally polarized on the magnetic shell with the radial coordinate x¹, it is necessary and sufficient that the condition ω = Ω_TN be satisfied, where Ω_TN is the toroidal eigenfrequency on the given shell. A similar condition, ω = Ω_PN, written for the poloidal frequency, is not a sufficient condition of poloidal polarization: it is also necessary that condition (38) be satisfied, which implies that many azimuthal wavelengths are accommodated between the toroidal and poloidal surfaces. If this condition is not satisfied, then the mode is toroidally polarized throughout the region of its existence. Furthermore, it is sharply localized across magnetic shells, having a singularity on the toroidal surface (regularized by taking into account the ionospheric dissipation). If the poloidality condition is satisfied, then the wave is poloidally polarized in a part of its transparent region. It propagates slowly across the magnetic shells and changes its polarization from poloidal to toroidal. Finally, there exist regions in which the poloidal frequency Ω_PN reaches its extreme values. The poloidality condition for these regions is written as Eq. (39). In this case the wave is a standing wave across the magnetic shells, having a poloidal polarization throughout the region of its existence. The fundamental (most easily excited) harmonic of this resonator is described by a Gaussian function.
It is progressively easier to satisfy the poloidality condition with increasing difference between the toroidal and poloidal frequencies (the polarization splitting of the spectrum) and with increasing azimuthal wave number m. The former quantity is determined by the geospace plasma and magnetic field parameters and by the longitudinal harmonic number N. We studied three models of the magnetosphere: (I) a low level of disturbance when a significant time has elapsed after the storm; (II) a high level of disturbance, with a well-developed ring current; and (III) a low level of disturbance, but when a short time has elapsed after the storm (a significant ring current inside the plasmasphere).
The main conclusion drawn by considering these models implies that an increasing plasma pressure contributes to satisfying the poloidality condition at fixed m. It was ascertained that with β actually observed in the magnetosphere, this condition is satisfied for poloidal Alfvén waves with N = 2 and m ∼ 50 − 100 that are routinely observed in the magnetosphere. The presence of a special criterion of poloidality explains the scarcity of poloidal pulsations compared to toroidal pulsations, especially when N > 2.
A further important result is the inferred possible existence of the resonator for poloidal waves in the plasmapause region. We adduced arguments in support of the fact that oscillations that are modes of this resonator are indeed observed. Possibly, they include, among others, giant pulsations (Pg).
At the same time our conclusion about the agreement of theory and observations is a preliminary one, because there are a large number of factors which are neglected by our theory and which can have a substantial influence on the behavior of MHD waves in the magnetosphere. Specifically, they include the azimuthal inhomogeneity of the magnetosphere, field-aligned currents, the non-stationarity of the oscillations, the narrow localization of their sources, the interaction of waves with particles drifting in the magnetosphere, and the active role of the ionosphere. Hence, further efforts are needed, in order to create the more realistic models of ULF waves in the magnetosphere.
Appendix A Definitions and basic properties of toroidal and poloidal modes
Let T_N and P_N denote the eigenfunctions of the toroidal and poloidal operators satisfying the boundary conditions T_N = P_N = 0 at x³ = x³_±, where x³_± stands for the intersection points of a field line with the upper ionospheric boundary. The toroidal and poloidal functions are conveniently normalized according to Eq. (A2), and they satisfy the eigenvalue equations (A3) and (A4). The difference between the toroidal and poloidal eigenfrequencies is often referred to as the polarization splitting of the Alfvén oscillation spectrum. To find an analytical expression for it, we multiply Eq. (A3) by P_N and Eq. (A4) by T_N, subtract one from the other, and integrate along the field line. After integration by parts, we obtain the difference between the squares of these eigenfrequencies, Eq. (A5); it contains a term proportional to η P_N T_N and a term with P_N T_N (e_∥ · ∇) ln √(g₂/g₁), integrated along the field line with the weight √g₃. The polarization splitting of the spectrum is caused by the presence of the field line curvature. This is obvious if finite plasma pressure is taken into account, because the first term of the expression (A5), which takes this factor into account, contains explicitly the field line curvature R⁻¹ according to formula (9). The situation is somewhat more complicated in cold plasma, where the second term of this formula is responsible for the splitting of the spectrum. The quantity √(g₂/g₁) involved in the formula has a simple geometrical meaning. If we take a flux tube with the cross section dx¹ = 1, dx² = 1, then the physical dimensions in these directions will be, respectively, √g₁ and √g₂. Thus, the quantity √(g₂/g₁) describes the variation of the ratio of these physical dimensions along the tube, i.e. the change of the form of its cross section (Leonovich and Mazur, 1990). One may well imagine magnetic field configurations in which this quantity varies even along straight field lines; in such configurations the field lines must become increasingly sparser with the advance along them. Such configurations, however, are unlikely to be relevant to magnetospheric physics, where it is assumed that field lines become sparser when leaving one magnetic flux tube and denser when entering another. Obviously, in this case the derivative (e_∥ · ∇) √(g₂/g₁) can be nonzero only when field lines are curved. Moreover, in this case the curvature is only a necessary rather than a sufficient condition of the polarization splitting of the spectrum. Indeed, it can be shown (Krylov et al., 1981; Krylov and Lifshitz, 1984) that the following relation holds: (e_∥ · ∇) ln √(g₂/g₁) = K₊ − K₋, where K₊ and K₋ are the maximum and minimum curvatures of the surfaces orthogonal to the field lines (i.e. of the x³ = const surfaces). As an example of a model in which there is curvature but no polarization splitting, we consider the situation where the magnetic shells are semicylinders and the field lines are circles. The surfaces x³ = const are planes in this model, K₊ − K₋ = 0, and, hence, the toroidal and poloidal eigenfrequencies coincide. For further discussion of this issue, see the paper of Leonovich and Mazur (1990). The final conclusion from this discussion is thus: in geomagnetic field models the polarization splitting of the spectrum is possible only in the case of curved field lines.
To make a rough estimate of the distance Δ_N between the toroidal and poloidal surfaces, we assume that it is small compared to the typical size of the magnetosphere. We can then avail ourselves of the expansions (A6) and (A7). Because the difference between the toroidal and poloidal eigenfrequencies is rather small, Ω_TN − Ω_PN ≪ Ω_TN, Ω_PN, and Ω_TN ∼ ω in the mode localization region, we then obtain from Eqs. (A6) and (A7) the ordering (15).
Appendix B The asymptotic solution of the radial structure equation when ν_N ≫ 1
The interval between the toroidal and poloidal surfaces can be broken up into three regions: near the toroidal surface (|x¹ − x¹_TN| ≪ Δ_N), near the poloidal surface (|x¹ − x¹_PN| ≪ Δ_N), and sufficiently far away from these surfaces, where the WKB approximation is applicable. Here we consider only the situation where the toroidal frequency is larger than the poloidal frequency; in this case x¹_TN > x¹_PN. In the region |x¹ − x¹_TN| ≪ Δ_N the expansion (A6) can be used. Then Eq. (19), through an appropriate substitution, is transformed into the zero-order Bessel equation. The solution of this equation, bounded when x¹ > x¹_TN, is Eq. (26), where C_T is an arbitrary constant yet to be determined. We now write out the asymptotic representation of the solution (26) for x¹ < x¹_TN, (x¹_TN − x¹)/λ_TN ≫ 1 (Eq. B1). This expression describes a wave propagating toward increasing x¹. Near the poloidal surface, when |x¹ − x¹_PN| ≪ Δ_N, we can make use of the linear expansion (A7). We then introduce a new variable, and Eq. (19) is brought to the inhomogeneous Airy equation. We need to find such a solution of this equation that is bounded when x¹ < x¹_PN and represents a wave propagating toward increasing x¹ (in order that it can be matched with the solution near x¹_TN). We give this solution in the integral form (Eq. 27) (see Leonovich and Mazur, 1993). The asymptotic representation of this solution when z_P > 0, z_P ≫ 1 is Eq. (B2). When x¹_PN < x¹ < x¹_TN, in the region where the WKB approximation is applicable, the solution is given by Eq. (29). The asymptotic representations (Eqs. B1, B2, and 29) are matched in the regions of their common applicability, thus defining the constants C_T and C_W (Eqs. B3 and B4); they are proportional to π q_N a Δ_N and involve the factor ν_N^{−5/3} ω^{−2} e^{iπ/4}.
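The matching relies on standard large-argument asymptotics of the Bessel and Airy functions. The following purely illustrative check (not from the paper) confirms numerically the forms J₀(z) ≈ √(2/πz) cos(z − π/4) and Ai(−z) ≈ sin((2/3)z^{3/2} + π/4)/(√π z^{1/4}) that underlie such matched expansions:

```python
import numpy as np
from scipy.special import j0, airy

z = np.linspace(5.0, 40.0, 8)

# Large-argument asymptotics used in matched asymptotic expansions:
#   J_0(z)  ~ sqrt(2/(pi z)) * cos(z - pi/4)
#   Ai(-z)  ~ sin(2/3 z^{3/2} + pi/4) / (sqrt(pi) z^{1/4})
j0_exact = j0(z)
j0_asym = np.sqrt(2.0 / (np.pi * z)) * np.cos(z - np.pi / 4.0)

ai_exact = airy(-z)[0]
ai_asym = np.sin(2.0 / 3.0 * z**1.5 + np.pi / 4.0) / (np.sqrt(np.pi) * z**0.25)

print("max |J0 - asymptote| :", np.max(np.abs(j0_exact - j0_asym)))
print("max |Ai - asymptote| :", np.max(np.abs(ai_exact - ai_asym)))
```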
Noteworthy is the importance of taking into account the right-hand side of Eq. (19), the source of oscillations q_N. Without the source, this equation would have no solutions at all that are bounded in the opaque region in accordance with Eq. (20), because it would be impossible to match the solutions near the poloidal and toroidal surfaces.